All posts in web

One thing I believe in is constant change and constant learning, ideally one thing per day. Sometimes it’s learning how to cook the best eggs Benedict ever for breakfast; sometimes it’s helping out a friend with a special request.

Today a colleague asked me if I could help extract data from a web site. As an architect, the first thing I look for in the “code” is a clean separation between presentation and data. Obviously, I did not find that, which made me realize that frameworks which render HTML mixed in with data are bad, bad, bad. Why can’t everything follow MVVM with some sort of binding?

Anyway, we needed a solution, and what we whipped up was screen scraping.

My first attempt was to write a small HTML page that loads jQuery, makes an Ajax call to the web page we needed data from, and then extracts the data from its DOM… It was easy to write, but I was met with an error: CORS headers not found for file://dlg/extractData.html. GRRRRR

New strategy !

I opened Chrome, searched for screen-scraping extensions, and behold, I found this.

An extension that lets me navigate to any page, look up how it is built, understand its usage of CSS selectors, and voilà. Any page that reuses the same CSS selector to represent repeating data (as in a list) can be extracted to a JSON or CSV file.
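Conceptually, the extraction is simple: collect every element that matches the repeating selector and write the values out as rows. A minimal sketch in plain JavaScript (the row data here is hard-coded to keep the sketch self-contained; in a browser console it would come from document.querySelectorAll on the page’s repeating selector):

```javascript
// Turn a list of extracted rows into CSV text. In a real scrape the rows
// would be built from document.querySelectorAll(".some-repeating-selector");
// here they are simulated as plain objects.
function toCsv(rows) {
  var headers = Object.keys(rows[0]);
  var lines = [headers.join(",")];
  rows.forEach(function (row) {
    lines.push(headers.map(function (h) { return row[h]; }).join(","));
  });
  return lines.join("\n");
}

var rows = [
  { name: "Widget", price: "9.99" },
  { name: "Gadget", price: "19.99" }
];
console.log(toCsv(rows)); // name,price / Widget,9.99 / Gadget,19.99
```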

Well, thanks DL for getting me to learn something new today !

One of the projects I was working on involves taking Office 365 to the next level: take the tools that Microsoft gives us and build on top of them.

Some of the things we do involve extending SharePoint, Excel, Word, and Office with add-ins. It works great. But where do we get all the nice information we show the user?

This is where the Microsoft Graph comes in (http://graph.microsoft.com). It is basically an OData feed that gives you access to a variety of data that represents you. Naturally, you need to send a bearer token to get access, so authenticate first!

You can find the entire “metadata” on the internet itself, all self-describing! There is also some documentation.

You can even try the Graph Explorer, which is a web tool to explore the graph in an interactive way.

Here are a few examples from my own company:

Who am i ?

GET https://graph.microsoft.com/v1.0/me
{
 "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity",
 "id": "",
 "businessPhones": [],
 "displayName": "Erik Renaud",
 "givenName": "Erik",
 "jobTitle": "Principal Architect",
 "mail": "erik.renaud@modelon.net",
 "mobilePhone": "+1 514...",
 "officeLocation": "+1.514...",
 "preferredLanguage": "en-CA",
 "surname": "Renaud",
 "userPrincipalName": "erik.renaud@modelon.net"
}

What about my picture ?

GET https://graph.microsoft.com/v1.0/me/photo/$value

This is a goldmine of information and makes any business tool so much more powerful.

Add an email, find recently used files, show how many unread things are in Teams… It’s all possible!
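As the examples above suggest, every call is just an HTTPS GET with a bearer token attached. A minimal sketch of how such a request is shaped (the token value is a placeholder, and the helper name is mine, not part of any SDK):

```javascript
// Build the pieces of a Microsoft Graph request: the v1.0 endpoint plus an
// Authorization header carrying the bearer token. With a real token, pass
// these to fetch()/XMLHttpRequest/HttpClient of your choice.
function graphRequest(path, accessToken) {
  return {
    url: "https://graph.microsoft.com/v1.0" + path,
    headers: {
      Authorization: "Bearer " + accessToken,
      Accept: "application/json"
    }
  };
}

var me = graphRequest("/me", "<access-token>");
// me.url → "https://graph.microsoft.com/v1.0/me"
```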


For one of the projects I manage, I have two teams; in the end, one implements web services for the other to consume. In production and integration testing, things tend to go well. But when we are debugging or testing the side that consumes the web services, we need something more.

I love to discover new things or new ways of doing things. Enter EasyMock (https://github.com/CyberAgent/node-easymock): a small Node.js web server that returns any file on disk, with extra options in its config.json file.

You install it with:

$ npm install -g easymock

And you start it within your work directory with:

$ easymock
Server running on http://localhost:3000
Listening on port 3000 and 3001
Documentation at: http://localhost:3000/_documentation/
Logs at: http://localhost:3000/_logs/

If you wanted to mock a CurrentUserCount REST web service located at /api/CurrentUserCount, all you need to do is create an “api” directory with a file named “CurrentUserCount_get.json” in it. Here is the result:
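For example, the CurrentUserCount_get.json file might contain (the payload shape is hypothetical; EasyMock simply serves back whatever JSON the file holds):

```json
{
  "currentUserCount": 42
}
```

A GET to http://localhost:3000/api/CurrentUserCount then returns that body.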

There is even a handy automatically created documentation page:

Happy mocking !

I am sitting in a meeting, looking at beautiful Power BI dashboards built on top of a system we built that is going live soon.

The architecture we had planned called for using Power BI, but we didn’t realize just how good it is.

The best part, there is a whole gallery of custom visualisations (https://app.powerbi.com/visuals/) for those times you want to express something that wasn’t included inside the box.

Lastly, how did we get the data into Power BI? Entity Framework + Restier + OData! If the Power BI team is listening, let us do custom data sources!

When I hear the word token, I think of a random string of text that, if I send it to some endpoint, will be traded for something else (e.g. the ability to confirm an account).

For example, if I click on this link in an email:

http://www.reallygoodsite.com/confirmAccount?OKJASDIUERKSDNHKJASHDKAJDH

I would expect to confirm the account represented by the token OKJASDIUERKSDNHKJASHDKAJDH. Some advice here: I would also expect this token to expire if I don’t click on it within a reasonable amount of time, and I would expect to be able to use the token exactly once.

Now, this token is opaque to the user: the user cannot understand what it means. It’s kind of like an ID; it has no real meaning except to represent something in some system.

But what if we wanted the token to represent something, to allow it to convey information from one system to another? That’s what JWTs (JSON Web Tokens) are. Get started with a web helper here.

A JWT is usually three base64url-encoded segments separated by dots, like this:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJodHRwczovL2p3dC1pZHAuZXhhbXBsZS5jb20iLCJzdWIiOiJtYWlsdG86ZXJpa0BleGFtcGxlLmNvbSIsIm5iZiI6MTQ4MDc3NzQzNiwiZXhwIjoxNDgwNzgxMDM2LCJpYXQiOjE0ODA3Nzc0MzYsImp0aSI6ImlkMTIzNDU2IiwidHlwIjoiaHR0cHM6Ly9leGFtcGxlLmNvbS9yZWdpc3RlciJ9.rykcVqxUx-qmV-bbJ79zRAA84tj3eIBJbv-OMx4mUE0

Each segment can be decoded independently. The first carries JSON containing the signing-algorithm information, the second carries JSON with the claims (the actual data), and the last contains a signature so that the authenticity of the token can be verified. The third part can be absent if the JWT is not signed. Like this:

JSON algorithm information

{"alg":"HS256","typ":"JWT"}

JSON claims

{"iss":"https://jwt-idp.example.com","sub":"mailto:erik@example.com","nbf":1480777436,"exp":1480781036,"iat":1480777436,"jti":"id123456","typ":"https://example.com/register"}

Signature

(raw binary bytes, not printable text; they appear base64url-encoded as the third segment of the token above)

The signature can be based on a symmetric algorithm, in which case both the creator and the receiver of the token must share a secret. If the algorithm is asymmetric, the creator needs access to a private key, but anybody with the public key can verify the authenticity of the JWT.

The fun thing about JWTs is that there are some standard claims to help with interoperability, all optional and completely extensible. For example:

iss - Issuer
sub - Subject
nbf - Not valid before date
exp - Expiration date
iat - Issued-at date
jti - The JWT's ID
typ - The type of claims carried
aud - The audience this JWT is intended for

Modern authentication mechanisms use these JWTs en masse. If the receiver of a JWT verifies:

  1. the signature, to ensure that the JWT has not been tampered with
  2. the nbf and exp, to ensure that the JWT is valid at this point in time
  3. the iss, to ensure that the token was created by the expected identity provider
  4. the aud, to ensure that the token is targeted at the service currently performing the authentication
  5. the sub, to know who this JWT authenticates.

You have the means to build a great and secure token-based authentication system.

Have fun !

Well, it’s 2016 and technology doesn’t stop advancing.

A couple of months ago, I thought Xamarin was the best platform for creating cross-platform apps from the same (or mostly the same) source code. From a technology standpoint, Xamarin offers the mechanisms to use all of my Visual Studio and .NET skills to build cross-platform apps. All I needed was a Mac Mini hidden in my basement to do the compilations. Let’s not forget their Xamarin Forms technology, which gives you a single thing to learn to build user interfaces for the three major platforms.

I have built a lot of web apps, some with Angular or other SPA frameworks, but to be honest nothing made it worthwhile. Plus, I find Angular soooo complicated to learn.

Fast forward to today. The web has advanced, and all new Windows machines have Edge, a great browser, without the need to install something else such as Chrome. Angular 2 is in beta, as is Aurelia: new SPA frameworks that embrace the latest advances in HTML5. Add to that the new web UI frameworks (e.g. Framework7) that skin web apps with the same look and feel as native apps on iOS and Android (I guess UWP support will be trivial to add to these frameworks). I’ve built an Aurelia app with these technologies, and I have about 10 times less code to do the same thing as with Xamarin Forms. Coupled with ManifoldJS, I can package a web site as a mobile app…

Now where does Xamarin Forms stand in this new world? I believe this cross-platform technology for normal business apps has a very short future ahead of it. As some of you who follow what I do know, it’s full of bugs, which I keep on reporting, and there is always a regression here and there. I still believe in the technology, but I believe Xamarin needs to open source its Xamarin Forms project. In a matter of weeks, I am certain, the bugs will disappear and a bunch of new functionality will emerge. That would make a difference. I believe Xamarin’s value proposition is in reusing .NET skills and know-how, not in a cross-platform API. Besides, Xamarin Forms is useless without the core Xamarin engine to generate the iOS and Android apps.

Now for my predictions. In 2016, I will only start new projects with web technologies. If Xamarin Forms is open sourced, I will consider it again. Obviously, project specifics might make me use one or the other, whatever my current preferences are…

I took a few minutes to review the Telerik Platform through a demo they are hosting at http://www.telerik.com/dineissimo.

The first things I noticed were:

  • A web based IDE (cool)
  • It uses GIT
  • Cordova and plugins based architecture
  • Access to KendoUI for UI
  • A fun way to deploy your app to your phone through a “proxy app”. You scan the QR code onscreen, which launches your app in an on-device emulator (more like a container). Then you can run the app and refresh it off the server using a three-finger tap and slide…

I keep having this dilemma: will hybrid apps (think Cordova) prevail over native apps (think Xamarin)? One thing is certain: as devices become more powerful, the end user will not be able to tell the difference. The question is whether the OS companies will go through hoops to keep native apps more attractive on their platforms.

For sure, the Telerik Platform looks like a great place to start with hybrid apps. And while you’re looking at hybrid apps, you can also look at Visual Studio 2015, which will support Cordova-based projects, and from there you can integrate KendoUI. It will require a few more manual steps (that should go away when KendoUI is fully available through NuGet and Bower).

One of the projects we just finished had two requirements (well, a third was that it had to be a web site that ran on the iPad). The first was for the web site to be able to display a lot of data quickly, and for that data to be available offline. The second was for the data manipulated by the user to be available offline and synced back to the server a.s.a.p.

Offline pages

This part was the easier one to solve. What we did was expose a bunch of pages using ASP.NET MVC that would render on the server and then be sent back to the client (without any Ajax in them). When the pages were finished, we added a button on the profile page which would ask the server to add or remove a cookie, indicating whether we wanted offline pages to be available. Then, in the _Layout.cshtml file, we added the following:

@{
    if (ViewBag.IsOfflinePage == null)
    {
        ViewBag.IsOfflinePage = false;
    }
    if (Request.IsAuthenticated && ViewBag.IsOfflinePage)
    {
        this.WriteLiteral(string.Format("<html manifest=\"{1}/{0}\">", User.Identity.GetUserName(), Url.Content("~/home/offlineManifest")));
    }
    else
    {
        this.WriteLiteral("<html>");
    }
}

This allowed us to dynamically include the manifest or not, and have a manifest per “user”. The “IsOfflinePage” property is only set to true on the site’s main page (and only if the cookie is present), so that the system only tries to update the offline pages when the user is on the home page…

        [AllowAnonymous]
        public ActionResult Index()
        {
            if (!User.Identity.IsAuthenticated)
                return View("Index_NotAuthorised");

            if (Request.Cookies.AllKeys.Contains("Offline") &&
                Request.Cookies["Offline"].Value == User.Identity.GetUserId())
                ViewBag.IsOfflinePage = true; //send the manifest
            return View();
        }

In order to provide feedback, here is what we have in the _layout.cshtml page…

                $(function () {
                    if (window.applicationCache) {
                        var appCache = window.applicationCache;
                        appCache.addEventListener('error', function (e) {
                            $('#cacheStatus').text("- Offline Mode - Error-");
                            console.log(e);
                        }, false);
                        appCache.addEventListener('checking', function (e) {
                            $('#cacheStatus').text("- Offline Mode - Verifying -");
                        }, false);
                        appCache.addEventListener('noupdate', function (e) {
                            $('#cacheStatus').text("- Offline Mode - You have the latest version -");
                        }, false);
                        appCache.addEventListener('downloading', function (e) {
                            $('#cacheStatus').text("- Offline Mode - Downloading -");
                        }, false);
                        appCache.addEventListener('progress', function (e) {
                            $('#cacheStatus').text("- Offline Mode - Downloading " + e.loaded + " / " + e.total + " -");
                        }, false);
                        appCache.addEventListener('updateready', function (e) {
                            $('#cacheStatus').html("- Offline Mode - New version downloaded, <a href='javascript:window.location.reload()'>click here to activate</a> -");
                            notifier.show("New version downloaded, click button in the footer to activate");
                        }, false);
                        appCache.addEventListener('cached', function (e) {
                            $('#cacheStatus').text("- Offline Mode activated -");
                        }, false);
                        appCache.addEventListener('obsolete', function (e) {
                            $('#cacheStatus').text("- Offline Mode Deactivated-");
                        }, false);
                    }
                });

The only thing missing was dynamic creation of the offline manifest. For that, we added an action called “offlineManifest” to our home controller, with the matching cshtml file. This is a sample of the cshtml file; note how we are including the bundled stuff:

CACHE MANIFEST
@{
    Layout = null;
}
# OfflineIndex: @ViewBag.OfflineIndex
CACHE:
@Url.Content("~/img/homebg.png")

@Styles.RenderFormat("{0}", "~/bundles/css")
@Styles.RenderFormat("{0}", "~/bundles/kendo-css")
@Scripts.RenderFormat("{0}", "~/bundles/modernizr")
@Scripts.RenderFormat("{0}", "~/bundles/jquery")
@Scripts.RenderFormat("{0}", "~/bundles/kendo")
@Scripts.RenderFormat("{0}", "~/bundles/js")
@Url.Content("~/home/kendouidatasources")

@Url.Content("~/fonts/glyphicons-halflings-regular.ttf")
@Url.Content("~/fonts/glyphicons-halflings-regular.eot")
@Url.Content("~/fonts/glyphicons-halflings-regular.svg")
@Url.Content("~/fonts/glyphicons-halflings-regular.woff")
@Url.Content("~/content/kendo/Bootstrap/sprite.png")
@Url.Content("~/content/kendo/Bootstrap/loading-image.gif")
@Url.Content("~/bundles/Bootstrap/sprite.png")
@Url.Content("~/bundles/Bootstrap/loading-image.gif")

@Url.Content("~/")
@Url.Content("~/home/about")
@Url.Content("~/home/notavailableoffline")
@Url.Content("~/catalog/products")
@{
        foreach (var id in ViewBag.CatalogProductIds)
        {
            Write(@Url.Content(string.Format("~/catalog/productimage/{0}", id)) + "\r\n");
            Write(@Url.Content(string.Format("~/catalog/product/{0}", id)) + "\r\n");
        }
}

NETWORK:
*

FALLBACK:
/ @Url.Content("~/home/notavailableoffline")

And here is our controller action code. Note that we need special handling: for the manifest to be valid, we had to Trim() the response before it got sent, as Razor was adding a blank space at the beginning, which was generating an error client side.

public ActionResult OfflineManifest(string id)
{
    if (!Request.Cookies.AllKeys.Contains("Offline") ||
        Request.Cookies["Offline"].Value != User.Identity.GetUserId())
        return HttpNotFound();

    var offlineIndex = db.Parameters.Single().OfflineIndex;
    ViewBag.OfflineIndex = offlineIndex;

    // catalog stuff
    ViewBag.CatalogProductIds = db.Products.ToList();

    var partial = true;
    var viewPath = "~/views/home/OfflineManifest.cshtml";
    return GetCacheManifestContent(partial, viewPath, null);
}

private ActionResult GetCacheManifestContent(bool partial, string viewPath, object model)
{
    // first find the ViewEngine for this view
    ViewEngineResult viewEngineResult = null;
    if (partial)
        viewEngineResult = ViewEngines.Engines.FindPartialView(ControllerContext, viewPath);
    else
        viewEngineResult = ViewEngines.Engines.FindView(ControllerContext, viewPath, null);

    if (viewEngineResult == null)
        throw new FileNotFoundException("ViewCouldNotBeFound");

    // get the view and attach the model to the view data
    var view = viewEngineResult.View;
    ControllerContext.Controller.ViewData.Model = model;

    string result = null;
    using (var sw = new StringWriter())
    {
        var ctx = new ViewContext(ControllerContext, view,
                                  ControllerContext.Controller.ViewData,
                                  ControllerContext.Controller.TempData,
                                  sw);
        view.Render(ctx, sw);
        result = sw.ToString().Trim();
    }
    return Content(result, "text/cache-manifest");
}

Here is a bunch of links that got us going with offline pages…

http://www.whatwg.org/specs/web-apps/current-work/multipage/browsers.html#concept-appcache-master
http://www.dullroar.com/detecting-offline-status-in-html-5.html
http://www.dullroar.com/html-5-offline-caching-gotcha-with-ipad.html
http://sixrevisions.com/web-development/html5-iphone-app/
http://www.webreference.com/authoring/languages/html/HTML5-Application-Caching/index.html
http://www.html5rocks.com/en/tutorials/appcache/beginner/

One nasty bug we did find with IE: if the offline manifest file contains more than 1000 lines, it simply generates an error. The limit can be raised via group policies, and I hope they remove this limit in IE12 (I filed the bug with Microsoft). Here is how to modify the group policies: http://technet.microsoft.com/en-us/library/jj891001.aspx.

Offline data

This second requirement was a bit harder to solve. To have offline data, we used the “DataSource” component from Telerik, which itself uses LocalStorage (storage that continues to exist after the browser is closed).

LocalStorage is a very basic key-value store, so in order to save something complex, we use JSON to represent that data as a string. The project also used Telerik’s KendoUI technology to make the online-offline transition almost code free…

Basically, a web page is loaded and a “DataSource” is instantiated. I initialize it with “.offline(true)” and execute a “.fetch()” to force it to grab data right away from LocalStorage.

I then hook up some code to monitor for online and offline events. If I go online, I ping a web service to make sure it is true. If it turns out that the app is really online, I perform an “.offline(false)”, which syncs data to the server, and then a “.read()” on the DataSource so it flushes its data and grabs fresh data. Note that I only perform the “.read()” if I am on the home page; that way I am only hitting the server when the site is opened, not while the user is working on the different pages.

All of this works great. Operations on the DataSource are sent to the server in real time if I am online, and queued for later sending if I am offline.

One problem I hit is that the DataSource transforms the data when it is read, i.e. it receives JSON from the web server and converts fields according to what is written in the model (e.g. it parses dates…). When the data is saved to LocalStorage, everything is serialised to JSON. The problem occurs when the DataSource reads its data back from LocalStorage: it doesn’t apply the same rules as when receiving the data from a web service, and so dates appear as strings in the model instead of Date objects. JSON doesn’t carry the metadata to tell the deserialiser how to handle a particular field, so custom code must be written. I am pretty sure this will be fixed in the next version of KendoUI.
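Until it is fixed, a JSON.parse reviver can patch things up after reading from LocalStorage. This sketch assumes the dates round-trip as ISO-8601 strings (which is what JSON.stringify produces for a Date); the field names are made up:

```javascript
// Date fields serialized to LocalStorage come back as strings; a reviver
// passed to JSON.parse can turn anything that looks like an ISO-8601
// timestamp back into a Date object.
var isoDate = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/;
function reviveDates(key, value) {
  return (typeof value === "string" && isoDate.test(value))
    ? new Date(value)
    : value;
}

// e.g. what localStorage.getItem("products") might hand back:
var stored = '[{"id":1,"modified":"2014-06-01T12:00:00.000Z"}]';
var restored = JSON.parse(stored, reviveDates);
// restored[0].modified is now a Date, not a string
```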

The second problem I hit is that I must wait for the “.offline(false)” to finish before fetching new data with “.read()”. If they run in parallel, chances are I won’t get the new or modified data. Right now there are no promises, so I need a timer: if there were changes (“.hasChanges()”), I set the timer to 2-3 seconds; otherwise I set it to 500 ms. I am also hoping this will be fixed in the next version of KendoUI.
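The timer workaround can be wrapped in a small helper. Here “ds” stands in for the DataSource; only the calls mentioned above (“.hasChanges()”, “.offline()”, “.read()”) are assumed, and the delays are the rough values from this post:

```javascript
// Push queued changes, then schedule the refresh after a delay that
// depends on whether there was anything to sync.
function syncThenRead(ds) {
  var hadChanges = ds.hasChanges();
  ds.offline(false);                    // sync queued offline changes to the server
  var delay = hadChanges ? 2500 : 500;  // give the sync time to complete
  setTimeout(function () { ds.read(); }, delay);
  return delay;
}

// Illustrated with a stand-in object instead of a real DataSource:
var calls = [];
var fakeDs = {
  hasChanges: function () { return true; },
  offline: function (online) { calls.push("offline:" + online); },
  read: function () { calls.push("read"); }
};
var delay = syncThenRead(fakeDs); // delay is 2500; read() fires afterwards
```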

Sway


As I was browsing through my news this morning, I discovered something quite nice. A new product by the Microsoft Office team called Sway.

As you know, I have blogged a few times on different presentation styles as I endeavor to find a better way to present information, be it reports, talks, or even tutorials for apps. My previous post spoke of HTML-based presentations that were self-contained and had some pretty effects attached to them, a change from the slide-by-slide approach used by PowerPoint.

As soon as this app is released, I am certain we will witness the birth of a new way to present information: better-looking reports, no more boring slide-by-slide presentations, and numbers that come to life!

Here is the vision video for your enjoyment !

Wow, today is a big day because a few things were released…

The first piece of news is that Visual Studio 2013 Update 2 has been finalized and is available for download. It brings a bunch of new features, including:

  • TypeScript 1.0
  • Universal Apps (for Windows Phone 8.1 and Windows 8.1, in XAML and in HTML)
  • A JSON editor
  • Better LESS and new SASS support
  • Upgrades to OWIN, ASP.NET Identity, and more!

Read the complete announcement here!

Next up is a teaser of what is coming next in ASP.NET… there seems to be a reference to MVC hosted over OWIN (that would make a few of my clients happy!!!) and something about NuGetting the .NET Framework… Read the announcement on Scott Hanselman’s blog.

The last feature is something that is just awesome… Visual Studio can be used to author apps that are HTML-based and that can be packaged to run using Cordova (server-less) on any platform (iPhone, iPad, Android, Windows Phone…). That is great, and I will have to write a few follow-up blog posts on how this stuff works. Get the preview and documentation here!