All posts in .net

One of the projects we just finished had two main requirements (well, a third was that it had to be a web site that ran on the iPad). The first was for the web site to display a lot of data quickly, and for that data to be available offline. The second was for data manipulated by the user to be available offline and to sync back to the server as soon as possible.

Offline pages

This part was easier to solve; what we did was to expose a bunch of pages using MVC that would render on the server and then be sent back to the client (without any Ajax in them). When the pages were finished, we added a button on the profile page which would make the server add or remove a cookie, indicating whether we wanted offline pages to be available or not. Then, in the _Layout.cshtml file, we added the following:

    if (ViewBag.IsOfflinePage == null)
        ViewBag.IsOfflinePage = false;
    if (Request.IsAuthenticated && ViewBag.IsOfflinePage)
        this.WriteLiteral(string.Format("<html manifest=\"{1}/{0}\">", User.Identity.GetUserName(), Url.Content("~/home/offlineManifest")));

This allowed us to dynamically include the manifest or not, and have a manifest per “user”. The “IsOfflinePage” property is only set to true on the site's main page (and only if the cookie is present), so that the system only tries to update the offline pages when the user is on the home page…

        public ActionResult Index()
        {
            if (!User.Identity.IsAuthenticated)
                return View("Index_NotAuthorised");

            if (Request.Cookies.AllKeys.Contains("Offline") &&
                Request.Cookies["Offline"].Value == User.Identity.GetUserId())
                ViewBag.IsOfflinePage = true; //send the manifest

            return View();
        }

In order to provide feedback, here is what we have in the _Layout.cshtml page…

                $(function () {
                    if (window.applicationCache) {
                        var appCache = window.applicationCache;
                        appCache.addEventListener('error', function (e) {
                            $('#cacheStatus').text("- Offline Mode - Error -");
                        }, false);
                        appCache.addEventListener('checking', function (e) {
                            $('#cacheStatus').text("- Offline Mode - Verifying -");
                        }, false);
                        appCache.addEventListener('noupdate', function (e) {
                            $('#cacheStatus').text("- Offline Mode - You have the latest version -");
                        }, false);
                        appCache.addEventListener('downloading', function (e) {
                            $('#cacheStatus').text("- Offline Mode - Downloading -");
                        }, false);
                        appCache.addEventListener('progress', function (e) {
                            $('#cacheStatus').text("- Offline Mode - Downloading " + e.loaded + " / " + e.total + " -");
                        }, false);
                        appCache.addEventListener('updateready', function (e) {
                            $('#cacheStatus').html("- Offline Mode - New version downloaded, <a href='javascript:window.location.reload()'>click here to activate</a> -");
                        }, false);
                        appCache.addEventListener('cached', function (e) {
                            $('#cacheStatus').text("- Offline Mode activated -");
                        }, false);
                        appCache.addEventListener('obsolete', function (e) {
                            $('#cacheStatus').text("- Offline Mode Deactivated -");
                        }, false);
                    }
                });

The only thing missing was the dynamic creation of the offline manifest. For that, we added an action called “offlineManifest” to our home controller, with a matching cshtml file. This is a sample of the cshtml file; note how we are including the bundled stuff:

    @{
        Layout = null;
    }
    CACHE MANIFEST
    # OfflineIndex: @ViewBag.OfflineIndex

    @Styles.RenderFormat("{0}", "~/bundles/css")
    @Styles.RenderFormat("{0}", "~/bundles/kendo-css")
    @Scripts.RenderFormat("{0}", "~/bundles/modernizr")
    @Scripts.RenderFormat("{0}", "~/bundles/jquery")
    @Scripts.RenderFormat("{0}", "~/bundles/kendo")
    @Scripts.RenderFormat("{0}", "~/bundles/js")

    @{
        foreach (var id in ViewBag.CatalogProductIds)
        {
            Write(Url.Content(string.Format("~/catalog/productimage/{0}", id)) + "\r\n");
            Write(Url.Content(string.Format("~/catalog/product/{0}", id)) + "\r\n");
        }
    }

    FALLBACK:
    / @Url.Content("~/home/notavailableoffline")

And here is our controller action code. Note that we needed special handling: for the manifest to be valid, we had to “Trim()” the response before it got sent, as Razor was adding a blank space at the beginning, which was generating an error client side.

public ActionResult OfflineManifest(string id)
{
    if (!Request.Cookies.AllKeys.Contains("Offline") ||
        Request.Cookies["Offline"].Value != User.Identity.GetUserId())
        return HttpNotFound();

    var offlineIndex = db.Parameters.Single().OfflineIndex;
    ViewBag.OfflineIndex = offlineIndex;

    //catalog stuff
    ViewBag.CatalogProductIds = db.Products.Select(p => p.Id).ToList();

    var partial = true;
    var viewPath = "~/views/home/OfflineManifest.cshtml";
    return GetCacheManifestContent(partial, viewPath, null);
}

private ActionResult GetCacheManifestContent(bool partial, string viewPath, object model)
{
    // first find the ViewEngine for this view
    ViewEngineResult viewEngineResult;
    if (partial)
        viewEngineResult = ViewEngines.Engines.FindPartialView(ControllerContext, viewPath);
    else
        viewEngineResult = ViewEngines.Engines.FindView(ControllerContext, viewPath, null);

    if (viewEngineResult == null || viewEngineResult.View == null)
        throw new FileNotFoundException("ViewCouldNotBeFound");

    // get the view and attach the model to the view data
    var view = viewEngineResult.View;
    ControllerContext.Controller.ViewData.Model = model;

    string result;
    using (var sw = new StringWriter())
    {
        var ctx = new ViewContext(ControllerContext, view,
            ControllerContext.Controller.ViewData,
            ControllerContext.Controller.TempData, sw);
        view.Render(ctx, sw);
        // trim the leading whitespace razor adds, or the manifest is invalid client side
        result = sw.ToString().Trim();
    }
    return Content(result, "text/cache-manifest");
}

Here are a bunch of links that got us going with offline pages…

One nasty bug we did find with IE: if the offline manifest file contains more than 1000 lines, it simply generates an error. The limit can be raised via group policy, and I hope they remove this limit in IE12 (I reported the bug to Microsoft). Here is the link to how to modify the group policy:

Offline data

This second requirement was a bit harder to solve. To have offline data, we used the “DataSource” component from Telerik, which itself uses localStorage (browser storage that persists after the browser is closed).

localStorage is a very basic key-value store, so in order to save something complex, we need to use JSON to represent that data as a string. The project also used Telerik's KendoUI technology to make the online-offline transition almost code free…
Basically, a web page is loaded and a “DataSource” is instantiated. I initialize it with “.offline(true)” and execute a “.fetch()” to force it to grab data right away from localStorage.
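As a rough sketch, the setup might look like this. The transport URL and field names are illustrative, not the project's actual settings, and the exact DataSource API varies across KendoUI versions:

```javascript
// Illustrative KendoUI-style DataSource configuration. The URL and field
// names are made up for this example; the schema's "date" type is what lets
// the DataSource parse dates arriving from the server.
var productsConfig = {
    transport: { read: { url: "/catalog/products", dataType: "json" } },
    schema: {
        model: {
            id: "Id",
            fields: { Modified: { type: "date" } } // parsed when read from the server
        }
    }
};
// With the kendo library loaded, the page would then do roughly:
// var ds = new kendo.data.DataSource(productsConfig);
// ds.offline(true); // work against localStorage
// ds.fetch();       // populate immediately from the offline store
```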

I then hook up some code to monitor for online and offline events. If I go online, I ping a web service to make sure it is true. If it turns out that the app is really online, I perform a “.offline(false)”, which syncs data to the server, and then a “.read()” on the datasource for it to flush its data and go grab fresh data. Note that I only perform the “.read()” if I am on the home page; that way I am only hitting the server when the site is opened and not while the user is working on the different pages.
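The reconnect flow described above can be sketched like this (a sketch, not the project's code: `pingServer` is a hypothetical callback-style helper that hits a web service to confirm connectivity, and `dataSource` is assumed to expose the calls named in this post):

```javascript
// Sketch of the "back online" handler. pingServer(cb) is a hypothetical
// helper that calls cb(true) only if a web service actually answered.
function handleOnlineEvent(dataSource, pingServer, isHomePage) {
    pingServer(function (reallyOnline) {
        if (!reallyOnline) return;  // the browser's online event was a false alarm
        dataSource.offline(false);  // push the queued offline changes to the server
        if (isHomePage)
            dataSource.read();      // only the home page re-fetches from the server
    });
}
```

Wiring it up would be something like `window.addEventListener('online', function () { handleOnlineEvent(ds, pingServer, isHomePage); });`.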

All of this works great. Operations on the datasource are sent to the server in real time if I am online and queued offline for later sending if I am offline.

One problem I ran into is that the DataSource transforms the data when it is read: it receives JSON from the web server and converts fields according to what is declared in the model (e.g. it parses dates). When the data is saved to local storage, everything is serialised to JSON. The problem occurs when the DataSource reads its data back from localStorage; it doesn't apply the same rules as when receiving data from a web service, so dates appear in the model as strings instead of dates. JSON doesn't carry the metadata to tell the deserialiser how to handle a particular field, so custom code must be written. I am pretty sure this will be fixed in the next version of KendoUI.
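The issue can be reproduced with nothing but JSON.parse/JSON.stringify; a reviver function (one way of writing the custom code mentioned above, not KendoUI-specific) restores the dates:

```javascript
// JSON has no date type: a Date survives the round trip to localStorage as a
// plain ISO string, so it has to be converted back by hand.
var saved = JSON.stringify({ name: "widget", modified: new Date(2014, 0, 15) });

var naive = JSON.parse(saved);            // naive.modified is now a string

var looksLikeIsoDate = /^\d{4}-\d{2}-\d{2}T/;
var fixed = JSON.parse(saved, function (key, value) {
    // reviver: turn anything that looks like an ISO date back into a Date
    return (typeof value === "string" && looksLikeIsoDate.test(value))
        ? new Date(value)
        : value;
});
// fixed.modified is a Date again, with the original value
```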

The second problem I ran into is that I must wait for the “.offline(false)” to finish before fetching new data with “.read()”. If they run in parallel, chances are I won't get the new/modified data. Right now there are no promises to chain on, so I need a timer. If there were changes (“.hasChanges()”), I set the timer to 2-3 seconds; otherwise I set it to 500 ms. I am also hoping this will be fixed in the next version of KendoUI.
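The timer workaround can be sketched as follows (the 2500/500 ms values are the rough figures from this post, not tuned constants, and `dataSource` is assumed to expose the calls named above):

```javascript
// Pick how long to wait before read(), depending on whether a sync of queued
// changes was started; figures are the rough values mentioned in the post.
function readDelayMs(hadChanges) {
    return hadChanges ? 2500 : 500;
}

// Sketch: sync, then re-read after the chosen delay (no promise to chain on).
function syncThenRead(dataSource) {
    var hadChanges = dataSource.hasChanges();
    dataSource.offline(false); // starts pushing queued changes to the server
    setTimeout(function () {
        dataSource.read();     // hopefully the sync has finished by now
    }, readDelayMs(hadChanges));
}
```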

We were doing a little performance-related work on an ASP.NET MVC website. There were a few pages where we transmitted a lot of unnecessary data (binding a big model to KendoUI when you don't use the data is bad, bad, bad) and a few SQL N+1 scenarios…

All in all, the performance was pretty good, and the site had only a few users who used it on an intranet. This time around though, we are exposing the site to the internet for hundreds of people to use from everywhere on earth, and that changed a few things.

Then we discovered MiniProfiler, which is available on NuGet. It's an extensible system that lets you track the time spent in different parts of your system, and shows the information in a non-invasive way.

This is how we got going:

  1. We installed a few NuGet packages: MiniProfiler, MiniProfiler.EF6 and MiniProfiler.MVC4.
  2. We added the following code to Application_Start()
    MiniProfiler.Settings.Results_Authorize = IsUserAllowedToSeeMiniProfilerUI;

    private bool IsUserAllowedToSeeMiniProfilerUI(HttpRequest httpRequest)
    {
        var principal = httpRequest.RequestContext.HttpContext.User;
        return principal.IsInRole("admin");
    }
  3. We had to modify our web.config to have the following in the handlers section of system.webServer
    <add name="MiniProfiler" path="mini-profiler-resources/*" verb="*" type="System.Web.Routing.UrlRoutingModule" resourceType="Unspecified" preCondition="integratedMode" />
  4. To make sure the MiniProfiler shows up on every page, we add this line to _Layout.cshtml (the call below is the standard MiniProfiler include, placed just before the closing body tag)
    @MiniProfiler.RenderIncludes()
  5. To profile the MVC views, we added the following code to Application_Start
    var copy = ViewEngines.Engines.ToList();
    ViewEngines.Engines.Clear(); // clear first, so each engine is replaced (not duplicated) by its profiling wrapper
    foreach (var item in copy)
        ViewEngines.Engines.Add(new ProfilingViewEngine(item));
  6. To profile the MVC controllers, we added the following code to FilterConfig
    filters.Add(new ProfilingActionFilter());
  7. Anywhere we had a lot of processing, we isolated each section like this
    using (MiniProfiler.Current.Step("Calculate sales"))
    { ... }

We were amazed at how easily all of this went, and also at the results. The EF6 plugin automatically detects N+1 and duplicate queries, and that gave us a big speed boost in our optimisation phase.

Wow, today is a big day because a few things were released…

The first piece of news is that Visual Studio 2013 Update 2 has been finalized and is available for download. It brings a bunch of new features, including:

  • TypeScript 1.0
  • Universal Apps (for Windows Phone 8.1 and Windows 8.1, in XAML or HTML)
  • A JSON editor
  • Better LESS and new SASS support
  • Upgrades to OWIN, ASP.NET Identity and more!

Read the complete announcement here!

Next up is a teaser of what is coming next in ASP.NET… there seems to be a reference to MVC hosted over OWIN (that would make a few of my clients happy!) and something about delivering the .NET Framework via NuGet… Read the announcement on Scott Hanselman's blog.

The last feature is something that is just awesome… Visual Studio can now be used to author HTML-based apps that can be packaged to run using Cordova (server-less) on any platform (iPhone, iPad, Android, Windows Phone…). That is great, and I will have to write a few follow-up blog posts on how this stuff works. Get the preview and documentation here!


I had to perform a small spike for one of my clients – how to download CSV files, in an automated way, from a website that requires authentication.

The actual downloading of the file is quite simple; there are simple APIs in .NET that do the job just fine. The problem is how to get the authentication part done.

My solution was to try to download the files and, if I get redirected to the login page, open a browser and let the user log in. Once he is logged in, transfer the cookies from the browser to the download API and then try to download the files again. Here is what the code looks like (based on a WPF application):

The first thing to do is to download the file, and if that fails, show a browser with the redirected url.

        private void DownloadFiles()
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create("");
            request.CookieContainer = GetUriCookieContainer(new Uri(""));

            // execute the request
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();

            // this is how the website tells you that you are not logged in; it doesn't use http error codes
            // but rather redirects you to the login page. Each web site will do this differently.
            if (!IsLoggedIn(response.ResponseUri))
            {
                // in this case, just show the form's web browser control and set its source to the redirected page
                wb.Visibility = System.Windows.Visibility.Visible;
                wb.Source = response.ResponseUri;
            }
            else
            {
                // hide the browser control, the download seems valid
                wb.Visibility = System.Windows.Visibility.Hidden;

                //todo : do something smart with the response
            }
        }

        private static bool IsLoggedIn(Uri responseUri)
        {
            return !responseUri.AbsoluteUri.ToLower().Contains("access.denied");
        }

Notice the call to

> GetUriCookieContainer(new Uri(""))

which grabs the cookies from the app's cookie store and allows them to be attached to the request. Here is how the magic occurs:

        public static CookieContainer GetUriCookieContainer(Uri uri)
        {
            CookieContainer cookieContainer = null;

            int datasize = 131072; // allocate 128k of memory for interop
            StringBuilder cookieData = new StringBuilder(datasize);
            if (!InternetGetCookieEx(uri.ToString(), null, cookieData, ref datasize, InternetCookieHttponly, IntPtr.Zero))
            {
                if (datasize < 0) return null; // error condition

                // allocate a StringBuilder large enough to hold the cookie and redo the call
                cookieData = new StringBuilder(datasize);
                if (!InternetGetCookieEx(uri.ToString(), null, cookieData, ref datasize, InternetCookieHttponly, IntPtr.Zero))
                    return null;
            }

            // use the cookie data received from the cookie store
            if (cookieData.Length > 0)
            {
                cookieContainer = new CookieContainer();
                cookieContainer.SetCookies(uri, cookieData.ToString().Replace(';', ','));
            }
            return cookieContainer;
        }

        [DllImport("wininet.dll", SetLastError = true)]
        public static extern bool InternetGetCookieEx(
            string url,
            string cookieName,
            StringBuilder cookieData,
            ref int size,
            Int32 dwFlags,
            IntPtr lpReserved);

        private const Int32 InternetCookieHttponly = 0x2000;

Now the last problem: how do we know that the user has authenticated himself in the browser control? I accomplished this by simply monitoring the web browser's navigation events for any URL hinting that the user is not being redirected to the login page.

            if (IsLoggedIn(e.Uri))

And that’s it, you use a browser control to let the user authenticate to the web site and then fetch the authentication cookies to use them to automate downloads.

As you know, Visual Studio 2013 just came out, and since there now seems to be a new version of Visual Studio each year, one of the realities we face is that we must upgrade our projects and solutions much more often.

In this particular case, I had to touch up a project from 2 years ago that was started with Visual Studio 2010 and MVC 3 and then upgraded to VS 2012. I figured I would upgrade the solution since I would have to do so one day anyway, and the MVC part was what was scaring me, as that project wasn't started with NuGet or anything else that allowed for developer-controlled upgrading.

Anyway, I opened Visual Studio 2013, opened my solution, and then the browser appeared with a migration report. The only thing on there that scared me was:

ASP.NET MVC 3 projects have limited functionality in Visual Studio 2013. Commands such as Add Controller, Add View, Add Area, and Go to View/Controller are not available. Intellisense for Razor (CSHTML and VBHTML) files is limited to HTML markup. Please see for additional information on how to upgrade an MVC 3 project.

Turns out that wasn't too bad. I followed the link, followed the instructions, and a few minutes later the solution was upgraded to MVC 4. I took a few more minutes and upgraded to MVC 5, but that broke my authentication stuff, so now I have a little refactoring to do on Monday morning to get it all going.

So here is the lesson – upgrade often and continuously. Little upgrades are easier to do than big ones!

Online Learning


Each time I meet a new team, one of the questions they always ask me is where they can learn more.

I like to recommend a few books but, to be honest, that's not where I learn anymore. Still, I end up mentioning a few design pattern and architecture books that I feel are required reading for my fellow developers. Beyond technology, I also recommend a few process books, because it turns out most developers focus a lot on their technical skills but tend to forget that we have to work in teams.

Then I go through the usual blogs and web sites, such as StackOverflow and Curah, which provide great pinpoint information for specific problems.

Finally, there is online learning, and there are two great sites out there which I feel do the job just great. The first is Pluralsight, which has been around for a few years now and covers a multitude of topics, from specific technologies to architecture. The newer one on the block is MVA, short for Microsoft Virtual Academy, which is great because it is by Microsoft, about its own technologies, and free.

I trust those few links will keep you surfing for a few hours, so enjoy!

Good news!

On my way home from the MVP Summit, I read the great news from my old colleague Grigori that the P&P team at Microsoft was going to open-source the Enterprise Library blocks on CodePlex. That is good news, as their approach is a totally transparent one, where the Microsoft team will commit directly to the public repository while allowing the public to contribute themselves. Read the official announcement!

It will be interesting to see what contributions are made to this project and how “releases” will be defined with this new process.

Well, I hit a small bug in one of the systems we deployed a few weeks ago that imports massive amounts of sales and inventory data. The program itself can run on any computer within the network, and the problem showed up when we ran it off a Windows 7 computer installed in French.

It didn't take a lot of time to realize that the default culture on the computer was off, which was creating a problem when parsing the numbers in the CSV file. But the question was how to fix it…

At any time, you can set Thread.CurrentThread.CurrentCulture (or CurrentUICulture) and that will change the current thread's culture. That is good, but what do you do when you have a lot of async code and parallelism, and don't control every thread?

The answer lies in a new feature in .NET 4.5: you can change the current culture for all threads in an AppDomain with a single call. For threads that already exist, if their culture hasn't yet been overridden using Thread.CurrentThread.CurrentCulture (or CurrentUICulture), they will pick up the new culture information specified.

CultureInfo.DefaultThreadCurrentCulture and CultureInfo.DefaultThreadCurrentUICulture are the two new properties you have to learn; they are described more in depth on MSDN:

Enjoy !