All posts in Architecture

The Microsoft Orleans project (http://dotnet.github.io/orleans/) is a .NET framework for building systems on an Actor Model paradigm.

A typical transactional system receives a command and executes it. Executing it usually means fetching data from a database, modifying it and then saving it.

The reason I was interested in Orleans for To-Do.Studio was that each action in the app generates a command; there is no save button and no transactional boundary. This naturally creates a chatty interface, where many small commands are sent to the server, which must, for each one: get data, modify it and save it. Combine that with the fact that NoSQL components such as Cosmos DB make you pay for reads, and what you have is an expensive bottleneck.

The Actor Model in Orleans would have fixed this for me by doing the following. The first time a command executes that references a particular domain object, a grain is activated and the data is read. Each subsequent command then modifies the data in the grain. Eventually, when no one uses the grain anymore, the system deactivates it, causing it to save itself.

This would have had the added benefit of minimizing our data access costs and offering speedups similar to adding a caching layer.
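The grain lifecycle can be sketched roughly like this, in plain JavaScript rather than Orleans/C# (this is just the activate/mutate/deactivate pattern it implies, not the Orleans API; the `store` object stands in for a database such as Cosmos DB):

```javascript
// Sketch of the grain lifecycle described above: first use activates the
// grain and reads its state once, each command mutates the state in memory,
// and deactivation writes it back in a single save.
class Grain {
  constructor(id, store) {
    this.id = id;
    this.store = store;   // stands in for a database such as Cosmos DB
    this.state = null;    // nothing is loaded until the first command
  }

  async execute(command) {
    if (this.state === null) {
      // Activation: the one and only read.
      this.state = await this.store.read(this.id);
    }
    command(this.state);  // mutate in memory, no I/O per command
  }

  async deactivate() {
    // The "unused delay" fires: one write, then the grain is gone.
    await this.store.write(this.id, this.state);
    this.state = null;
  }
}
```

Ten small commands against the same grain cost one read and one write instead of ten of each, which is exactly the caching-layer effect described above.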

As with all magic though, we lose some control:

  1. First, as the system scales to thousands of users and tens of thousands of grains, we have to think about system scalability (Orleans doesn't yet deactivate resources based on memory pressure, only on an “unused delay”)
  2. The deployment model is not fully integrated with Azure.
    1. Hosting is possible within a VM – not PaaS enough for me
    2. Hosting is possible within worker roles – sounds interesting but not exactly what I want
    3. Hosting is possible within Service Fabric (which is another implementation of an Actor Model, from the Azure team and not the .NET team) – it doesn’t feel seamless yet, but this would be my ideal hosting option
    4. Host in a Linux container which is scaled automatically by Kubernetes – to be honest, I am not a container fan; I do see their advantages for certain workloads, but it feels like the PaaS of infrastructure people.

Anyway, my best option would be hosting on top of Service Fabric. It would need to be 100% integrated with the .NET Core stuff though.

I could also recommend having a dedicated PaaS offering for Orleans (kind of like what happened with SignalR).

Finally, Orleans should be upgraded to support memory-pressure-based deactivations, as well as some sort of queuing mechanism when grains start swapping, to keep the amount of work executing in parallel under control.

As time goes by, the web becomes more and more powerful. It’s been around for a while, but I think that with Chrome and Edge finally supporting PWA apps (and native apps starting to become mobile web apps), it is time to embrace this technology. It is time to create web pages that can push notifications directly to the desktop even if the browser is closed.

This works right now on any device running Microsoft Edge, as well as devices that run Google Chrome (Android devices and desktops running Chrome).

Architecturally, the Service Worker part of your website (running in the background) creates a subscription with a push notification service provider, and then that “token” is sent to your application server, which will use it to send you notifications.

What I didn’t grasp at first is that the push notification service provider is not just any server running some software speaking a specific protocol; it is tied directly to the browser. For example, if the website is being viewed in Google Chrome, the ONLY push notification service provider is FCM (Firebase); in Firefox it is Mozilla’s push service, and in Microsoft Edge it is Windows WNS. In fact, these are the same technologies that are used by real native apps on those platforms.

Soooo…

  1. The API to subscribe and get some sort of token to enable push notifications is defined by internet standards, so it is safe to use in any browser on any website.
  2. Each browser will implement those methods using its own push notification service provider.
  3. Sending a push notification will be different depending on who the push notification service provider is.

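Step 1 in code looks roughly like this (a sketch of the standard Push API; the VAPID key and the `registration` object are whatever your own setup provides, and the base64url helper is the usual boilerplate the API needs):

```javascript
// applicationServerKey must be a Uint8Array, but VAPID public keys are
// usually handed around as base64url strings; this is the standard conversion.
function urlBase64ToUint8Array(base64String) {
  const padding = '='.repeat((4 - (base64String.length % 4)) % 4);
  const base64 = (base64String + padding).replace(/-/g, '+').replace(/_/g, '/');
  const raw = typeof atob === 'function'
    ? atob(base64)
    : Buffer.from(base64, 'base64').toString('binary');
  return Uint8Array.from(raw, c => c.charCodeAt(0));
}

// Step 1: ask the browser's push service for a subscription. `registration`
// is the ServiceWorkerRegistration from navigator.serviceWorker.register(...).
async function subscribeToPush(registration, vapidPublicKey) {
  return registration.pushManager.subscribe({
    userVisibleOnly: true,  // required by Chrome
    applicationServerKey: urlBase64ToUint8Array(vapidPublicKey),
  });
}
```

The JSON-serialized subscription (an endpoint URL plus encryption keys) is the “token” you send to your application server, which it later uses to push to you.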
Here is a link to a functional demo that works everywhere: https://webpushdemo.azurewebsites.net/

So my startup To-Do.Studio is advancing, and we got rolling on our informational website.

The first thing was to create a “web site” project with Visual Studio, and get it into Git.

As you can see, it’s just a bunch of HTML, CSS and stuff, with a small web.config so that we can host on IIS (we are using Azure App Services for this). The first abnormal thing, though, is that unlike in a normal ASP.NET project, I do not have a CSPROJ that describes my project; instead, I have a PUBLISHPROJ, and it gets ignored by Git. So how did we get here?

Since Visual Studio relies on things like MSBuild to “publish” projects, it needs some sort of project file. In this case, Visual Studio created the PUBLISHPROJ the first time I created the publish profile; this allows me to publish from the command line and more.

Although the file is Git-ignored by default, I had to add it to Git in order for the VSTS build system to be able to publish this project.

The other modification we had to do was to add to Git a special file, in the folder of the solution, called “after.ToDoStudio.StaticWebFront.sln.targets”. It turns out MSBuild looks for these files automatically and includes them in your build process, without the need to modify the solution file. This is what we put in there:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
 <Target Name="Deploy Website" AfterTargets="Build">
   <Message Text="Starting Website deployment" Importance="high" />
   <MSBuild Projects="$(MSBuildProjectDirectory)\ToDoStudio.StaticWebFront\website.publishproj"
            BuildInParallel="true" />
 </Target>
</Project>

What this does is ensure that after the Build action (which by default will build nothing, because this is a project with nothing to build), the system will automatically invoke the publish action using the PUBLISHPROJ.

Now we were ready to build this thing within VSTS.

As you can see here, our build is pretty standard with defaults used everywhere:

The release pipeline is also pretty standard, nothing fancy here:


Oy, I just spent a crazy week learning that:

It is impossible for an App Service application to connect to an FTP server on the internet (in passive mode). The reason is that each Azure App Service is assigned a pool of IP addresses for outgoing traffic, and Azure is free to choose a new outgoing IP for each connection (the NAT stuff), while FTP expects the data connection to come from the same IP as the control connection.

That said, in passive FTP the handshake is negotiated over the connection that forms the control channel. When the second connection is opened to transfer the data, the FTP server receives it from a potentially different IP address, which triggers a 425-type error.

It took us 3 days to diagnose, one evening to write a basic HTTP web API that allows us to post and get files, a few hours to install it all, and a full day to rewrite the code that worked with FTP…

That said, it was something that no one on the team could have predicted. Live and learn!

One thing I believe in is constant change, and constant learning, ideally one thing per day. Sometimes it’s learning how to cook the best eggs benedict ever for breakfast, or sometimes it’s helping out a friend with a special request.

Today I was asked by a colleague if I could help extract data from a web site. As an architect, the first thing I look for in the “code” is a clean separation between presentation and data. Obviously, I did not find that, which made me realise how bad frameworks that render HTML mixed with data are. Why can’t everything follow MVVM with some sort of binding?

Anyway, we needed a solution, and what we whipped up was screen scraping.

My first attempt was to write a small HTML page that loads jQuery, does an Ajax call to hit the webpage we needed data from, and then extracts the data from its DOM… It turned out to be easy to execute, but I was met with an error: CORS headers not found for file://dlg/extractData.html. GRRRRR

New strategy !

I opened Chrome, did a search for screen scraping extensions and, behold, I found this.

An extension that allows me to navigate to any page, look up how it is built, understand its usage of CSS selectors and voilà. Any page that reuses the same CSS selector to represent repeating data (as in a list) can be extracted to a JSON or CSV file.
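The core idea behind the extension is simple enough to sketch (field names here are hypothetical; `root` would be the real `document` in a browser, but anything exposing `querySelectorAll` works):

```javascript
// Any repeated CSS selector maps straight to rows of data: one record per
// match of rowSelector, one field per entry in the fields map.
function scrapeRows(root, rowSelector, fields) {
  return Array.from(root.querySelectorAll(rowSelector)).map(row => {
    const record = {};
    for (const [name, selector] of Object.entries(fields)) {
      record[name] = row.querySelector(selector).textContent.trim();
    }
    return record;
  });
}
```

For example, `scrapeRows(document, 'tr.item', { name: '.name', score: '.score' })` would turn a repeating table into an array of plain objects, ready to serialize as JSON or CSV.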

Well, thanks DL for getting me to learn something new today !

While working on the backend for To-Do.Studio, we ran into a scenario where we needed to test on a developer’s machine with https (ssl) enabled.

Note – we use Kestrel for testing and not IIS Express.

The first thing we did was to try to add an HTTPS URL in the launchSettings.json file. That didn’t work 🙂

What we found was that we had to configure Kestrel manually and tell it which certificate to use. The code in our Program class looks like this:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseApplicationInsights()
        .UseAzureAppServices()
#if DEBUG
        .UseKestrel(t => {
            t.Listen(IPAddress.Loopback, 55172);
            t.Listen(IPAddress.Loopback, 55000, o =>
            {
                o.UseHttps("localhostCertificate.pfx", "password");
            });
        })
#endif
        .UseStartup<Startup>()
        .Build();

But how do we get a certificate? There are various ways, but I didn’t feel like digging out my Win32 SDK as some instructions on the web suggest… so I decided to use my Ubuntu WSL…

Two commands:

erik@ErikLAPTOP:~$ openssl req -x509 -days 10000 -newkey rsa:2048 -keyout cert.pem -out cert.pem
erik@ErikLAPTOP:~$ openssl pkcs12 -export -in cert.pem -inkey cert.pem -out cert.pfx

and I had a good-looking self-made certificate. The hardest part was to copy this cert.pfx file to a Windows directory so I could use it in my code.

Voilà! After modifying my launchSettings.json, I could test in either HTTP or HTTPS mode!

"ToDoStudio.Server_http": {
 "commandName": "Project",
 "launchBrowser": true,
 "environmentVariables": {
 "ASPNETCORE_ENVIRONMENT": "Development"
 },
 "applicationUrl": "http://localhost:55172/"
 },
 "ToDoStudio.Server_https": {
 "commandName": "Project",
 "launchBrowser": true,
 "environmentVariables": {
 "ASPNETCORE_ENVIRONMENT": "Development"
 },
 "applicationUrl": "https://localhost:55000/"
 }
I have been debugging software for the last 20 years, and although things have changed, I realize I took it for granted and never really thought about it.

Basics

There were two debugging techniques I was introduced to when I started.
  1. Attach a debugger to a program and hit breakpoints. When you hit a breakpoint, the debugger can show you your source code because the symbols contain a mapping from the assembly code to the source code. At a breakpoint you can inspect variables and change their values. You can control the flow of execution of your program by executing the next instructions, and even change what the next instruction is.
  2. This technique is a bit more brute force, and consists of using logging to get insight into code which is hard to debug with breakpoints. Although less efficient than breakpoints, I have done this soooo often because it also allows me to debug in situations like production, where I might not be able to use breakpoints.

Step forward a few years and we now have Visual Studio 2017. Both techniques above still work, but with a few new features. For example, for logging we don’t have to use the console; we have access to Trace and Debug, both with listeners to output wherever you want. Even better are TraceSources, which are specialized logging objects for modules. Always making things easier, VS also shows us a few graphs about memory and threads.

To complement the advances in the platform, there are a bunch of other logging frameworks such as NLog, log4net, Serilog… One of the advances I love about these frameworks is the concept of structured logging: imagine logging with DTOs. While we are on the subject, there are even logging frameworks as a service, all cloud based!
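The “logging with DTOs” idea is easy to illustrate (a rough sketch of the concept only, not the actual Serilog/NLog API):

```javascript
// Structured logging keeps the message template and the data as separate
// fields on the event, so downstream tools can query on the fields instead
// of parsing rendered strings.
function logEvent(template, data) {
  const message = template.replace(/\{(\w+)\}/g, (_, key) => String(data[key]));
  return { template, message, ...data };
}
```

`logEvent('File {name} deleted', { name: 'asdf.txt' })` yields both a human-readable message and a queryable `name` field on the same event.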
There are also dump files, which can be created at breakpoints in your code and are basically a memory snapshot of your process. These can be inspected with WinDbg or Visual Studio.

It does feel much more modern, but everything we have seen so far is fundamentally the same thing. Things are getting much better…

Modern Debugging

One of the bigger breakthroughs in debugging from a few years ago is called Edit and Continue. You can literally change the code of running programs as they run… Quite interesting, although I can’t say I have used this mechanism often. This requires Visual Studio.

The second one, which has served me well, is IntelliTrace. This is basically a configuration-based approach to automatic logging: as the debugger hits different places (breakpoints) in the code, events are generated with some contextual information. For example, File.Delete() will generate an IntelliTrace event with the text “File asdf.txt deleted”. This is great, as all major components of .NET are instrumented with this technology. IntelliTrace requires Visual Studio, but using the offline IntelliTrace collector, you can record the execution of your code and analyze everything in Visual Studio.

The newest feature, currently in preview, is called Time Travel Debugging, which basically allows you to execute your code… in reverse. You can go forward through your code’s execution as well as backwards; just look at the ribbon in WinDbg Preview (available in the Windows Store).

I haven’t had the chance to play with this too much, but it looks very fun, as it will allow you to go back in time at a breakpoint within a loop… The recorded traces have to be viewed with WinDbg.

Another new feature which I think I will love is the concept of snappoints and logpoints. Imagine the ability to attach to a running process, but instead of pausing the process when you hit a breakpoint, a myriad of information gets recorded when a snappoint is passed. You can then inspect the variables and stack trace of every one of those hits without causing impact. Logpoints are dynamic logging: pass this instruction and add this event to the log. These are great advancements in the art of debugging. This feature requires Visual Studio and Azure-hosted sites.

I hope you are excited about the future of debugging!

One of the projects I was working on involves taking Office 365 to the next level: taking the tools that Microsoft gives us and building on top of them.

Some of the things we do involve extending SharePoint, Excel, Word and Office with add-ins. Works great. But where do we get all the nice information we show the user?

This is where the Microsoft Graph comes in (http://graph.microsoft.com). It is basically an OData feed that gives you access to a variety of data that represents you. Naturally, you need to send a bearer token to get access, so authenticate first!

You can find the entire “metadata” on the internet itself, all self-describing! Also, there is some documentation.

You can even try the Graph Explorer, which is a web tool to explore the graph in an interactive way.

Here are a few examples for me in my own company:

Who am I?

GET https://graph.microsoft.com/v1.0/me
{
 "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity",
 "id": "",
 "businessPhones": [],
 "displayName": "Erik Renaud",
 "givenName": "Erik",
 "jobTitle": "Principal Architect",
 "mail": "erik.renaud@modelon.net",
 "mobilePhone": "+1 514...",
 "officeLocation": "+1.514...",
 "preferredLanguage": "en-CA",
 "surname": "Renaud",
 "userPrincipalName": "erik.renaud@modelon.net"
}
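Calling the same endpoint from code is just a bearer-authenticated GET (a sketch; `token` is whatever your OAuth flow handed you, and `fetch` stands in for your HTTP client of choice):

```javascript
// Build the request for a Microsoft Graph endpoint: the base URL is fixed,
// and authorization is a standard Bearer header.
function graphRequest(path, token) {
  return {
    url: `https://graph.microsoft.com/v1.0${path}`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// Usage in the browser or Node 18+: resolves to the profile JSON shown above.
async function whoAmI(token) {
  const { url, headers } = graphRequest('/me', token);
  const res = await fetch(url, { headers });
  return res.json();
}
```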

What about my picture?

GET https://graph.microsoft.com/v1.0/me/photo/$value


This is a goldmine of information and makes any business tool soo much more powerful.

Add an email, find recently used files, show how many unread things are in Teams… It’s all possible!


For one of the projects I manage, I have two teams and, in the end, one implements web services for the other to consume. In production and integration testing, things tend to go well. But when we are faced with debugging or testing the side that consumes web services, we need something more.

I love to discover new things or new ways of doing things; enter EasyMock (https://github.com/CyberAgent/node-easymock). It is a small Node.js web server that returns any file on disk, with extra options in its config.json file.

You install it with :

$ npm install -g easymock
And you start it within your working directory with:
$ easymock
Server running on http://localhost:3000
Listening on port 3000 and 3001
Documentation at: http://localhost:3000/_documentation/
Logs at: http://localhost:3000/_logs/

If you want to mock a CurrentUserCount REST web service located at /api/CurrentUserCount, all you need to do is create an “api” directory with a file named “CurrentUserCount_get.json” in it. Here is the result:
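For example, a hypothetical payload for that file (the real shape is whatever your service returns) could be:

```json
{
  "currentUserCount": 42
}
```

EasyMock then serves that JSON, as-is, for every GET to /api/CurrentUserCount.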

There is even a handy automatically created documentation page:

Happy mocking !