All posts in Architecture

So my startup To-Do.Studio is advancing, and we got rolling on our informational website.

The first thing was to create a “web site” project with Visual Studio and get it into Git.

As you can see, it’s just a bunch of HTML, CSS and stuff, with a small web.config so that we can host on IIS (we are using Azure App Services for this). The first abnormal thing, unlike a normal ASP.NET project, is that I do not have a CSPROJ describing my project; instead, I have a PUBLISHPROJ, and it gets ignored by Git. So how did we get here?

Since Visual Studio relies on things like MSBUILD to “publish” projects, it needs some sort of project file. In this case, Visual Studio created the PUBLISHPROJ the first time I created a publish profile; this allows me to publish from the command line and more.

Although the file is Git-ignored by default, I had to add it to Git in order for the VSTS build system to be able to publish this project.
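If you are in the same situation, one way to do it (a quick sketch; this assumes the default ignore rule is what is excluding the file) is to force-add it:

git add -f ToDoStudio.StaticWebFront/website.publishproj
git commit -m "Track the publish project so the build server can use it"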

The other modification we had to do was to add to Git a special file in the solution folder, called “after.ToDoStudio.StaticWebFront.sln.targets”. It turns out MSBUILD looks for these files automatically and includes them in your build process, without the need to modify the solution file. This is what we put in there:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Deploy Website" AfterTargets="Build">
    <Message Text="Starting Website deployment" Importance="high" />
    <MSBuild Projects="$(MSBuildProjectDirectory)\ToDoStudio.StaticWebFront\website.publishproj"
             BuildInParallel="true" />
  </Target>
</Project>

What this does is ensure that after the Build target (which by default will build nothing, because this is a project with nothing to build), the system automatically invokes the publish action using the PUBLISHPROJ.
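To sanity-check this locally (a sketch; it assumes MSBuild is on your path and you run it from the folder containing the solution), building the solution from the command line is enough to trigger the deployment target:

msbuild ToDoStudio.StaticWebFront.sln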

Now we were ready to build this thing within VSTS.

As you can see here, our build is pretty standard with defaults used everywhere:

The release pipeline is also pretty standard, nothing fancy here:

 

Oy, I just spent a crazy week learning this:

It is impossible for an App Service application to connect to an FTP server on the internet (in passive mode). The reason is that each Azure App Service is assigned a pool of IP addresses for outgoing traffic, and Azure is free to choose a new outgoing IP for each connection (the NAT stuff), while FTP expects the data connection to come from the same IP as the control connection.

That said, with passive-mode FTP the session is negotiated on the connection that forms the control channel. When the second connection is opened to transfer data, the FTP server sees it coming from a potentially different IP address and rejects it with a 425 error.

It took us 3 days to diagnose, one evening to write a basic HTTP web API that allows posting and getting files, a few hours to install it all, and a full day to rewrite the code that previously worked over FTP…
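For what it’s worth, here is a rough sketch of what such a post-and-get-files API can look like (this is not our actual code; the FilesController name and storage folder are invented for illustration):

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class FilesController : Controller
{
    // Invented storage location; the real service would read this from configuration.
    private readonly string _root = Path.Combine(AppContext.BaseDirectory, "files");

    [HttpGet("{name}")]
    public IActionResult Get(string name)
    {
        var path = Path.Combine(_root, Path.GetFileName(name));
        if (!System.IO.File.Exists(path))
            return NotFound();
        return File(System.IO.File.OpenRead(path), "application/octet-stream");
    }

    [HttpPost("{name}")]
    public async Task<IActionResult> Post(string name)
    {
        Directory.CreateDirectory(_root);
        var path = Path.Combine(_root, Path.GetFileName(name));
        using (var stream = System.IO.File.Create(path))
        {
            // The client streams the file content as the request body.
            await Request.Body.CopyToAsync(stream);
        }
        return Ok();
    }
}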

That said, it was something that no one on the team could have predicted. Live and learn!

One thing I believe in is constant change, and constant learning, ideally one thing per day. Sometimes it’s learning how to cook the best eggs benedict ever for breakfast, or sometimes it’s helping out a friend with a special request.

Today I was asked by a colleague if I could help extract data from a web site. As an architect, the first thing I look for in the “code” is a clean separation between presentation and data. Obviously, I did not find that, which made me realise just how bad frameworks that render HTML mixed with data are. Why can’t everything follow MVVM with some sort of binding?

Anyways, we needed a solution, and what we whipped up was screen scraping.

My first attempt was to write a small HTML page that loads jQuery, does an AJAX call to hit the webpage we needed data from, and then extracts it from its DOM… It was easy to write, but I was met with an error: CORS headers not found for file://dlg/extractData.html. GRRRRR

New strategy!

I opened Chrome, did a search for screen scraping extensions, and behold, I found this.

An extension that would allow me to navigate to any page, look at how it is built, understand its usage of CSS selectors, and voilà. Any page that reuses the same CSS selector to represent repeating data (as in a list) can be extracted to a JSON or CSV file.

Well, thanks DL for getting me to learn something new today!

While working on the backend for To-Do.Studio, we ran into a scenario where we needed to test on a developer’s machine with HTTPS (SSL) enabled.

Note – we use Kestrel for testing and not IIS Express.

The first thing we did was to try to add an https URL in the launchSettings.json file. That didn’t work 🙂

What we found was that we had to configure Kestrel manually and tell it which certificate to use. The code in our Program class looks like this:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseApplicationInsights()
        .UseAzureAppServices()
#if DEBUG
        .UseKestrel(options => {
            // Plain HTTP endpoint (IPAddress comes from System.Net).
            options.Listen(IPAddress.Loopback, 55172);
            // HTTPS endpoint backed by the self-signed certificate created below.
            options.Listen(IPAddress.Loopback, 55000, listenOptions =>
            {
                listenOptions.UseHttps("localhostCertificate.pfx", "password");
            });
        })
#endif
        .UseStartup<Startup>()
        .Build();

But how do we get a certificate? There are various ways, but I didn’t feel like digging out my Win32 SDK as some instructions on the web suggest… so I decided to use my Ubuntu WSL…

Two commands:

erik@ErikLAPTOP:~$ openssl req -x509 -days 10000 -newkey rsa:2048 -keyout cert.pem -out cert.pem
erik@ErikLAPTOP:~$ openssl pkcs12 -export -in cert.pem -inkey cert.pem -out cert.pfx

and I had a good-looking self-made certificate. The hardest part was copying the cert.pfx file to a Windows directory so I could use it in my code.
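For reference, the copy is just a matter of going through the /mnt/c mount (the destination path below is an assumption; use your own project folder, and note the file gets renamed to match the code above):

erik@ErikLAPTOP:~$ cp cert.pfx /mnt/c/src/ToDoStudio.Server/localhostCertificate.pfx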

Voilà! After modifying my launchSettings.json, I could test in either http or https mode!

"ToDoStudio.Server_http": {
 "commandName": "Project",
 "launchBrowser": true,
 "environmentVariables": {
 "ASPNETCORE_ENVIRONMENT": "Development"
 },
 "applicationUrl": "http://localhost:55172/"
 },
 "ToDoStudio.Server_https": {
 "commandName": "Project",
 "launchBrowser": true,
 "environmentVariables": {
 "ASPNETCORE_ENVIRONMENT": "Development"
 },
 "applicationUrl": "https://localhost:55000/"
 }
I have been debugging software for the last 20 years, and although things have changed, I realize I took it for granted and never really thought about it.

Basics

There were two debugging techniques I was introduced to when I started:
  1. Attach a debugger to a program and hit breakpoints. When you hit a breakpoint, the debugger can show you your source code, because the symbols contain a mapping from the assembly code to the source code. At a breakpoint you can inspect variables and change their values. You can also control the flow of execution of your program by stepping through the next instructions and even changing what the next instruction is.
  2. This technique is a bit more brute force and consists of using logging to get insight into code that is hard to debug with breakpoints. Although less efficient than breakpoints, I have done this soooo often because it also lets me debug in environments like production, where I might not be able to use breakpoints.

Fast forward a few years and we now have Visual Studio 2017. Both techniques above still work, but with a few new features. For example, for logging we don’t have to use the console; we have access to Trace and Debug, both with listeners to send output wherever you want. Even better are TraceSources, which are specialized logging objects for individual modules. Always making things easier, VS also shows us a few graphs about memory and threads.
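As a small illustration (a sketch only; the source name, switch level and event ids are arbitrary), a module-level TraceSource can be used like this:

using System.Diagnostics;

public static class OrdersTracing
{
    // One TraceSource per module; listeners and switch levels can also be configured in app.config.
    public static readonly TraceSource Source =
        new TraceSource("ToDoStudio.Orders", SourceLevels.Information);
}

// Elsewhere in the module:
//   OrdersTracing.Source.TraceEvent(TraceEventType.Information, 1001, "Order {0} accepted", orderId);
//   OrdersTracing.Source.TraceEvent(TraceEventType.Error, 1002, "Payment gateway timeout");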

To complement the advances in the platform, there are a bunch of other logging frameworks such as NLog, log4net, Serilog… One of the advances I love about these frameworks is the concept of structured logging: imagine logging with DTOs. While we are on the subject, there are even logging frameworks as a service, all cloud based!
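To make the structured-logging idea concrete, here is a minimal sketch (it assumes the Serilog and Serilog.Sinks.Console packages; the order DTO is invented for illustration):

using Serilog;

class StructuredLoggingDemo
{
    static void Main()
    {
        var log = new LoggerConfiguration()
            .WriteTo.Console()
            .CreateLogger();

        // The {@Order} placeholder captures the whole object as structured data,
        // instead of flattening it into a string.
        var order = new { Id = 42, Total = 19.99m, Currency = "CAD" };
        log.Information("Processed order {@Order} in {Elapsed} ms", order, 87);
    }
}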
There are also dump files, which can be created at breakpoints in your code and are basically a memory snapshot of your process. These can be inspected with WinDBG or Visual Studio.
It all feels much more modern, but everything we have seen so far is fundamentally the same thing. Things are getting much better…

Modern Debugging

One of the bigger breakthroughs in debugging we got a few years ago is called edit and continue. You can literally change the code of a program as it runs… Quite interesting, although I can’t say I have used this mechanism often. This requires Visual Studio.
The second one, which has served me well, is IntelliTrace. This is basically a configuration-based approach to automatic logging: as the debugger hits different places (breakpoints) in the code, events are generated with some contextual information. For example, File.Delete() will generate an IntelliTrace event with the text “File asdf.txt deleted”. This is great, as all major components of .NET are instrumented with this technology. IntelliTrace requires Visual Studio, but using the offline IntelliTrace collector you can record the execution of your code and analyze everything in Visual Studio afterwards.
The newest feature, currently in preview, is called Time Travel Debugging, which basically allows you to execute your code… in reverse. You can move forward through your code’s execution as well as backwards; just look at the ribbon in WinDBG Preview (available in the Windows Store).

I haven’t had the chance to play with this too much, but it looks very fun, as it will allow you to go back in time from a breakpoint within a loop… The recorded traces have to be viewed with WinDBG.

Another new feature which I think I will love is the concept of snappoints and logpoints. Imagine the ability to attach to a running process, but instead of pausing the process when you hit a breakpoint, a myriad of information gets recorded every time a snappoint is passed. You can then inspect the variables and stack trace of every one of those hits without impacting the process. Logpoints are dynamic logging: pass this instruction and add this event to the log. These are great advancements in the art of debugging. This feature requires Visual Studio and Azure-hosted sites.

I hope you are excited about the future of debugging!

One of the projects I was working on involves taking Office 365 to the next level: taking the tools that Microsoft gives us and bringing them even further.

Some of the things we do involve extending SharePoint, Excel, Word and Office with add-ons. Works great. But where do we get all the nice information we show the user?

This is where the Microsoft Graph comes in (http://graph.microsoft.com). It is basically an OData feed that gives you access to a variety of data that represents you. Naturally, you need to send a bearer token to get access, so authenticate first!
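For example, once you have an access token (how you acquire it depends on your Azure AD setup; the token parameter below is just a placeholder), a call to the Graph is a plain HTTP GET with a bearer header. Here is a minimal sketch:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class GraphDemo
{
    static async Task<string> WhoAmIAsync(string accessToken)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Same query as the "Who am I?" example below, done from code.
            return await client.GetStringAsync("https://graph.microsoft.com/v1.0/me");
        }
    }
}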

You can find the entire “metadata” on the service itself, all self-describing! There is also some documentation.

You can even try the Graph Explorer, which is a web tool to explore the graph in an interactive way.

Here are a few examples for me at my own company:

Who am I?

GET https://graph.microsoft.com/v1.0/me
{
 "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity",
 "id": "",
 "businessPhones": [],
 "displayName": "Erik Renaud",
 "givenName": "Erik",
 "jobTitle": "Principal Architect",
 "mail": "erik.renaud@modelon.net",
 "mobilePhone": "+1 514...",
 "officeLocation": "+1.514...",
 "preferredLanguage": "en-CA",
 "surname": "Renaud",
 "userPrincipalName": "erik.renaud@modelon.net"
}

What about my picture?

GET https://graph.microsoft.com/v1.0/me/photo/$value

 

This is a goldmine of information and makes any business tool soo much more powerful.

Add an email, find recently used files, show how many unread items there are in Teams… It’s all possible!

 

 

For one of the projects I manage, I have two teams: one implements web services for the other to consume. In production and integration testing, things tend to go well. But when we are faced with debugging or testing the side that consumes the web services, we need something more.

I love to discover new things, or new ways of doing things. Enter EasyMock (https://github.com/CyberAgent/node-easymock). It is a small Node.js web server that returns any file on disk, with extra options in its config.json file.

You install it with:

$ npm install -g easymock

And you start it within your working directory with:

$ easymock
Server running on http://localhost:3000
Listening on port 3000 and 3001
Documentation at: http://localhost:3000/_documentation/
Logs at: http://localhost:3000/_logs/

If you want to mock a CurrentUserCount REST web service located at /api/CurrentUserCount, all you need to do is create an “api” directory with a file named “CurrentUserCount_get.json” in it. Here is the result:
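For illustration, the contents of that file can be as simple as the payload you want returned (the shape below is invented; EasyMock simply serves whatever JSON you put in the file):

{
  "currentUserCount": 42
}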

There is even a handy automatically created documentation page:

Happy mocking!

So I was working on DayTickler, and we suddenly decided to start using Xamarin controls from Telerik (Progress) and Syncfusion. Traditionally, that meant downloading the installer and then referencing the proper assemblies from the local drive. Another workflow was to copy the assemblies to the project directory in some sort of “lib” folder so that they could be used in a CI environment.

Fast forward to 2016 and we have something called NuGet, so I tried using it to achieve the same objective. The first step was to add the two NuGet feeds in Visual Studio’s NuGet configuration screen. Easy enough, and from there I was able to provide my Telerik credentials (their feed is private) and install the packages. Yay!

But when you commit to Visual Studio Online, there is no way to build, because the build now fails on package restore.

The solution is to:

1 – Create a nuget.config file in the solution so that the build server knows where to download NuGet packages from (a rough example is shown after this list):

2 – Open your build definition and, in the “Feeds and authentication” section, point it to your nuget.config file:

3 – Press “+” next to “Credentials for feeds outside the account/collection”, and add the appropriate details.
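A nuget.config for step 1 could look roughly like this (a sketch only: the vendor feed URLs below are placeholders; use the ones from the Telerik and Syncfusion documentation):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="Telerik" value="https://nuget.example.com/telerik/nuget" />
    <add key="Syncfusion" value="https://nuget.example.com/syncfusion/nuget" />
  </packageSources>
</configuration>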

 

That’s it! It worked like a charm, and there is a “Verify connection” button to ensure all is good.

I just helped put a rather big system into production. There are always a lot of things to do, and we recently turned on availability monitoring with Azure Application Insights.

Since we knew we were going to use this feature, we had added a “ping” controller to our API tier. This controller has a single method called “IsAlive”. It returns a 200 if it can access the database and a 400 if not.
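Ours is wired to our own data layer, but a stripped-down sketch of that kind of controller (the PingController name and the IDatabaseHealthCheck abstraction are stand-ins invented for illustration) looks like this:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public interface IDatabaseHealthCheck
{
    // Stand-in for whatever cheap query proves the database is reachable.
    Task<bool> CanConnectAsync();
}

[Route("api/[controller]")]
public class PingController : Controller
{
    private readonly IDatabaseHealthCheck _database;

    public PingController(IDatabaseHealthCheck database)
    {
        _database = database;
    }

    [HttpGet("IsAlive")]
    public async Task<IActionResult> IsAlive()
    {
        // 200 when the database answers, 400 otherwise (matching the behaviour described above).
        return await _database.CanConnectAsync() ? Ok() : (IActionResult)BadRequest();
    }
}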

Five minutes after we turned on monitoring, we were able to visualise the latency from 5 spots on the internet that are likely to have clients using the system. If something fails, alerts tell us when the system goes down and when it comes back online.

Doing this a few years ago would have required specialised tools; now it is a few clicks away on all of the major platforms. Use it: it takes five minutes to ensure you are warned if a problem arises.