All posts in .net

Git, Roslyn and Nuget Magic

Categories: .net, Architecture

I was writing some code in To-Do.Studio so that when people demo their code and features, I have a better idea of what I am seeing – more specifically, which version number I am looking at (to understand branches and commits)…

My initial thought was: on startup, I would launch git, grab some information about the branch and commit hash, incorporate that into the configuration system and then show it in the footer…

Well, that sounded complicated, and it probably was… after a few seconds of searching, I found a great NuGet package called GitInfo that generates a partial class at build time which can be referenced in code. My startup code now looks like this:

public string GitVersion
{
    // ThisAssembly.Git is the partial class that GitInfo generates at build time
    get => $"V{ThisAssembly.Git.BaseVersion.Major}.{ThisAssembly.Git.BaseVersion.Minor}.{ThisAssembly.Git.BaseVersion.Patch} " +
           $"T={ThisAssembly.Git.Tag} {ThisAssembly.Git.Branch}[{ThisAssembly.Git.Commit}] " +
           (ThisAssembly.Git.IsDirty ? "Dirty" : "");
}
...
Configure()
{
    ...
    Configuration["GitVersion"] = GitVersion;
    ...
}
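
And to actually display it, a small sketch of what the footer in a Razor layout could look like (the markup below is illustrative, not the actual To-Do.Studio footer):

@inject Microsoft.Extensions.Configuration.IConfiguration Configuration

<footer>
    @* shows the value written into the configuration system in Configure() *@
    <span>@Configuration["GitVersion"]</span>
</footer>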

Microsoft .net Orleans

Categories: .net, Architecture, Azure

The Microsoft Orleans project (http://dotnet.github.io/orleans/) is a .NET framework for building systems based on the Actor Model paradigm.

A typical transactional system receives a command and executes it. Executing it usually means fetching data from a database, modifying it and then saving it.

The reason I was interested in Orleans for To-Do.Studio is that each action in the app generates a command – there is no save button and no transactional boundary. This naturally creates a chatty interface, where many small commands are sent to the server, which must, for each one: get the data, modify it and save it. Combine that with the fact that NoSQL components such as CosmosDB make you pay for reads, and what you have is an expensive bottleneck.

The Actor Model in Orleans would have fixed this for me by doing the following: the first time a command executes that references a particular domain object, a grain is activated and the data is read. Each subsequent command then modifies the data held in the grain. Eventually, when no one is using the grain anymore, the system deactivates it, causing it to save itself.

This would have the added benefit of minimizing our data access costs and offering speedups similar to adding a caching layer.
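
To make this concrete, here is a minimal sketch of what such a grain could look like (the interface, state class and method names are hypothetical, not To-Do.Studio code, and a storage provider still has to be configured):

using System.Threading.Tasks;
using Orleans;

// hypothetical persisted state for a to-do list
public class TodoListState
{
    public string Title { get; set; }
}

public interface ITodoListGrain : IGrainWithGuidKey
{
    Task RenameAsync(string newTitle);
}

public class TodoListGrain : Grain<TodoListState>, ITodoListGrain
{
    // each small command only mutates the in-memory state, no database round-trip
    public Task RenameAsync(string newTitle)
    {
        State.Title = newTitle;
        return Task.CompletedTask;
    }

    // the accumulated changes are persisted once, when the idle grain is deactivated
    public override Task OnDeactivateAsync()
    {
        return WriteStateAsync();
    }
}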

As with all magic though, we lose some control:

  1. First, as the system scales to thousands of users and tens of thousands of grains, we have to think about system scalability (Orleans doesn't yet deactivate resources based on memory pressure, only on an “unused delay”).
  2. The deployment model is not fully integrated with Azure.
    1. Hosting is possible within a VM – not PaaS enough for me
    2. Hosting is possible within worker roles – sounds interesting but not exactly what I want
    3. Hosting is possible within Service Fabric (which has its own Actor Model implementation, from the Azure team rather than the .NET team) – it doesn’t feel seamless yet, but this would be my ideal hosting option
    4. Host in a Linux container scaled automatically by Kubernetes – to be honest, I am not a container fan; I do see their advantages for certain workloads, but they feel like the PaaS of infrastructure people.

Anyway, my best option would be hosting on top of Service Fabric. It would need to be 100% integrated with the .NET Core stack though.

I would also recommend a dedicated PaaS offering for Orleans (kind of like what happened with SignalR).

Finally, Orleans should be upgraded to support memory-pressure-based deactivations, as well as some sort of queuing mechanism when grains start swapping, to keep the amount of work executing in parallel under control.

Azure AppService to FTP (not in Azure)

Categories: .net, Architecture, Azure

Oy, I just spent a crazy week learning that:

It is impossible for an App Service application to connect to an FTP server on the internet in passive mode. The reason is that each Azure App Service is assigned a pool of IP addresses for outgoing traffic, and Azure is free to choose a new outgoing IP for each connection (the NATting stuff), while FTP expects the data connection to come from the same IP as the control connection.

That said, even when using secure FTP, the handshake is negotiated on the connection that makes up the control channel. When the second connection is opened to transfer data, the FTP server sees it coming from a potentially different IP address, which results in a 425-type error.

It took us three days to diagnose, one evening to write a basic HTTP web API that allows posting and getting files, a few hours to install it all, and a full day to rewrite the code that worked with FTP…
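
For the curious, the replacement API doesn't need to be much more than this kind of sketch (the controller, route and folder names here are illustrative, not the actual code we wrote):

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class FilesController : Controller
{
    private const string StorageRoot = @"D:\home\data\filedrop";   // illustrative drop folder

    // POST api/files/{name} with the file in the multipart body
    [HttpPost("{name}")]
    public async Task<IActionResult> Upload(string name, IFormFile file)
    {
        using (var target = System.IO.File.Create(Path.Combine(StorageRoot, name)))
        {
            await file.CopyToAsync(target);
        }
        return Ok();
    }

    // GET api/files/{name} streams the stored file back
    [HttpGet("{name}")]
    public IActionResult Download(string name)
    {
        return PhysicalFile(Path.Combine(StorageRoot, name), "application/octet-stream");
    }
}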

That said, it was something no one on the team could have predicted – live and learn!

While working on the backend for To-Do.Studio, we ran into a scenario where we needed to test on a developer’s machine with https (ssl) enabled.

note – we use Kestrel for testing and not IIS Express.

The first thing we did was to try to add an https url in the LaunchSettings.json file. That didn’t work 🙂

What we found was that we had to configure Kestrel manually and tell it which certificate to use. The code in our Program class looks like this:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseApplicationInsights()
        .UseAzureAppServices()
#if DEBUG
        .UseKestrel(options =>
        {
            options.Listen(IPAddress.Loopback, 55172);            // plain http endpoint
            options.Listen(IPAddress.Loopback, 55000, listenOptions =>
            {
                // https endpoint using our self-signed certificate
                listenOptions.UseHttps("localhostCertificate.pfx", "password");
            });
        })
#endif
        .UseStartup<Startup>()
        .Build();

But how do we get a certificate? There are various ways, but I didn’t feel like digging out the Win32 SDK as some instructions on the web suggest… so I decided to use my Ubuntu WSL…

Two commands:

erik@ErikLAPTOP:~$ openssl req -x509 -days 10000 -newkey rsa:2048 -keyout cert.pem -out cert.pem
erik@ErikLAPTOP:~$ openssl pkcs12 -export -in cert.pem -inkey cert.pem -out cert.pfx

and I had a good-looking self-made certificate. The hardest part was copying this cert.pfx file to a Windows directory so I could use it in my code.

Voilà! After modifying my LaunchSettings.json, I could test in either http or https mode!

"ToDoStudio.Server_http": {
 "commandName": "Project",
 "launchBrowser": true,
 "environmentVariables": {
 "ASPNETCORE_ENVIRONMENT": "Development"
 },
 "applicationUrl": "http://localhost:55172/"
 },
 "ToDoStudio.Server_https": {
 "commandName": "Project",
 "launchBrowser": true,
 "environmentVariables": {
 "ASPNETCORE_ENVIRONMENT": "Development"
 },
 "applicationUrl": "https://localhost:55000/"
 }

The art of Debugging

Categories: .net, Architecture

I have been debugging software for the last 20 years, and although things have changed, I realize I have taken it for granted and never really thought about it.

Basics

There were two debugging techniques I was introduced to when I started.
  1. Attach a debugger to a program and hit breakpoints. When you hit a breakpoint, the debugger can show you your source code, because the symbols contain a mapping from the assembly code to the source code. At a breakpoint you can inspect variables and change their values. You can also control the flow of execution of your program by stepping through the next instructions, and even change what the next instruction is.
  2. The second technique is a bit more brute force and consists of using logging to get insight into code that is hard to debug with breakpoints. Although less efficient than breakpoints, I have done this soooo often, because it also lets me debug in situations like production, where I might not be able to use breakpoints.

Step forward a few years and we now have Visual Studio 2017. Both techniques above still work, but with a few new features. For example, for logging we don’t have to use the console but have access to Trace and Debug, both with listeners to send output wherever you want. Even better are TraceSources, which are specialized logging objects, one per module. Always making things easier, VS also shows us a few graphs about memory and threads.
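
A per-module TraceSource sketch might look like this (the module name and event id are made up; the attached listeners decide where the output ends up):

using System.Diagnostics;

public static class SyncModuleTracing
{
    // one TraceSource per module; listeners and switches can also be set in app.config
    public static readonly TraceSource Source =
        new TraceSource("SyncModule", SourceLevels.Information);
}

public class SyncService
{
    public void SyncAll(int itemCount)
    {
        SyncModuleTracing.Source.TraceEvent(
            TraceEventType.Information, 2001, "Synchronizing {0} items", itemCount);
        // ... actual work ...
    }
}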

To complement the advances in the platform, there are a bunch of other logging frameworks such as NLog, log4net and Serilog… One of the advances I love about these frameworks is the concept of structured logging – imagine logging with DTOs. While we are on the subject, there are even cloud-based logging-frameworks-as-a-service!
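
As a small taste of structured logging, here is a sketch using Serilog (it assumes the Serilog and Serilog.Sinks.Console packages; the Order DTO is made up):

using Serilog;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public static class Program
{
    public static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .CreateLogger();

        var order = new Order { Id = 42, Total = 19.99m };

        // {@Order} destructures the DTO into named, queryable properties
        Log.Information("Processed {@Order} in {Elapsed} ms", order, 37);

        Log.CloseAndFlush();
    }
}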
There are also dump files, which can be created at breakpoints in your code and are basically a memory snapshot of your process. These can be inspected with WinDBG or Visual Studio.
It all feels much more modern, but everything we have seen so far is fundamentally the same thing. Things are getting much better…

Modern Debugging

One of the bigger breakthroughs in debugging from a few years ago is called Edit and Continue. You can literally change the code of running programs as they run… Quite interesting, although I can’t say I have used this mechanism often. This requires Visual Studio.
The second one, which has served me well, is IntelliTrace. This is basically a configuration-based approach to automatic logging – as the debugger hits different places (breakpoints) in the code, events are generated with some contextual information. For example, File.Delete() will generate an IntelliTrace event with the text “File asdf.txt deleted”. This is great, as all major components of .NET are instrumented with this technology. IntelliTrace requires Visual Studio, but using the offline IntelliTrace Collector you can record the execution of your code and analyze everything in Visual Studio afterwards.
The newest feature, currently in preview, is called Time Travel Debugging, which basically allows you to execute your code… in reverse. You can move forward through your code’s execution as well as backwards – just look at the ribbon in WinDBG Preview (available in the Windows Store).

I haven’t had the chance to play with this much, but it looks very fun, as it will allow you to go back in time from a breakpoint within a loop… The recorded traces have to be viewed with WinDBG.

Another new feature, which I think I will love, is the concept of snappoints and logpoints. Imagine the ability to attach to a running process, but instead of pausing the process when you hit a breakpoint, a myriad of information gets recorded when a snappoint is passed. You can then inspect the variables and stack trace of every one of those hits without impacting the process. Logpoints are dynamic logging: pass this instruction and add this event to the log. These are great advancements in the art of debugging. This feature requires Visual Studio and Azure-hosted sites.

I hope you are excited about the future of debugging !

Mocking web services

Categories: .net, Architecture, web

For one of the projects I manage, I have two teams; in the end, one implements web services for the other to consume. In production and integration testing, things tend to go well. But when we are debugging or testing the side that consumes the web services, we need something more.

I love to discover new things or new ways of doing things – enter EasyMock (https://github.com/CyberAgent/node-easymock). It is a small nodejs web server that returns any file on disk, with extra options in its config.json file.

You install it with:

$ npm install -g easymock

And you start it within your working directory with:

$ easymock
Server running on http://localhost:3000
Listening on port 3000 and 3001
Documentation at: http://localhost:3000/_documentation/
Logs at: http://localhost:3000/_logs/

If you wanted to mock a CurrentUserCount REST web service located at /api/CurrentUserCount, all you need to do is create an “api” directory with a file named “CurrentUserCount_get.json” inside it.
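
As a purely hypothetical example, that file could contain the JSON you want the mock to return; EasyMock then serves this body for GET requests to /api/CurrentUserCount:

{
    "currentUserCount": 42
}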

There is even a handy, automatically created documentation page at http://localhost:3000/_documentation/.

Happy mocking !

Custom nuget feeds with Visual Studio Online Build

Categories: .net, Architecture

So I was working on DayTickler, and we suddenly decided to start using Xamarin controls from Telerik (Progress) and Syncfusion. Traditionally, that meant downloading an installer and then referencing the proper assemblies from the local drive. Another workflow was to copy the assemblies into the project directory in some sort of “lib” folder so that they could be used in a CI environment.

Fast forward to 2016 and we have something called NuGet, so I tried using it to achieve the same objective. The first step was to add the two NuGet feeds in Visual Studio’s NuGet configuration screen. Easy enough, and from there I was able to provide my Telerik credentials (their feed is private) and install the packages. Yay!

But when you commit to Visual Studio Online, the build fails on package restore, because the hosted build agent knows nothing about those private feeds.

The solution is to:

1 – Create a nuget.config file in the solution so that the build server knows which feeds to download packages from (a sketch of that file is shown after this list).

2 – Open your build definition and, in the “Feeds and authentication” section, point to your nuget.config file.

3 – Press “+” next to “Credentials for feeds outside the account/collection” and add the appropriate details.
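
Here is a minimal sketch of what such a nuget.config could look like (the vendor feed URLs below are illustrative – use the ones from Telerik’s and Syncfusion’s documentation):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- the public feed plus the two private vendor feeds (URLs illustrative) -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="Telerik" value="https://nuget.telerik.com/nuget" />
    <add key="Syncfusion" value="https://nuget.syncfusion.com/nuget_xamarin/" />
  </packageSources>
</configuration>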

 

That’s it! It worked like a charm, and there is a “Verify connection” button to ensure all is good.

For enterprises, creating a web site that can link to files stored locally to work with “desktop apps”

Categories: .net

One of my clients wanted a way to interact with local files (open folders, launch the associated application) from a web page. That way, they could build a home page for each user that links to “local files” the same way as if they were on the internet.

This used to work in IE (not Chrome or Edge) through the file protocol. You could effectively do this:

<a href="file://t:\data">Data directory</a>
<a href="file://t:\data\file.txt">text file</a>

And the web page (in IE) would render two links, which would open either the folder in Explorer or the file in Notepad.

How would you fix this problem in Chrome and Edge?

What we prototyped is a UWP app that registers two URL protocols. You then simply use these protocols in your web page, and the UWP app will handle the request and do something, effectively bridging the gap between web and desktop…

Here is the updated HTML:

<a href=”myAppOpenFolder://t:\data”>Data directory</a>
<a href=”myAppOpenFile://t:\data\file.txt”>text file</a>

Once that is done, you need to create a new UWP app and register your new protocol declarations in the manifest. It should look like this:

<Extensions>
 <uap:Extension Category="windows.protocol">
 <uap:Protocol Name="myAppOpenFolder">
 <uap:DisplayName>myAppOpenFolder</uap:DisplayName>
 </uap:Protocol>
 </uap:Extension>
<uap:Extension Category="windows.protocol">
 <uap:Protocol Name="myAppOpenFile">
 <uap:DisplayName>myAppOpenFile</uap:DisplayName>
 </uap:Protocol>
 </uap:Extension>
 </Extensions>
 </Application>
 </Applications>

Once the manifest is done, you simply handle the activation in your App.xaml.cs. Something along these lines would work (the path extraction from the URI is simplified, and the app needs permission to access the paths it is given):

protected override async void OnActivated(IActivatedEventArgs e)
{
    if (e.Kind == ActivationKind.Protocol)
    {
        var protocolArgs = e as ProtocolActivatedEventArgs;

        // e.g. myAppOpenFolder://t:\data  ->  t:\data  (simplified extraction)
        string path = protocolArgs.Uri.OriginalString
            .Substring(protocolArgs.Uri.Scheme.Length + 3)
            .Replace('/', '\\');

        if (string.Equals(protocolArgs.Uri.Scheme, "myAppOpenFolder", StringComparison.OrdinalIgnoreCase))
        {
            var folder = await Windows.Storage.StorageFolder.GetFolderFromPathAsync(path);
            await Windows.System.Launcher.LaunchFolderAsync(folder);
        }
        else if (string.Equals(protocolArgs.Uri.Scheme, "myAppOpenFile", StringComparison.OrdinalIgnoreCase))
        {
            var file = await Windows.Storage.StorageFile.GetFileFromPathAsync(path);
            await Windows.System.Launcher.LaunchFileAsync(file);
        }
    }

    base.OnActivated(e);
}