All posts in Azure

Azure DevOps Pipelines & Atlassian BitBucket

Categories: Architecture, Azure
Comments Off on Azure DevOps Pipelines & Atlassian BitBucket

Ever since Azure DevOps was created and Pipelines became an individual product, capable of building software housed elsewhere (say, Atlassian BitBucket Cloud), I’ve had the pleasure of using it on a few legacy codebases…

What did I learn? It is super easy to set up and use, and very reliable.

I only ran into one problem: since I was not an “admin” on the BitBucket account, the webhook could not be created and I was getting an error… As soon as Microsoft support helped me figure that out, everything worked fine.

Azure CosmosDB Pricing Intricacies

Categories: Azure
Comments Off on Azure CosmosDB Pricing Intricacies

The Azure CosmosDB system has the potential to be a great storage layer for your solution. It automatically scales to maintain performance by splitting the data into partitions. It can geo-replicate to minimize data-transfer latencies, and it offers multiple consistency models to suit your different needs.


In CosmosDB you pay for two things within a data center:

  1. The storage you use, which is a flat fee per gigabyte.
  2. The number of RUs (request units) you provision per second for performance. An RU is an arbitrary unit meaning roughly “the cost of reading a 1 KB document”.

If you replicate your dataset to another data center for redundancy or to reduce latency, these costs apply PER data center.

Now for the tricky part. When you provision performance, say 10,000 RU/sec, you are saying that you would like the entire dataset to be served at that performance level. Then comes a more complicated subject: partitions. A partition is basically a split of your container into multiple parts that hold different sections of your data. For example, if your partition key is a person’s family name, you might end up with partitions for last names starting with [A-G], [H-J], [K-Q] and [R-Z]. In this scenario there are four partitions, which must share the provisioned throughput equally, hence 2,500 RU/sec per partition. Note that globally the performance level is the same, but if, for a given period of time, only one partition is solicited, it will appear as if only 2,500 RU/sec were available.
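To make that arithmetic concrete, here is a trivial sketch (plain Python; the figures are simply the example values above, and the function name is mine):

```python
def ru_per_partition(provisioned_ru: int, partition_count: int) -> float:
    """Provisioned throughput is divided equally across physical partitions."""
    return provisioned_ru / partition_count

# 10,000 RU/sec spread over the four example partitions [A-G], [H-J], [K-Q], [R-Z]
print(ru_per_partition(10_000, 4))  # → 2500.0
```

So a query hammering a single partition sees only a quarter of what you are paying for.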

For small datasets this might not be so dramatic, as splits might never occur… but for larger datasets that can grow, CORRECTLY choosing a partition key becomes of paramount importance…

With this understanding, you might think that CosmosDB is great because you pay the same amount per month for a given performance level. But two scenarios might arise…

1 – Partitions might exhaust their share of the total RU/sec faster than anticipated, due to a bad partition key or specific usage patterns. Your usage of CosmosDB must be resilient to the fact that requests are throttled once the RU/sec have been exhausted.

2 – There is a minimum number of RU/sec required to host a partition. If the container splits to the point where a partition would have less than 100 RU/sec (the current value), RU/sec will be added to your bill in order to guarantee that minimum per partition.
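That second point is easy to sketch as well (plain Python; the 100 RU/sec floor is the current value quoted above and may change, and the function name is mine):

```python
def billed_ru(provisioned_ru: int, partition_count: int,
              min_ru_per_partition: int = 100) -> int:
    """If the equal split per partition falls below the minimum,
    the bill is topped up so every partition gets the floor value."""
    floor_total = partition_count * min_ru_per_partition
    return max(provisioned_ru, floor_total)

# 500 RU/sec provisioned, but the container has split into 8 partitions:
print(billed_ru(500, 8))  # → 800, you are billed more than you provisioned
```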

Hope that clears a few things up!

Microsoft .net Orleans

Categories: .net, Architecture, Azure
Comments Off on Microsoft .net Orleans

The Microsoft Orleans project is a .NET framework for building systems on an Actor Model paradigm.

A typical transactional system receives a command and executes it. Executing it usually means fetching data from a database, modifying it and then saving it.

The reason I was interested in Orleans for To-Do.Studio is that each action in the app generates a command; there is no save button and no transactional boundary. This naturally creates a chatty interface, where many small commands are sent to the server, which must, for each one, get data, modify it and save it. Combine that with the fact that NoSQL components such as CosmosDB make you pay for reads, and what you have is an expensive bottleneck.

The Actor Model in Orleans would have fixed this for me as follows. The first time a command referencing a particular domain object executes, a grain is activated and the data is read. Each subsequent command modifies the data in the grain. Eventually, when no one uses the grain anymore, the system deactivates it, causing it to save itself.

This would have the added benefit of minimizing data-access costs while offering speedups similar to adding a caching layer.
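The lifecycle described above can be sketched as a tiny write-behind actor. This is a conceptual illustration only, with invented class and method names, not the Orleans API:

```python
class Grain:
    """Conceptual virtual actor: load state on activation,
    mutate it in memory, persist once on deactivation."""

    def __init__(self, key, store):
        self.key, self.store = key, store
        self.state = store.get(key, {})  # activation: a single read
        self.dirty = False

    def handle(self, field, value):
        # every command mutates in-memory state; no storage round-trip
        self.state[field] = value
        self.dirty = True

    def deactivate(self):
        # idle timeout elapsed: write the state back once
        if self.dirty:
            self.store[self.key] = self.state
            self.dirty = False

store = {}                          # stand-in for CosmosDB
g = Grain("todo-list-42", store)    # activation reads once
g.handle("title", "Groceries")      # two commands, zero storage calls
g.handle("done", False)
g.deactivate()                      # one write for both commands
print(store)
```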

As with all magic though, we lose some control:

  1. First, as the system scales to thousands of users and tens of thousands of grains, we have to think about system scalability (Orleans doesn’t yet deactivate resources based on memory pressure, only after an “unused delay”).
  2. The deployment model is not fully integrated with Azure.
    1. Hosting is possible within a VM – not PaaS enough for me
    2. Hosting is possible within worker roles – sounds interesting but not exactly what I want
    3. Hosting is possible within Service Fabric (which is another implementation of an Actor Model, from the Azure team rather than the .NET team) – it doesn’t feel seamless, though this would be my ideal hosting option
    4. Host in a Linux container scaled automatically by Kubernetes – to be honest, I am not a container fan; I do see their advantages for certain workloads, but it feels like the PaaS of infrastructure people.

Anyway, my best option would be hosting on top of Service Fabric. It would need to be 100% integrated with .NET Core though.

I could also recommend a dedicated PaaS offering for Orleans (kind of like what happened with SignalR).

Finally, Orleans should be upgraded to support memory-pressure-based deactivation, as well as some sort of queuing mechanism when grains start swapping, to keep the amount of work executing in parallel rationalized.

Web Push Notifications

Categories: Architecture, Azure
Comments Off on Web Push Notifications

As time goes by, the web becomes more and more powerful. The technology has been around for a while, but with Chrome and Edge finally supporting PWAs (and native apps starting to become mobile web apps), I think it is time to embrace it. It is time to create web pages that can push notifications directly to the desktop even when the browser is closed.

This works right now on any device running Microsoft Edge, as well as on any device running Google Chrome (Android devices and desktops with Chrome).

Architecturally, the service worker part of your website (running in the background) creates a subscription with a push notification service provider; that “token” is then sent to your application server, which uses it to send you notifications.

What I didn’t grasp is that the push notification service provider is not just any server speaking a specific protocol; it is tied directly to the browser. For example, if the website is being viewed in Google Chrome, the ONLY push notification service provider is FCM (Firebase); for Mozilla Firefox it is Mozilla’s own push service, and for Microsoft Edge it is WNS (Windows Notification Services). In fact, these are the same technologies used by real native apps on those platforms.


  1. The API to subscribe and get some sort of token to enable push notifications is defined by internet standards, so it is safe to use in any browser on any website.
  2. Each browser implements those methods using its own push notification service provider.
  3. Sending a push notification differs depending on who the push notification service provider is.
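To illustrate points 2 and 3: the subscription endpoint the browser hands back points at that browser vendor’s own push service, so your application server can tell which service it must talk to just by looking at the URL. A small sketch (plain Python; the hostnames are the commonly observed ones and should be treated as assumptions, not an official list):

```python
from urllib.parse import urlparse

# typical push-service hostnames per browser vendor (assumed, not exhaustive)
PROVIDERS = {
    "fcm.googleapis.com": "FCM (Chrome)",
    "updates.push.services.mozilla.com": "Mozilla autopush (Firefox)",
    "notify.windows.com": "WNS (Edge)",
}

def provider_for(endpoint: str) -> str:
    """Guess which vendor push service a subscription endpoint belongs to."""
    host = urlparse(endpoint).hostname or ""
    for suffix, name in PROVIDERS.items():
        if host == suffix or host.endswith("." + suffix):
            return name
    return "unknown"

print(provider_for("https://fcm.googleapis.com/fcm/send/abc123"))
```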

Here is a link to a functional demo that works everywhere:

Office365 Two Factor Authentication

Categories: Azure
Comments Off on Office365 Two Factor Authentication

Two-factor authentication is important; we should all know that by now. For a long time, I’ve had it activated on my Microsoft account and Google account. Today I decided to turn it on for the Office365 tenant: I went into the admin tools and turned it on. Easy enough.

As expected, I had to remove the accounts from Outlook and Windows 10 and re-add them. But surprise: Outlook 2016 didn’t want to connect. It turns out a manual intervention was needed.

I am no IT admin and no PowerShell guru, but following the articles here saved my life:

By using the instructions in the second link, you end up with a local PowerShell module configured for Office365. Once you open it, these two commands make everything work:

Connect-EXOPSSession -UserPrincipalName <your-user-principal-name>
Set-OrganizationConfig -OAuth2ClientProfileEnabled $true

VSTS – Build & Release for To-Do.Studio’s web site

Categories: Architecture, Azure
Comments Off on VSTS – Build & Release for To-Do.Studio’s web site

So my startup, To-Do.Studio, is advancing, and we got rolling on our informational website.

The first thing was to create a “web site” project with Visual Studio and get it into Git.

As you can see, it’s just a bunch of HTML, CSS and such, with a small web.config so that we can host on IIS (we are using Azure App Service for this). The first abnormal thing, unlike a normal ASP.NET project, is that I do not have a CSPROJ describing my project; instead, I have a PUBLISHPROJ, and it gets ignored by Git. So how did we get here?

Since Visual Studio relies on MSBuild to “publish” projects, it needs some sort of project file; in this case, Visual Studio created the PUBLISHPROJ the first time I created the publish profile. This allows me to publish from the command line and more.

Although the file is Git-ignored by default, I had to add it to Git in order for the VSTS build system to be able to publish this project.

The other modification was to add to Git a special file, in the folder of the solution, called “after.ToDoStudio.StaticWebFront.sln.targets”. It turns out MSBuild looks for these files automatically and includes them in your build process, without the need to modify the solution file. This is what we put in there:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Deploy Website" AfterTargets="Build">
    <Message Text="Starting Website deployment" Importance="high" />
    <MSBuild Projects="$(MSBuildProjectDirectory)\ToDoStudio.StaticWebFront\website.publishproj"
             BuildInParallel="true" />
  </Target>
</Project>

What this does is ensure that after the Build target (which by default builds nothing, because this is a project with nothing to build), the system automatically invokes the publish action using the PUBLISHPROJ.

Now we were ready to build this thing within VSTS.

As you can see here, our build is pretty standard with defaults used everywhere:

The release pipeline is also pretty standard, nothing fancy here:


Podcasts – Mine and Others

Categories: Azure
Comments Off on Podcasts – Mine and Others

Let me be honest – I hadn’t listened to podcasts in a while, so I was pleasantly surprised when, while recording a podcast with Mario Cardinal and Guy Barrette on CosmosDB, Mario mentioned during the initial chitchat a great podcast he had started listening to, called After On.

Here is a link to the podcast I recorded. It is an introduction to CosmosDB, a service within Microsoft Azure that is a NoSQL database built from the ground up to scale with the web. It is the kind of technology that could be used to build things like Facebook or Twitter.

As for After On, this is a podcast I instantly became hooked on. The discussions are out of this world, on diverse subjects and very in-depth. Please go listen and let me know which episode is your favorite.

Azure AppService to FTP (not in Azure)

Categories: .net, Architecture, Azure
Comments Off on Azure AppService to FTP (not in Azure)

Oy, I just spent a crazy week learning this:

It is impossible for an App Service application to connect to an FTP server on the internet in passive mode. The reason is that each Azure App Service is assigned a pool of IP addresses for outgoing traffic, and Azure is free to choose a new outgoing IP for each connection (the NAT layer), while FTP expects the data connection to come from the same IP as the control connection.

To be precise, in passive-mode FTP the transfer is negotiated over the connection that forms the control channel. When the second connection is opened to transfer data, the FTP server receives it from a potentially different IP address, which results in a 425-type error.

It took us three days to diagnose, one evening to write a basic HTTP web API for posting and getting files, a few hours to install it all, and a full day to rewrite the code that worked with FTP…

That said, it was something no one on the team could have predicted. Live and learn!

SQL and extensibility

Categories: Azure
Comments Off on SQL and extensibility

Having a schema in SQL defeats extensibility – and there is no way I am having columns named “ExtensibleColumn1, ExtensibleColumn2”…

I have come to embrace storing JSON directly in the database; it’s quite easy, actually, as JSON is simply text…

The bigger problem, though, was how to extract specific data from this JSON, or filter on it… and here is the solution: OPENJSON!
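For example, assuming a table named Invoices with an nvarchar(max) column named ExtensionData holding the JSON (both names invented for illustration), OPENJSON and JSON_VALUE let you project and filter on JSON properties:

```sql
-- project a JSON property into a real column
SELECT i.InvoiceId, j.CustomerRegion
FROM Invoices AS i
CROSS APPLY OPENJSON(i.ExtensionData)
     WITH (CustomerRegion nvarchar(50) '$.customerRegion') AS j
WHERE j.CustomerRegion = 'QC';

-- or filter directly with JSON_VALUE
SELECT InvoiceId
FROM Invoices
WHERE JSON_VALUE(ExtensionData, '$.customerRegion') = 'QC';
```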


SQL Magic row to column concatenation

Categories: Azure
Comments Off on SQL Magic row to column concatenation

Imagine you have a very simple data model with two tables, Invoice and InvoiceTaxes.

When creating a report, I would like to see all invoices and the taxes applied to each, not as a 1-to-n relationship but rather as a string concatenation:


Invoice1, “TPS”
Invoice2, “TPS, TVQ”
Invoice3, “TVH”

I went around and found a bunch of different ways on StackOverflow, every single one looking super duper complicated…

Then I found something called STRING_AGG, and it’s available on Azure SQL right now!

select invoiceid,
  (select string_agg(taxname, ', ') from invoiceTaxes where invoiceTaxes.invoiceID = invoices.invoiceid) as Taxes
from invoices

Voilà !