All posts in Azure

I just helped put a rather big system in production. There are always a lot of things to do, and we recently turned on availability monitoring with Azure Application Insights.

Since we knew we were going to use this feature, we had added a “ping” controller to our API tier. This controller has a single method called “IsAlive” that returns a 200 if it can access the database and a 400 if it cannot.
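
Just to illustrate the idea, here is a minimal sketch of what such a controller could look like, assuming ASP.NET Web API; the CanReachDatabase helper is hypothetical and would typically just open a connection and run a trivial query.

```csharp
using System.Net;
using System.Web.Http;

public class PingController : ApiController
{
    // Endpoint polled by the availability test.
    [HttpGet]
    public IHttpActionResult IsAlive()
    {
        // CanReachDatabase() is a hypothetical helper that opens a connection
        // and runs a trivial query (e.g. SELECT 1) against the database.
        return CanReachDatabase()
            ? (IHttpActionResult)Ok("alive")
            : Content(HttpStatusCode.BadRequest, "database unreachable");
    }

    private static bool CanReachDatabase()
    {
        // Placeholder: open a SqlConnection / DbContext and catch failures.
        return true;
    }
}
```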

Five minutes after we turned on monitoring, we were able to visualise latency from five spots on the internet likely to have clients using the system. If something fails, we get alerts telling us when the system goes down and when it comes back online.

Doing this a few years ago would have required specialised tools; now it is a few clicks away on all of the major platforms. Use it: five minutes of work ensures you are warned if a problem arises.

I know, I know: how can I have memory leaks when there's a garbage collector in that fancy .NET, right?

This is the second project I have worked on where we found memory leaks in production. The first was a third-party library that didn't release a bitmap after generating reports with it; the second holds on to Entity Framework contexts after they have been used.

There is not much I can do about the first case because I don't have access to the objects and cannot release them, but the second one is directly in our code.

The thing is, as soon as an Entity Framework context goes out of scope, it should release its memory: it's all managed, and IDisposable just lets you release resources deterministically.

Now the fun part: with IoC and DI containers, we tend to receive our references and the container itself manages their lifetime. So if the container is badly configured, we depend on it to release objects (so they can be garbage collected) and we lose the ability to have them collected simply by going out of scope…

DI containers are fun and fancy, and a real plus when you are testing, but PLEASE ensure they are configured correctly!
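
For illustration, here is a minimal sketch of the difference, assuming Autofac (the same idea applies to any container); MyDbContext is a hypothetical EF context.

```csharp
using Autofac;

public static class CompositionRoot
{
    public static IContainer Configure()
    {
        var builder = new ContainerBuilder();

        // Bad: one context instance for the life of the application; it keeps
        // tracking entities and its memory is never released.
        // builder.RegisterType<MyDbContext>().SingleInstance();

        // Better: one context per lifetime scope (typically per web request),
        // disposed by the container when the scope ends.
        builder.RegisterType<MyDbContext>().InstancePerLifetimeScope();

        return builder.Build();
    }
}
```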

The first memory leak was in a system hosted on premises, with IT people who had no idea how to take a memory snapshot; that was hard.

Azure, on the other hand, has all of this built in. I was completely blown away by how many diagnostic tools are built into the platform. It's called “Diagnostics as a Service”. Go discover it!

When working with databases, we sometimes have to back them up and restore them in order to bring them to a test environment. When this happens on premises, there is a good chance the backup gets restored to a known machine.

With the cloud and outsourcing, the backup could be restored miles away on another continent.

Go read this: SQL Azure Data Masking

OK, the concept is easy enough to grasp, but it is based on which user you log in with. Most web applications use a generic user to connect to the database and implement security at the application level, not at the database layer.

Does anyone know how we can keep using application-level security, but pass a special token or something when we need to read the actual data?

Is there a way to apply the dynamic masking within the database at backup time, so that people who get a copy of it get a safe copy?

There are two use cases: the first is scrambling data; the second is not showing all of it (e.g. Er** instead of Erik). Do WHERE clauses apply to the real value or to the mask?

Great things can be done with this technology; I just have to start testing it out…

Well, one of my teams just went to production with a rather big project. The whole thing is hosted on Microsoft Azure.

Like all projects, you always learn a couple of things: there are good things, bad things, and things that could have been done better.

For this project, we chose Azure SQL Database to hold the data. An S0 instance gives us plenty of space, and since we actually spent time benchmarking the system, our SQL queries were optimised and we had caching where it counted. I thought we were in pretty good shape and that it would take a massive number of concurrent users to kill the system…

Turns out all you need is 60, because although an S0 can take a lot of parallel inserts, it can only handle 60 connections at a time. That is a real bummer.

The chances that we hit 60 connections at the same moment are still pretty slim, because connections are only open for the lifetime of a request. Still, I brought our system up to S1, which gives us 90 connections, just in case…

Another fun thing: we implemented the recommended SQL error retry logic in EF, so if we get denied because no connections are available, the system simply retries after a certain delay. That should prevent the code from failing, at the expense of a longer execution time.
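
As a rough sketch of what that looks like, assuming EF6's built-in connection resiliency (SqlAzureExecutionStrategy); the retry count and delay here are illustrative values, not necessarily the ones we used.

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 picks up any DbConfiguration class located in the same assembly
// as the DbContext.
public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        // Retry transient SQL Azure errors (throttling, dropped connections...)
        // up to 5 times with an exponentially increasing delay, capped at 30 s.
        SetExecutionStrategy(
            "System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(5, TimeSpan.FromSeconds(30)));
    }
}
```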

I thought I understood the whole DTU thing, but I never took the connection limit into account… Oh well, live and learn.

Refer to this page for information on these limits.

By now you've probably heard of something called ServerLess architecture. Let's get something straight: this is not about creating apps without servers, but rather about building apps where we don't think about servers as the deployment unit.

Non-ServerLess architecture (wow, that sounds weird) is our traditional way of building apps, where we compile something into a DLL (or something similar), then package a few of these together into a unit. Each unit has a bunch of responsibilities and usually depends on other units (a SQL database, a web service…). Each unit is then deployed on a server of some sort, and then we have to think about scaling these things correctly and maintaining connections between the different parts… sounds crazy if you ask me 🙂 When you scale, you end up scaling all of the functionality included in the unit being scaled.

ServerLess architectures focus on small pieces of functionality (think micro-services). Each function is deployed independently and probably depends on other functions. The thing is, each function is scalable on its own. One function that is used often might be scaled up, while another function that gets called once a day stays calm. Under the hood there are servers, but we just tend not to think about them too much. There are usually two kinds of functions: ones that are triggered (an HTTP call, an event…) and ones that are timed (on a schedule).

Azure Functions is one of many offerings (Amazon Lambda is another big player, and I love the folks at WebTask.IO) that allow you to deploy systems using this kind of architecture.

The last thing I really like about these systems is the extension concept they enable. For example, let's say a search aggregator requires an algorithm from each provider to verify whether a user can click on a link. Instead of coding each rule in the main system, each provider can easily host a “Function” that provides the answer, and the search aggregator calls them just in time…
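
To make that concrete, here is a minimal sketch of what such a provider-hosted function could look like, assuming the Azure Functions C# class-library model (v1); the function name, route and the "can click" rule are all hypothetical.

```csharp
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class CanClickFunction
{
    // HTTP-triggered function hosted by the provider; the search aggregator
    // calls it just in time to find out whether a link may be clicked.
    [FunctionName("CanClick")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "canclick/{linkId}")]
        HttpRequestMessage req,
        string linkId,
        TraceWriter log)
    {
        log.Info($"CanClick called for link {linkId}");

        // Hypothetical provider-specific rule.
        bool canClick = !string.IsNullOrEmpty(linkId);

        return req.CreateResponse(HttpStatusCode.OK, canClick);
    }
}
```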

The next big thing I do will use these concepts in one way or another. Watch out, servers out there, because I will start forgetting you exist!!!

I love queuing mechanisms: they allow you to decouple the input of a system from its output, basically so they can be processed at different speeds relative to each other. They also usually come with retry mechanisms, just because of the way they are built.

Service Bus is basically a FIFO queue. Things go in, and things come out in the order they went in. Consumers take an item from the queue, process it, and then it gets removed. Azure also lets you define “Topics” instead of “Queues” so that you can have multiple readers (the pub-sub concept) for a given item, but it's still the same basic process: read, process, remove.
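
A minimal sketch of that read / process / remove cycle, assuming the classic Microsoft.ServiceBus.Messaging client (the queue name, connection string and ProcessOrder are hypothetical):

```csharp
using Microsoft.ServiceBus.Messaging;

public class OrderQueueWorker
{
    public void ProcessNext(string connectionString)
    {
        // "orders" is a hypothetical queue name.
        var client = QueueClient.CreateFromConnectionString(connectionString, "orders");

        // Take the next item off the queue (FIFO).
        BrokeredMessage message = client.Receive();
        if (message == null) return; // nothing to process right now

        try
        {
            ProcessOrder(message.GetBody<string>());
            message.Complete();   // processing succeeded: remove it from the queue
        }
        catch
        {
            message.Abandon();    // put it back so it can be retried
        }
    }

    private void ProcessOrder(string body) { /* hypothetical business logic */ }
}
```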

Event Hub is a much nicer thing if you ask me, because it is based on the concept of a stream, and the removal of items from this stream is independent of consumers processing them. Basically, publishers insert items into the stream and consumers are free to read from it starting at any point. Each reader has to maintain a pointer indicating where it is in the stream, and is free to move this pointer forwards or backwards. This is great when you would like to replay items, for example because you flushed a database somewhere. The catch is that you have to manage the retention of the stream to keep it at a manageable size. With storage so cheap now, it might be easier for some systems to simply never delete items. If it's a stream for IoT devices with millions of items per day, then maybe only the last 48 hours are relevant, and readers that are not quick enough simply miss out on some data, which could be okay for some scenarios.
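
Here is a rough sketch of the pointer idea, assuming the classic Microsoft.ServiceBus.Messaging Event Hubs client; the hub name, partition and starting offset are hypothetical, and a real consumer would typically use EventProcessorHost and checkpoints instead.

```csharp
using System.Text;
using Microsoft.ServiceBus.Messaging;

public class StreamExample
{
    public void ReadFromOffset(string connectionString, string lastKnownOffset)
    {
        // "telemetry" is a hypothetical Event Hub name.
        var client = EventHubClient.CreateFromConnectionString(connectionString, "telemetry");

        // Publishers just append to the stream; nothing is removed when it is read.
        client.Send(new EventData(Encoding.UTF8.GetBytes("{\"temp\":21}")));

        // A consumer picks its own starting point in a partition and moves
        // its pointer (the offset) forward as it reads.
        var group = client.GetDefaultConsumerGroup();
        var receiver = group.CreateReceiver("0", lastKnownOffset);

        EventData item = receiver.Receive();
        if (item != null)
        {
            // Persist item.Offset somewhere so we can resume (or rewind) later.
            string newPointer = item.Offset;
        }
    }
}
```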

I am still new to this, so if I have made some oversimplifications or errors, please let me know.

I can't wait to build my first Azure Event Hub app that is not a prototype!

In Canada, as we develop certain kinds of solutions for our clients, we sometimes have to ensure that the data within our systems stays in Canada. I am certain other countries or organisations have similar laws.

It is therefore sometimes necessary to understand Azure regions, and how redundancy works within a given region.

For example, if I geo-replicate my blob storage, which has its primary location in the new Azure Canada East data center (Quebec City), the system will choose the Azure Canada Central data center (Toronto) as the secondary, because they are more than the required distance from each other.

Since we only have two data centers in Canada, the big question becomes: if Quebec City explodes and Toronto becomes the primary (for everything where it was the secondary), will a US-based data center become the new secondary?

The answer is no: in that scenario, no data would cross the border and break any laws. At the same time, we would lose geo-replication until a new secondary site could be brought online.

This link will help you answer these kinds of questions: http://azuredatacentermap.azurewebsites.net/

Azure What Where

Azure is a big system, with many service offerings and many geographically distributed data centers.

Sometimes it is necessary to understand which service is offered where, for example to ensure that we can keep our data files safely in Canada and still use blob storage as the technology.

Keep this link; it might come in handy: https://azure.microsoft.com/en-us/regions/#services