
Percentiles aggregation with Redis

We all know that we should monitor our applications pro-actively, and response time is a natural metric to watch when working on performance. Concerning timings, it's important to keep this golden rule in mind: "Average is bad!" Averages are used far too often in monitoring because they are easy and memory-efficient to compute (sum of values / number of samples). There are hundreds of posts and articles explaining why you should not rely on them (Anscombe's quartet, for example). Percentiles are much better, but less easy to compute, especially in a large-scale distributed system (big data, multiple sources, …).

Aggregating percentiles is a common problem and there are many ways to do it. I highly suggest reading this article from baselab: it is a must-read if you plan to work on this topic! One section of that article refers to "Buckets", and this is exactly the approach we will use here: "Another approach is to divide the domain into multiple buckets and for each value/occurrence within the bucket boundaries increment this bucket's count by one"


Increments? Do you know an appropriate key-value store for this purpose? Yes, Redis. My implementation stores all the buckets in a Redis hash, each bucket being a hash field: the field name is the bucket's upper bound and the value is the number of samples in that interval. On the client/emitter side we use HINCRBY (complexity O(1)) to increment the number of samples for the target bucket. This gives us some sort of histogram in Redis for that event, and it can be incremented by one or many clients because commands are isolated and atomic. On the other side (supervisor, admin dashboard, …), to get the N-th percentile for that hash, we need to get all the fields, sort them (hash fields are not sorted by default) and find the first field where the cumulative number of samples (starting from the lowest bucket) reaches the target count (total samples * N/100).
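
To make this concrete, here is a minimal C# sketch assuming the StackExchange.Redis client (the approach does not depend on a particular binding); key and method names are illustrative only:

using System;
using System.Linq;
using StackExchange.Redis;

public static class RedisPercentiles
{
    // Emitter side: O(1) increment of the bucket matching the observed value.
    public static void RecordSample(IDatabase db, string key, long bucketUpperBound)
    {
        db.HashIncrement(key, bucketUpperBound);
    }

    // Reader side: fetch all buckets, sort them by upper bound and walk the cumulative count.
    public static long GetPercentile(IDatabase db, string key, double percentile)
    {
        var buckets = db.HashGetAll(key)
                        .Select(e => new { UpperBound = (long)e.Name, Count = (long)e.Value })
                        .OrderBy(b => b.UpperBound)
                        .ToArray();

        long total = buckets.Sum(b => b.Count);
        long target = (long)Math.Ceiling(total * percentile / 100.0);

        long cumulative = 0;
        foreach (var bucket in buckets)
        {
            cumulative += bucket.Count;
            if (cumulative >= target) return bucket.UpperBound;
        }
        return buckets.Length > 0 ? buckets[buckets.Length - 1].UpperBound : 0;
    }
}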

One particular thing to notice is the bucket size. There is no rule here: the scale could be constant, linear, exponential, … The choice of the bucket size depends mainly on two factors: the desired accuracy (a smaller size means better accuracy) and the domain (the delta between min and max). For example, a constant bucket size of 50 ([0-50], [50-100], [100-150], …) is not accurate for small values (e.g. 130 => [100-150]) but much more accurate for higher values (e.g. 1234 => [1200-1250]). Over time, I have generally used some kind of varying bucket size inspired by HDR Histogram.

Bucket Size / Round Factor | Step | Lower Value | Upper Value
1 | 0 | 0 | 128
2 | 1 | 128 | 384
4 | 2 | 384 | 896
8 | 3 | 896 | 1920
16 | 4 | 1920 | 3968
32 | 5 | 3968 | 8064
64 | 6 | 8064 | 16256
128 | 7 | 16256 | 32640
256 | 8 | 32640 | 65408

Number of redis hash fields

For example, a sample value of 2040 falls into the interval [2032-2048]. This is just one example of a non-linear scale that provides good accuracy while keeping an acceptable number of hash fields.
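
Here is a rough sketch of the rounding logic behind this table (one possible implementation, not the only one):

public static class BucketRounding
{
    // Illustrative scheme matching the table above: each step doubles the bucket size
    // and contains 128 buckets, so step s covers [128 * (2^s - 1), 128 * (2^(s+1) - 1)).
    public static long GetBucketUpperBound(long value)
    {
        int step = 0;
        long bucketSize = 1;
        long lower = 0;
        long upper = 128;

        while (value >= upper)
        {
            step++;
            bucketSize <<= 1;
            lower = upper;
            upper = 128 * ((1L << (step + 1)) - 1);
        }

        // Round up to the next bucket boundary inside the current step.
        long offset = value - lower;
        long rounded = ((offset + bucketSize - 1) / bucketSize) * bucketSize;
        if (rounded == 0) rounded = bucketSize;
        return lower + rounded; // e.g. 2040 -> 2048
    }
}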

The number of hash fields is important because it directly impacts memory and storage (if persistence is configured). Hashes have a hidden memory optimization using ziplist encoding when the number of fields is below a configurable threshold (512 by default). So we can't have too many fields/buckets if persistence is required. In practice, depending on the number of samples and the distribution of values, some buckets are often missing. This is not a problem and it gives us more freedom to select the bucket size.

Querying Redis

During my work on this topic, my initial intention was to write a Lua script to compute percentiles inside Redis: we wouldn't need to fetch all the fields and values on the processing side, and percentile computation is just a matter of sorting and counting hash fields. After several tests, I can easily say that it is a very bad idea! The best version of my script is quite efficient, I suppose, but still takes several milliseconds in the worst cases (e.g. thousands of buckets). You should know that only one script can be executed at a time on Redis, and I prefer to do more IO than to block my Redis instance. By the way, HGETALL is O(N) in the number of fields, which stays small here, so it remains quite CPU-efficient.

Precision on client side

This implementation has a nice advantage: the precision is configured on the client/emitter side, because the algorithm that computes the bucket lives there. You could change the algorithm at runtime or even have multiple implementations applied to different components. On the server side it doesn't matter, because you just have to sort and count. This is pretty convenient when you have multiple kinds of apps and components, each having its own domain (nanoseconds, milliseconds, …).

Smart Key Pattern

So far I've considered only one hash, but this is not enough if we want to create beautiful graphs (99th percentile for the last 7 days, per hour). A common way, in NoSQL, to solve this problem is to use the smart key pattern. A key can be much more than a simple constant string value. If we know how we will query the data model later (and we do in this case), let's add dynamic parts to the key. We're working with time series, so a rounded timestamp is a natural choice. All the keys will follow the pattern 'mykey:hour:roundedts'.

We can't aggregate percentiles, and there is no "magical algorithm" to compute percentiles from percentiles. If we want to investigate response times over a few minutes, we need an aggregation for every minute. The best way to do this is to increment several smart keys at the same time (minute, 5 minutes, hour, day, …) on the client side. That's not a real problem on the client side if your binding implements pipelining.
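
For example, with StackExchange.Redis (assumed here), a batch sends all the increments in a single round trip; the key format is just an illustration:

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class SmartKeyWriter
{
    // Increment the same bucket under several time-rounded keys, pipelined together.
    public static Task RecordAsync(IDatabase db, string baseKey, long bucketUpperBound, DateTime utcNow)
    {
        var batch = db.CreateBatch();
        var tasks = new[]
        {
            batch.HashIncrementAsync($"{baseKey}:minute:{utcNow:yyyyMMddHHmm}", bucketUpperBound),
            batch.HashIncrementAsync($"{baseKey}:hour:{utcNow:yyyyMMddHH}", bucketUpperBound),
            batch.HashIncrementAsync($"{baseKey}:day:{utcNow:yyyyMMdd}", bucketUpperBound)
        };
        batch.Execute(); // commands are sent together in one round trip
        return Task.WhenAll(tasks);
    }
}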

Limit the impact on client side

We've seen that buckets can be incremented quite easily on the client side, i.e. without spending too many CPU cycles. However, it is not always a good thing. Depending on your application, a better approach could be to implement some kind of batch processing. If you receive 100 requests during the same second, there is a high probability that you can group your timings (10 requests < 10 ms, 30 requests between 10 and 20 ms, 15 between 20 and 30 ms, …). This will significantly reduce the number of commands executed.
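
A sketch of what such client-side grouping could look like (hypothetical class, same assumed Redis client as above); a timer or background loop would call Flush every second or so:

using System.Collections.Concurrent;
using StackExchange.Redis;

public class BucketAggregator
{
    private readonly ConcurrentDictionary<long, long> _pending = new ConcurrentDictionary<long, long>();

    // Called for every sample: only touches local memory.
    public void Record(long bucketUpperBound)
    {
        _pending.AddOrUpdate(bucketUpperBound, 1, (_, count) => count + 1);
    }

    // Called periodically: one HINCRBY per non-empty bucket, not per sample.
    public void Flush(IDatabase db, string key)
    {
        foreach (var bucket in _pending.Keys)
        {
            long count;
            if (_pending.TryRemove(bucket, out count) && count > 0)
            {
                db.HashIncrement(key, bucket, count);
            }
        }
    }
}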

To illustrate the concept, I've created a small gist using node.js. This example is not prod-ready and far from my current implementation (C#), but it should help you understand the principles of this approach.

A few lines of code later, and with the help of your favorite chart library, you can visualize this kind of graph. Please take a moment to compare the average and the percentiles: percentiles are definitely much better!

 



Being valuable, Being a swiss knife

A Swiss Army knife is a very popular pocket knife, used by several armies (but not only), and generally has a very sharp blade as well as various tools, such as screwdrivers, a can opener, and many others. These attachments are stowed inside the handle of the knife through a pivot point mechanism. When I was a child, I owned one and was fascinated by its utility. What a great design! To be clear, my wish, as a software engineer, is to be a swiss knife, and I am very proud to be considered as such by my co-workers and my managers. I will try to explain in this post why it's important to me and why every developer should take this into consideration.

The need for a swiss knife

A swiss knife is a highly valuable tool that everyone wants to have in their pocket, because it helps you solve any problem. And yes, this is the sad tragedy of the software industry: problems and puzzles are everywhere. Why do I have an exception here? How will you implement this business feature? What is this crappy code? What will be the architecture of our future web site? Why did our application fail miserably yesterday? Is this new hype tech/language/framework/tool/… interesting for us? Of course, we can't predict our future problems, but solving puzzles is the essence of software engineering. At the end of the month, we're not paid just for coding but for solving problems. The most important thing is this: each time we fix something, we bring value to someone or something: our customers, our company, our co-workers or even ourselves. Bringing value is what makes us valuable, one of the most important forms of recognition in our job. Even better is to avoid puzzles altogether, but that's another story.

To be a swiss knife you need … to be focused on value

Even nowadays, I can see some developers waiting for tasks, committing quick and dirty solutions, focused only on their own scope and not aligned with the company's objectives… of course this isn't how we should be. Everyone in the company should be focused on the final product, on the customer, on business value, and should help customer support & product owners. Read more about devops culture. Many people will say "Of course, I am", but don't be too naive: being focused only on value means that you will sometimes work on boring tasks, old-fashioned techs, deprecated libraries, crappy code, etc. This is the tradeoff.

To be a swiss knife you need … to think as a software engineer

An engineer is a professional practitioner of engineering, concerned with applying scientific knowledge, mathematics, and ingenuity to develop solutions for technical, societal and commercial problems.

We've already seen several articles that illustrate this concept, but please stop focusing on your favorite stack. NOW! After a decade, I've seen so many things… For example, asp.net WebForms was very cool 10 years ago (and yes, it really was, compared to the other techs available at the time), but at some point I switched to something else. Because I had learned strong foundations, I was able to reuse my skills quite easily in another tech afterwards. Don't learn to code. Learn to think! The Silver Bullet Syndrome is a good illustration of this problem. It is the belief that "the next big change in tools, resources or procedures will miraculously or magically solve all of an organization's problems". Stop chasing the chimera, it won't really fix anything. If it helps you fix your current problems, you will quickly see new ones. The Answer to the Ultimate Question of Life, The Universe, and Everything is not "42", but "it depends on the context". As with design patterns, there is absolutely no magic solution for everything; you just need to find a good one, by applying YOUR vision in YOUR context. Like a real craftsman, you need tools in your daily job, but unlike a real craftsman, our tools are constantly evolving. Do you know another industry in the world with so many flavors & communities, which allows you to work with tools released less than two weeks ago? Neither do I. You have to learn to think, and to learn practices rather than technologies; you should not be narrow-minded but open to experience and feedback.

To be a swiss knife you … don’t need to be a technical expert

There are several technical experts and evangelists in this world (and we need them), but we're not all destined for that future. Besides, it's so boring to always do the same thing and to always chase the same daemons. We have the illusion that TOP programmers are the most valuable resource for a company, but I don't think so. You no longer need to be a brilliant programmer to bring success to your company or to achieve success. You just have to solve problems, and the good news is that it's fairly useless to spend hours in front of our screens reinventing the wheel in each project. On one side, you have a supercomputer in your pocket, another supercomputer on your desk, and dozens of supercomputers in the cloud. On the other side, thousands of open source frameworks and libraries can do 90% of the work for you: GitHub, Wikipedia, Stack Overflow, and of course the very wide range of articles, tutorials, feedback and posts available on the Internet. The hard part of building software is specification, design, modelling… not how you will write code. Architecture, monitoring & diagnostics, tests, ownership, continuous delivery, maintainability, performance & scalability… so many things that should be part of your software.

To be a swiss knife you should … practice

The positive side effect of considering myself a swiss knife is learning new stuff and running experiments. We know that it's vital for us to learn, because our world is moving so fast. When I started to work, it was very different, and I know it will be different again in 10 years. Compared to a traditional developer attached to his "comfort zone", I have many more occasions to try & test new things in real-world situations. In the end, if it doesn't work properly, or if it takes too much time, it doesn't matter because I am my own product owner. All these tests give me more arguments, more experience, more skills, to become a better problem solver, aka a better swiss knife. It's a way to be pro-active: I have a wide range of things in my pocket to help me in every situation. To conclude, I prefer to know thousands of topics imperfectly rather than mastering only one. I know that I have enough skills and motivation to adapt myself to any problem. A new stack to learn? OK, give me a few days, I'll do it. The number of skills you have doesn't really matter; what is important is to be able to adapt to each situation, like a soldier with a swiss knife. I know that if I have to work on an unknown topic, I will spend the hours needed to learn it, in order to be efficient and to make the right decisions when needed.

How to avoid 26 API requests on your page?

The problem

Creating applications that rely on web APIs seems to be quite popular these days. There is already an impressive collection of ready-to-use public APIs (check them out at http://www.programmableweb.com) that we can consume to create mashups or add features to our web sites.

In the meantime, it has never been so easy to create your own REST-like API with node.js, asp.net web api, ruby or whatever tech you want. It's also very common to create your own private/restricted API for your SPA, cross-platform mobile apps or your own IoT device. The naïve approach when building a web API is to add one API method for every feature; at the end, we get a well-architected and brilliant web API following the Separation of Concerns principle: one method for each feature. Put it all together in your client and… it's a drama in terms of web performance for the end user, with hundreds of requests per second on the staging environment. Look at your page: there are 26 API calls on your home page!

I talk here about web apps, but it's pretty much the same for native mobile applications. RTT and latency matter much more than bandwidth. It's impossible to create a responsive and efficient application with a chatty web API.

The proper approach

At the beginning of December 2014, I attended the third edition of APIdays in Paris. There was an interesting session, among others, on Scenario Driven Design by @ijansch.

The fundamental concept in any RESTful API is the resource. It's an abstract concept and it is quite different from a data resource. A resource should not be a raw data model (the result of an SQL query exposed to the Web) but should be defined with client usage in mind. "REST is not an excuse to expose your raw data model." With this kind of approach, you will create dumb clients and smart APIs with thick business logic.

A common myth is "OK, but we don't know how our API is consumed, that's why we expose raw data". Most of the time, that's false. Let's take the example of Twitter timelines. They are lists of tweets or messages displayed in the order in which they were sent, with the most recent on top. This is a very common feature and you can see timelines in every Twitter client. Twitter exposes a timeline API and API clients just have to call it to get timelines. In particular, clients don't have to compute timelines by themselves by requesting the Twitter API XX times for friends, tweets of friends, etc.

I think this is an important idea to keep in mind when designing our APIs. Generally, we don't need to be so RESTful (what about HATEOAS?). Think more about API usability and scenarios than about RESTfulness.

The slides of this session are available here.

Another not-so-new approach: Batch requests

Reducing the number of requests from a client is a common and well-known Web Performance Optimization technique. Instead of several small images, it's better to use sprites. Instead of many JS library files, it's better to combine them. Instead of several API calls, we can use batch requests.

Batch requests are not REST-compliant, but we already know that we should sometimes break the rules to get better performance and better scalability.

"If you find yourself in need of a batch operation, then most likely you just haven't defined enough resources." (Roy T. Fielding, father of REST)

What is a batch request ?

A batch request packs several different API requests into a single POST request. HTTP provides a special content type for this kind of scenario: multipart. On the server side, requests are unpacked and dispatched to the appropriate API methods. All responses are packed together and sent back to the client as a single HTTP response.

Here is an example of a batch request:

Request

POST http://localhost:9000/api/batch HTTP/1.1
Content-Type: multipart/mixed; boundary="1418988512147"
Content-Length: 361

--1418988512147
Content-Type: application/http; msgtype=request

GET /get1 HTTP/1.1
Host: localhost:9000


--1418988512147
Content-Type: application/http; msgtype=request

GET /get2 HTTP/1.1
Host: localhost:9000


--1418988512147
Content-Type: application/http; msgtype=request

GET /get3 HTTP/1.1
Host: localhost:9000


--1418988512147--

Response

HTTP/1.1 200 OK
Content-Length: 561
Content-Type: multipart/mixed; boundary="91b1788f-6aec-44a9-a04f-84a687b9d180"
Server: Microsoft-HTTPAPI/2.0
Date: Fri, 19 Dec 2014 11:28:35 GMT

--91b1788f-6aec-44a9-a04f-84a687b9d180
Content-Type: application/http; msgtype=response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

"I am Get1 !"
--91b1788f-6aec-44a9-a04f-84a687b9d180
Content-Type: application/http; msgtype=response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

"I am Get2 !"
--91b1788f-6aec-44a9-a04f-84a687b9d180
Content-Type: application/http; msgtype=response

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

"I am Get3 !"
--91b1788f-6aec-44a9-a04f-84a687b9d180--

Batch requests are already supported by many web frameworks and allowed by many API providers: asp.net web api, a node.js module, Google Cloud Platform, Facebook, Stack Overflow, Twitter…

Batch support in asp.net web api

To support batch requests in your asp.net web api, you just have to add a new custom route:

config.Routes.MapHttpBatchRoute(
    routeName: "batch",
    routeTemplate: "api/batch",
    batchHandler: new DefaultHttpBatchHandler(GlobalConfiguration.DefaultServer)
);

Tip: DefaultHttpBatchHandler doesn't provide a way to limit the number of requests in a batch. To avoid performance issues, you may want to cap it at 100/1000/… requests per batch. You have to create your own implementation by inheriting from DefaultHttpBatchHandler.
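
Here is a rough sketch of such a handler (assuming the ParseBatchRequestsAsync override point of DefaultHttpBatchHandler; adapt the error handling to your own conventions):

using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Batch;

public class LimitedBatchHandler : DefaultHttpBatchHandler
{
    private readonly int _maxRequests;

    public LimitedBatchHandler(HttpServer httpServer, int maxRequests) : base(httpServer)
    {
        _maxRequests = maxRequests;
    }

    public override async Task<IList<HttpRequestMessage>> ParseBatchRequestsAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var requests = await base.ParseBatchRequestsAsync(request, cancellationToken);
        if (requests.Count > _maxRequests)
        {
            // Reject the whole batch instead of processing an unbounded number of sub-requests.
            throw new HttpResponseException(
                request.CreateErrorResponse(HttpStatusCode.BadRequest, "Too many requests in this batch."));
        }
        return requests;
    }
}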

This new endpoint will allow clients to send batch requests, and you have nothing else to do on the server side. On the client side, to send batch requests, you can use jquery.batch, batchjs, the angular-http-batcher module, …

I will not explain all the details here, but there is an interesting feature provided by DefaultHttpBatchHandler: the ExecutionOrder property allows you to choose between sequential and non-sequential processing. Thanks to the TAP programming model, it's possible to execute API requests in parallel (for truly async API methods).
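
For example, enabling parallel execution is just a matter of setting this property when registering the route:

config.Routes.MapHttpBatchRoute(
    routeName: "batch",
    routeTemplate: "api/batch",
    batchHandler: new DefaultHttpBatchHandler(GlobalConfiguration.DefaultServer)
    {
        // Sub-requests are no longer executed one after the other.
        ExecutionOrder = BatchExecutionOrder.NonSequential
    }
);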

Here is the result of async/sync batch requests for a pack of three web methods, each taking one second to be processed.


Finally, batch requests are not a must-have feature, but they are certainly something to keep in mind; they can help a lot in some situations. A very simple demo application is available here. Run this console app or try to browse localhost:9000/index.html. From my point of view, here are some pros and cons of this approach.

Pros | Cons
Better client performance (fewer calls) | May increase the complexity of client code
Really easy to implement on the server side | Hides real client scenarios; not REST compliant
Parallel request processing on the server side | Batch size should be limited at the server level for public APIs
Allows GET, POST, PUT, DELETE, … | Browser cache may not work properly

Towards a better local caching strategy

Why Caching?

A while ago, I explained the benefits of caching to some of my co-workers. I'm always surprised to see how misunderstood this technique is by some developers.

In computing, a cache is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere.

The thing is that caching is already present everywhere: CPU, disk, network, web, DNS… It's one of the oldest programming techniques, available in any programming language and framework. You may think that it was only mandatory with 8 KB of RAM two decades ago, but don't be too naive: it's still a pertinent approach in our always-connected world: more data, more users, more clients, real time…

In this article, I will focus only on application caching through System.Runtime.Caching. Nothing really new here, but I just want to review 3 basic caching strategies that you can see in popular OSS projects; it's important to have solid foundations. Even if the language is C#, many concepts listed here are also valid in other languages.

Local Caching Strategies

By local/in-memory cache, I mean that data is held locally on the computer running an instance of the application. System.Web.Caching.Cache, System.Runtime.Caching.MemoryCache and EntLib CacheManager are well-known local caches.

There is no magic with caching, and there is a hidden trade-off: caching means working with stale data. Should I increase the cache duration? Should I keep a short TTL value? It's never easy to answer these questions, because it simply depends on your context: topology of data, number of clients, user load, database activity…

When implementing a local caching strategy, there is an important list of questions to ask yourself :

  • How long will the item be cached?
  • Is data coherence important?
  • How long does it take to reload the data item?
  • Does the number of queries executed on the data source matter?
  • Does the caching strategy impact the end user?
  • What is the topology of the data: reference data, activity data, session data, …?

The very basic interface we will implement in the 3 following examples contains a single method.

public interface ICacheStrategy
{
    /// <summary>
    /// Get an item from the cache (if cached), otherwise reload it from the data source and add it to the cache.
    /// </summary>
    /// <typeparam name="T">Type of cache item</typeparam>
    /// <param name="key">cache key</param>
    /// <param name="fetchItemFunc">Func<typeparamref name="T"/> used to reload the data from the data source (if missing from cache)</param>
    /// <param name="durationInSec">TTL value for the cache item</param>
    /// <param name="tokens">list of strings used to generate the final cache key</param>
    /// <returns></returns>
    T Get<T>(string key, Func<T> fetchItemFunc, int durationInSec, params string[] tokens);
}

Basic Strategy

The full implementation is available here.

        public T Get<T>(string key, Func<T> fetchItemFunc, int durationInSec, params string[] tokens)
        {
            var cacheKey = this.CreateKey(key, tokens);
            var item = this.Cache.Get<T>(cacheKey);
            if (this.IsDefault(item))
            {
                item = fetchItemFunc();
                this.Cache.Set(cacheKey, item, durationInSec, false);
            }
            return item;
        }

This is similar to read-through caching: the caller always gets an item back, coming either from the cache itself or from the data source. When a cache client asks for an entry and that item is not already in the cache, the strategy automatically fetches it from the underlying data source, places it in the cache for future use, and finally returns the loaded item to the caller.
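
Caller-side usage looks like this (the concrete strategy class name is made up; the delegate stands in for a real, slow data-source call):

ICacheStrategy cache = new BasicCacheStrategy();

// Returns the cached value, or runs the delegate and caches its result for 60 seconds.
string motd = cache.Get(
    "message-of-the-day",
    () => DateTime.UtcNow.ToString("R"),
    60);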

Double-checked locking

The full implementation is available here.

        public T Get<T>(string key, Func<T> fetchItemFunc, int durationInSec, params string[] tokens)
        {
            string cacheKey = this.CreateKey(key, tokens);
            var item = this.Cache.Get<T>(cacheKey);

            if (this.IsDefault(item))
            {
                object loadLock = this.GetLockObject(cacheKey, SyncLockDuration);
                lock (loadLock)
                {
                    item = this.Cache.Get<T>(cacheKey);
                    if (this.IsDefault(item))
                    {
                        item = fetchItemFunc();
                        this.Cache.Set(cacheKey, item, durationInSec);
                    }
                }
            }

            return item;
        }

This version introduces a locking system. A global synchronization mechanism (a single lock object shared by every cache item) would not be efficient here, which is why there is a dedicated synchronization object per cache item (derived from the cache key). The double-checked locking is also really important here to avoid useless/duplicated requests on the data source.
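
One possible implementation of GetLockObject (the real one is in the linked code, and it also expires lock objects after a while) is a lazily created lock per cache key:

private readonly ConcurrentDictionary<string, object> _locks =
    new ConcurrentDictionary<string, object>();

private object GetLockObject(string cacheKey, int syncLockDurationInSec)
{
    // Simplified sketch: the TTL parameter is ignored here, so the dictionary of
    // lock objects grows with the number of distinct keys.
    return _locks.GetOrAdd(cacheKey, _ => new object());
}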

Refresh ahead strategy

The full implementation is available here.

        public T Get<T>(string key, Func<T> fetchItemFunc, int durationInSec, params string[] tokens)
        {
            // code omitted for clarity

            // not stale or don't use refresh ahead, nothing else to do => back to double lock strategy
            if (!item.IsStale || staleRatio == 0) return item.DataItem;
            // Oh no, we're stale - kick off a background refresh

            var refreshLockSuccess = false;
            var refreshKey = GetRefreshKey(cachekey);

            // code omitted for clarity

            if (refreshLockSuccess)
            {
                var task = new Task(() =>
                {
                    lock (loadLock)
                    {
                        // reload logic
                    }
                });
                task.ContinueWith(t =>
                {
                    if (t.IsFaulted) Trace.WriteLine(t.Exception);
                });
                task.Start();
            }
            return item.DataItem;
        }

In this implementation it's possible to configure a stale ratio, enabling an automatic and asynchronous refresh of any recently accessed cache entry before its expiration. The application/end user will not feel the impact of a read against a potentially slow cache store when the entry is reloaded due to expiration. If the object is not in the cache, or if it is accessed after its expiration time, the behaviour is the same as the double-checked locking strategy.

Refresh-ahead is especially useful if objects are being accessed by a large number of users. Values remain fresh in the cache and the latency that could result from excessive reloads from the cache store is avoided.
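
As an illustration, here is how a stale ratio could flag an entry before it expires (this mirrors the idea, not the exact code of the linked implementation):

using System;

public class CacheEntry<T>
{
    public T DataItem { get; set; }
    public DateTime InsertedAtUtc { get; set; }
    public int DurationInSec { get; set; }

    // With a stale ratio of 0.75 and a TTL of 60 s, the entry is considered stale
    // after 45 s, which is when a background refresh can be kicked off.
    public bool IsStale(double staleRatio)
    {
        var age = DateTime.UtcNow - InsertedAtUtc;
        return age.TotalSeconds >= DurationInSec * staleRatio;
    }
}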

Experimental Results

To see the impact of each strategy, I've committed code on GitHub. This program simulates fake workers that get the same item from the cache for 60 seconds, for each strategy. A fake reload method taking one second is used to simulate access to the data source. All cache hits, cache misses and reloads are recorded. This may not be the best program in the world, but it's enough to illustrate this article.

Strategy | Number of Gets / Reloads | Avg. Get Time (ms) | Gets < 100 ms
Basic | 3236 / 181 | 59.17 | 94.15 %
Double-checked locking | 3501 / 11 | 51.22 | 94.10 %
Refresh ahead | 3890 / 11 | 0.20 | 100 %

Between the basic and the double-checked locking strategy, there is an important improvement in the number of reloads from the data source, with nearly the same average response time. Between the double-checked locking and the refresh-ahead strategy, the number of reloads is exactly the same but the response time is greatly improved. It's very easy to use a cache, but be sure to use the pattern that fits your use cases.

Bonus : Local Cache invalidation

One year ago, I posted an article on the Betclic Tech Blog about implementing local memory cache invalidation with the Redis Pub/Sub feature. The idea is fairly simple: catch invalidation messages at the application level to remove one or more items from a local cache. The current version is more mature and a NuGet package is available here. This can be used in several ways:

  • Invalidate items, i.e. remove them from the cache
  • Mark items as stale, to combine with background reloading triggered by an external event

To conclude

We covered here only some variations of the cache-aside pattern. I hope that you're now more aware of the possibilities and of the troubles you may have with a local cache. It's very easy and efficient to use a local cache, so be sure to use the pattern that fits your use cases.

By the way, in an era of cloud applications, fast networks and low latencies, a local cache is still very important. Nothing, absolutely nothing, is faster than accessing local memory. One natural evolution often cited for a local cache is a distributed cache. As we've seen here, it is not always the only solution, but that is another story.

The full source code is available on Github.

The importance of useless Micro-optimization

Micro-optimization is “the process of meticulous tuning of small sections of code in order to address a perceived deficiency in some aspect of its operation (excessive memory usage, poor performance, etc)”

We've all seen this kind of debate: is X faster than Y? Should I replace A with B in my code? It's not specific to any language, and it's a real question for all developers. Programmers are generally advised to avoid micro-optimization unless they have a solid justification. Yes, it depends.

Most of the time, micro-optimizations are confirmed by benchmarks: a very specific code section is executed thousands or millions of times to illustrate the problem and to confirm the initial hypothesis: A is X times slower than B. Of course, in real-world applications, we rarely call one piece of code so many times, so this stuff may seem irrelevant. But the trouble is that those extra N milliseconds are CPU time on the server. A web server could be idle waiting for requests or processing other requests, but instead it is busy executing the same inefficient methods over and over. Those N ms can even become N seconds during peak load.

Improving the performance of an application is an endless road. It's a very time-consuming activity, and don't forget that it's not the core of your business: your boss wants you to ship new features, not pointless optimizations. It's very common to spend several hours (days?) reaching your performance goals.

By performance I mean anything that limits the scalability of your application: it could be CPU, network IO, memory… You may know that all applications (systems) have throughput limits. Your job, as a chief performance officer, is to keep fast response times in all circumstances (unexpected system difficulties, light load/heavy load) and to stay far from these limits.

The performance of a code section is a mix of frequency and efficiency. I don't care about writing inefficient code if it's rarely used, but I'm really concerned about the most frequently called operations. Have you ever counted how many times ToString() is called in your .net project? We can't blame someone else for having written un-optimized code that we want to use in our different context. For example, ToString() has maybe not been designed to be used as a key for dictionaries.

In 10 years of professional programming, I've seen serious performance problems. "I don't understand why it takes 5 seconds to generate my page each evening. Everything is cached and I've run many benchmarks: all my methods take less than 5 ms." Yes, but these methods are called 100 times while generating your page. Simply speaking, 5 ms is too slow for your context!

Even with a clean architecture, it's sometimes difficult to determine the critical rendering path. That's why I'm a big fan of profiling tools like the Visual Studio Profiler: they give you a complete overview without changing your application. Data collected during a load test session can be precious.

I often see micro-optimization as a challenge: let me try to significantly improve the performance, and scalability, of your application by changing as few lines as possible. If I can't do this, either I'm not good enough or there are no interesting optimizations to make.

Examples in .net

A few months ago, I explained on the Betclic Tech Blog a bad experience we had with StructureMap and Web API. To summarize, we significantly reduced the CPU usage of a web application just by splitting IoC bindings into multiple containers.

More recently, I was asked to do a performance review of a WCF service consuming too much CPU & memory. Without any doubt, thanks to my favorite profiler, one method was causing several alerts & warnings.


This method is in a base class. Because the project follows the Onion architecture, dependency injection is heavily used here to manage dependencies (at the core, service & repository layers). As a consequence of being in a purely stateless world, this method can be invoked several times during the processing of each request. The average call rate is approx. 150/sec during peak load, so this is a good candidate for micro-optimization.

How can we improve this method? It doesn't look so bad at first sight…

A first idea could be to optimize the string operations. We all know the obvious beginner mistakes of string concatenation. Memory allocations are fast on modern PCs, but they're far from free. String.Join()/StringBuilder seems better in this case.
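
For instance, with made-up variable names (the real method is not shown here), building the composite value could become:

// One allocation for the result instead of one per '+' concatenation.
var key = string.Join("-", legislation, brand, platform);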

A second, better idea is to remove Enum.ToString(). Legislation/Brand/Platform are enum types, and Enum.ToString() is called implicitly in this code section (during concatenation). An interesting article on CodeProject explains the troubles with Enum.ToString().

A few minutes later, I finally produced this extension method (Gist)


public static class EnumExtensions
{
    public static class EnumCache<TEnum>
    {
        public static Dictionary<TEnum, string> Values = new Dictionary<TEnum, string>();

        static EnumCache()
        {
            var t = typeof(TEnum);
            var values = (TEnum[])Enum.GetValues(t);
            var names = Enum.GetNames(t);
            for (var i = 0; i < values.Length; i++)
            {
                Values.Add(values[i], names[i]);
            }
        }

        public static string Get(TEnum enm)
        {
            return Values[enm];
        }
    }

    public static string AsString<TEnum>(this TEnum enm) where TEnum : struct, IConvertible
    {
        return EnumCache<TEnum>.Values[enm];
    }
}
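
Usage is straightforward (Platform is just a made-up enum for the example):

public enum Platform { Web, Mobile, Retail }

public static class AsStringDemo
{
    public static void Main()
    {
        // Resolved from the pre-built dictionary; Enum.ToString() is never called.
        Console.WriteLine(Platform.Web.AsString());    // prints "Web"
        Console.WriteLine(Platform.Mobile.AsString()); // prints "Mobile"
    }
}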

Let's benchmark it over 50,000 iterations (check the gist for the full source code).

[Figures: Visual Studio profiler results and console output for the enum benchmark]

Faster and less allocated memory: not so bad. (Note: it's even a little better than the original CodeProject article, but with fewer possibilities.) For sure, a .net guru may find a more efficient way, but it's enough for me: I reached my performance goal. We now have an optimized, ready-to-use extension method in less than 30 lines of code, and it's 40 times faster. Very well done!

But we're ALL WRONG!

Optimizing code means you have to think differently. Simply create a property (possibly lazy), computed just once per request and stored somewhere in the request scope. That's it, as sketched below.
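
A minimal sketch of that idea, assuming the Legislation/Brand/Platform enums mentioned above and the AsString() helper (names are hypothetical; in a web app this object would typically be scoped to the request by the IoC container):

using System;

public class MessageContext
{
    private readonly Lazy<string> _header;

    public MessageContext(Legislation legislation, Brand brand, Platform platform)
    {
        // Built at most once per request, no matter how many messages are created.
        _header = new Lazy<string>(() =>
            string.Join("-", legislation.AsString(), brand.AsString(), platform.AsString()));
    }

    public string Header
    {
        get { return _header.Value; }
    }
}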

In every project, there are code conventions: this class to manage logging, this one to format dates, this one to translate strings… We should be very careful with these kinds of helpers, extension methods and utilities; we all have such classes in our projects to factor out purely technical stuff. Unsupervised usage may lead to serious performance issues. Never do anything more than once.

Is this micro-optimization a complete waste of time? Not at all. We've done a lot of interesting things: explored .net internals, run profiling sessions & benchmarks. The final solution is still valid, just not so important in our context. But it could be useful in the future…

Front-end development with Visual Studio

Update (28/01/2015): My team has just released a new nuget package that combines the latest versions of node, nogit and npm. Don't hesitate to try it.

The emergence of single page applications introduces a new need for web developers: a front-end build process. JavaScript MV* frameworks now allow web developers to build complex and sophisticated applications with many files (js, css, sass/less, html…). We're very far from those 3 lines of JavaScript that put "a kind of magic" on your web site.

In traditional back-end development such as asp.net MVC, a compiler transforms source code written in a human-readable programming language (C#/VB for asp.net) into another computer language (MSIL for .net). The produced files are often called binaries or executables. Compilation may include several operations: code analysis, preprocessing, parsing, language translation, code generation, code optimization… What about JavaScript?

JavaScript was traditionally implemented as an interpreted language, which can be executed directly inside a browser or a developer console. Most of the examples and sample applications we find on the web have a very basic structure and file organization. Some developers may think that it's the "natural way" to build & deploy web applications, but it's not.

Nowadays, we're building entire web applications with JavaScript, and working with that many files is quite different. A few frameworks like RequireJS help us build modular JavaScript applications thanks to Asynchronous Module Definitions. Again, this is not exactly what we need here, because it focuses only on scripts.

What do we need to have today to build a web site? Here are common tasks you may need:

  • Validate scripts with JSLint
  • Run tests (unit/integration/e2e) with code coverage
  • Run preprocessors for scripts (coffee, typescript) or styles (LESS, SASS)
  • Follow WPO recommendations (minify, combine, optimize images…)
  • Continuous testing and Continuous deployment
  • Manage front-end components
  • Run X, execute Y

So, what exactly is a front-end build process? An automated way to run one or more of these tasks in YOUR workflow to generate your production package.


Example front-end build process for an AngularJS application

Please note that asp.net Bundling & Minification is a perfect counter-example: it allows you to combine & minify styles and scripts (two important WPO recommendations) without having a build process. The current implementation is very clever, but its scope is limited to these features. By the way, I still use it in situations where I don't need full control.

I talk mainly about JavaScript here, but it's pretty much the same for any kind of front-end file. SASS/LESS/TypeScript processing is often well integrated into IDEs like Visual Studio, but that is just another way to avoid using a front-end build process.

About the process

Node.js is pretty cool, but it's important to understand that this runtime is not only "server-side JavaScript". In particular, it provides awesome tools and possibilities, limited only by your imagination. You can use it while developing your application without using the node.js runtime for hosting, or even without developing a web site at all. For example, I use redis-commander to manage my redis database. I personally consider that the web stack wars are over (PHP vs Java vs .net vs Ruby vs …) and I embrace one unique way to develop for the web. It doesn't really matter which language & framework you use: you develop FOR the web, a world of standards.

Experienced & skilled developers know how important automation is. Most VS extensions provide rich features and tools, like Web Essentials, but can rarely be used in an automated fashion. For sure, they can help a lot, but I don't want this for many of my projects: I want something easily configurable, automated and fast.

Bower to manage front-end components

The important thing to note here is that Bower is just a package manager, and nothing else. It doesn't offer the ability to concatenate or minify code, and it doesn't support a module system like AMD: its sole purpose is to manage packages for the web. Look at the search page; I'm sure you will recognize most of the available packages. Why Bower and not NuGet for my front-end dependencies?

NuGet packages are commonly used to manage references in a Visual Studio solution. A package generally contains one or more assemblies, but it can also contain any kind of file; for example, the jQuery NuGet package contains JavaScript files. I have great respect for this package manager, but I don't like to use it for file-based dependencies. Mainly because:

  • Packages are often duplicates of other package managers/official sources
  • Several packages contain too many files and you generally don't have full control
  • Many packages are not up to date, and most of the time they are maintained by external contributors
  • When they fail, PowerShell scripts may break your project file

But simply, this is not how the vast majority of web developers work. Bower is a very popular tool and extremely simple to use.

Gulp/Grunt for the build process

Grunt, the self-proclaimed "JavaScript Task Runner", is the most popular task runner in the Node.js world. Tasks are configured via a JavaScript configuration object (the gruntfile). Unless you want to write your own plugin, you mostly write no code logic. After 2 years, it now provides many plugins (tasks), approx. 3500, and the community is very active. For sure, you will find everything you need here.

Gulp, "the challenger", is very similar to Grunt (plugins, cross-platform). Gulp is a code-driven build tool, in contrast with Grunt's declarative approach to task definition, which makes your task definitions a bit easier to read. Gulp relies heavily on node.js concepts, and non-Noders will have a hard time dealing with streams, pipes, buffers, and asynchronous JavaScript in general (promises, callbacks, whatever).

Finally, Gulp.js or Grunt.js? It doesn't really matter which tool you use, as long as it allows you to easily compose your own workflows. Here is an interesting post about Gulp vs Grunt.

Microsoft also embraces these new tools with a few extensions (see below) and even a new build process template for TFS, released by MSOpenTech a few months ago.

Isn't it painful to code in JavaScript in Visual Studio? Not at all. Let's remember that JavaScript is now a first-class language in Visual Studio 2013: this old-fashioned language can be used to build Windows Store, Windows Phone, web apps, and multi-device hybrid apps. JavaScript IntelliSense is also pretty cool.

How do I work with that tool chain inside Visual Studio?

Developing with this rich ecosystem often means working with the command line. Here are a few tools, extensions and packages that may help you inside Visual Studio.

SideWaffle Template Pack

A complete pack of Snippets, Project and Item Templates for Visual Studio.


More infos on the official web site http://sidewaffle.com/

Grunt Launcher

Originally a plugin made to launch Grunt tasks from inside Visual Studio. It has now been extended with new functionality (Bower, Gulp).


Download this extension here (Repository)

Chutzpah

Chutzpah is an open source JavaScript test runner which enables you to run unit tests using QUnit, Jasmine, Mocha, CoffeeScript and TypeScript. Tests can be run directly from the command line and from inside Visual Studio (Test Explorer tab).

By using custom build assemblies, it's even possible to run all your JS tests during the build process (on premises or on Visual Studio Online). All the details are explained in this post on the ALM blog.

The project is hosted here on CodePlex.

Package Intellisense

Search for packages directly in package.json/bower.json files. Very useful if you don't like using the command line to manage your packages.


More info on the VS gallery

TRX – Task Runner Explorer

This extension lets you execute any Grunt/Gulp task or target inside Visual Studio by adding a new Task Runner Explorer window. This is an early preview of the Grunt/Gulp support coming in Visual Studio "14". It's definitely another good step in the right direction.

Scott Hanselman blogged about this extension a few days ago.

I really like the smooth integration into VS, especially Before/After Build. The main disadvantage, in this version, is that the extension requires the node.js runtime and global grunt/gulp packages. It won't work on a workstation (or a build agent) without installing these prerequisites. Just for information, it's not so strange to install node.js on a build agent: it's already done for VSO agents. http://listofsoftwareontfshostedbuildserver.azurewebsites.net/

To conclude

To illustrate all these concepts and tools, I created a repository on GitHub. Warning: this can be considered a dump of my thoughts. The sample is based on the popular todomvc repository. Mixing an Angular application and an asp.net web api in the same project may not be the best choice in terms of architecture, but it's just to show that it's possible to combine both in the same solution.

Points of interest

  • Front-end dependencies are managed via Bower (bower.json)
  • A portable nuget package (js, Npm, Bower, Grunt, …) created by whylee, so I can run gulp/grunt tasks before the build thanks to the custom wpp.targets
  • Works locally as well as on TFS and Visual Studio Online
  • Allows you to use the command line at the same time (>gulp tdd)

Does it fit your needs? No? That's not a problem; just adapt it to your context. That's the real advantage of these new tools: they are very close to developers, and building a project no longer depends on a complex build template or service.

Become a Responsible Programmer

Our job is so … special. Isn’t it?

Like me, you may have already been in this delicate situation where one of your friends asks you for help: "I have a problem with program XXX", "I can't order an item on this e-commerce web site", "my computer is slow"… Could you help me? A vast majority of people think that working in computers means that you know everything about computers. We, programmers, know that it's false, but it's just a consequence of how developers are perceived. The programmer's stereotype is a perfect example: a geek/nerd living alone in his world. Nobody can understand a programmer better than… another programmer.

Have you ever tried to discuss computer science with a non-programmer? Sometimes it's quite funny, but most of the time they will end up with a headache; you too. How can you explain Web 2.0, the N existing programming languages, Agile vs waterfall, scripting vs compiled languages, the MVC pattern…? It's even worse between developers. In the past, we've all seen endless fights & debates between antagonistic technology stacks.

After 10 years of programming, my vision has also changed a lot, and I'm sure it will change again in the coming years. Thanks to communities, we're now all connected. I have the feeling that programmers now share the same vision of their job. asp.net vNext and the .NET Foundation are great examples of Microsoft ("Linux is a cancer", 2001) embracing open source.

So, as a programmer, you have great responsibilities. You have to contribute to this common ideal: you're responsible for keeping our job a nice, and fun, place to work, for encouraging programming concepts especially in education where they are learned too late, for promoting team work and collaboration, and for being professional and respectful towards users.

There's a wide range of ideas I'd like to mention here, but I will list only those that come immediately to mind. So here are my 10 commandments of responsible programming:

  • Embrace & be active in communities
  • Accept changes and react positively to events
  • Banish license violations and code/content theft
  • Accept your mistakes, and take the opportunity to become better
  • Write efficient code and don't hesitate to prove it
  • Write maintainable code with a clean and strong architecture, following common quality rules and security guidelines
  • Don't deceive your users; user satisfaction is your primary objective
  • Learn continuously and get better every day
  • Share your knowledge and your vision with everyone, especially less experienced developers
  • Don't be a lone hero; embrace team work & collaboration

What we, programmers, have done in the last 10 years has changed the world forever. Let's continue to change the world and make it a better place to live. This is how I think about my job, and these are the values I try to share.