Tag Archives: redis

Percentiles aggregation with Redis

We all know that we should monitor our applications proactively, and response time is a natural metric when working on performance. Concerning timings, it's important to keep this golden rule in mind: "Average is bad!" Averages are used far too often in monitoring because they are easy and memory-efficient to compute (sum of values / number of samples). There are hundreds of posts and articles explaining why you should not rely on them (see Anscombe's quartet, for example). Percentiles are much better, but they are harder to compute, especially in a large-scale distributed system (big data, multiple sources, …).

Aggregating percentiles is a common problem and there are many ways to do it. I highly suggest reading this article from baselab: it is a must-read if you plan to work on this topic! One section of that article refers to "buckets", and this is exactly the approach we will use here: "Another approach is to divide the domain into multiple buckets and for each value/occurrence within the bucket boundaries increment this bucket's count by one"

[Figure: buckets]

Increments? Do you know an appropriate key-value store for this purpose? Yes, Redis. My implementation consists of storing all buckets in a Redis hash, each bucket being a hash field. The field name is the upper bound of the bucket and the value is the number of samples in this interval. We use HINCRBY (complexity O(1)) on the client/emitter side to increment the number of samples for the target bucket. This gives us some sort of histogram in Redis for that event. It can be incremented by one or many clients because commands are isolated and atomic. On the other side (supervisor, admin dashboard, …), in order to get the N-th percentile for that hash, we need to get all fields, sort them (hash fields are not sorted by default) and find the first field where the cumulative number of samples (starting from the first bucket) reaches the target count (total samples * N / 100).
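To make this concrete, here is a minimal sketch of the emitter side using StackExchange.Redis (the client used later in this post). The class name, the key name and the constant 50 ms bucket size are illustrative assumptions, not the actual implementation.

using System.Threading.Tasks;
using StackExchange.Redis;

class BucketEmitter
{
    private readonly IDatabase db;
    public BucketEmitter(IDatabase db) { this.db = db; }

    // Record one timing (in ms) into its bucket: HINCRBY <key> <upperBound> 1
    public Task RecordAsync(string key, long valueMs)
    {
        // constant bucket size of 50 ms, just for the example
        long upperBound = ((valueMs / 50) + 1) * 50;
        return db.HashIncrementAsync(key, upperBound, 1);
    }
}

For example, RecordAsync("timings:checkout", 132) increments the field 150, i.e. the [100-150) bucket.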

One particular thing to notice is the bucket size. There is no rule here: it could be a constant, linear, exponential, … scale. The choice of the bucket size depends mainly on two factors: the desired accuracy (a small size means better accuracy) and the domain (the delta between min and max). For example, a constant bucket size of 50 ([0-50], [50-100], [100-150], …) is not accurate for small values (e.g. 130 => [100-150]) but much more accurate for higher values (e.g. 1234 => [1200-1250]). Over time, I generally use some kind of varying bucket size inspired by HDR Histogram.

Bucket Size (Round Factor) | Step | Lower Value | Upper Value
1   | 0 | 0     | 128
2   | 1 | 128   | 384
4   | 2 | 384   | 896
8   | 3 | 896   | 1920
16  | 4 | 1920  | 3968
32  | 5 | 3968  | 8064
64  | 6 | 8064  | 16256
128 | 7 | 16256 | 32640
256 | 8 | 32640 | 65408

[Figure: number of Redis hash fields]

For example, a sample value of 2040 falls into the interval [2032-2048]. This is just an example of a non-linear scale that provides good accuracy while keeping an acceptable number of hash fields.
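A minimal sketch of how such an HDR-inspired upper bound can be computed (the step boundaries follow the table above; the function name is mine and buckets are treated as half-open intervals [lower, upper)):

static long BucketUpperBound(long value)
{
    // step 0: bucket size 1, range [0, 128); each step contains 128 buckets and doubles the size
    long bucketSize = 1;
    long lower = 0;
    long upper = 128;
    while (value >= upper)
    {
        bucketSize *= 2;                    // size doubles at each step
        lower = upper;                      // the new step starts where the previous one ended
        upper = lower + 128 * bucketSize;   // 128 buckets per step
    }
    // round up to the upper bound of the bucket inside the current step
    return lower + ((value - lower) / bucketSize + 1) * bucketSize;
}

With this scale, BucketUpperBound(2040) returns 2048 (the [2032-2048) interval from the example above), and BucketUpperBound(130) returns 132.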

The number of hash fields is important because it directly impacts memory and storage (if persistence is configured). Hashes have a hidden memory optimization using ziplist encoding when the number of fields is below a configurable threshold (512 by default). So, we can't have too many fields/buckets if we care about memory or persistence. In practice, depending on the number of samples and the distribution of values, some buckets are often missing. This is not a problem and it gives us more flexibility when selecting the bucket size.

Querying Redis

During my work on this topic, my initial intention was to create a Lua script to compute percentiles inside Redis: we wouldn't need to fetch all the fields and values on the processing side, and percentile computation is just a matter of sorting and counting hash fields. After several tests, I can easily say that it is a very bad idea! The best version of my script is quite efficient, I suppose, but it takes several milliseconds in the worst cases (e.g. thousands of buckets). You should know that only one script can be executed at a time on Redis, and I prefer to do more I/O than to block my Redis instance. By the way, HGETALL is O(N) where N is the number of fields, and since the number of buckets stays small it remains very cheap.
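On the reader side, the percentile computation could look like the following sketch (again with StackExchange.Redis; the class and method names are mine):

using System;
using System.Linq;
using StackExchange.Redis;

static class PercentileReader
{
    // Compute the N-th percentile from a bucket hash: HGETALL, sort fields, walk the cumulative count.
    public static long GetPercentile(IDatabase db, string key, double percentile)
    {
        var buckets = db.HashGetAll(key)
                        .Select(e => new { Upper = (long)e.Name, Count = (long)e.Value })
                        .OrderBy(b => b.Upper)
                        .ToArray();

        long total = buckets.Sum(b => b.Count);
        long target = (long)Math.Ceiling(total * percentile / 100.0);

        long cumulative = 0;
        foreach (var bucket in buckets)
        {
            cumulative += bucket.Count;
            if (cumulative >= target)
                return bucket.Upper;   // the percentile falls in this bucket
        }
        return buckets.Length > 0 ? buckets[buckets.Length - 1].Upper : 0;
    }
}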

Precision on client side

This implementation has a nice advantage: the precision is configured on the client/emitter side, because the algorithm that computes the bucket lives there. You could change the algorithm at runtime or even have multiple implementations applied to different components. On the server side, it doesn't matter because you just have to sort and count. This is pretty convenient when you have multiple kinds of apps and components, each with its own domain (nanoseconds, milliseconds, …).

Smart Key Pattern

Until now, I've considered only one hash, but this is not enough if we want to create beautiful graphs (e.g. the 99th percentile per hour for the last 7 days). A common way in NoSQL to solve this problem is to use the smart key pattern. A key can be much more than a simple constant string value. If we know how we will query the data model later (and we do in this case), let's add dynamic parts to the key. We're working with time series, so a rounded timestamp is a natural choice. All the keys will follow the pattern 'mykey:hour:roundedts'.
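A minimal sketch of such a key generator (the pattern follows the text; the helper name and the hourly rounding are mine):

using System;

static class SmartKey
{
    // Builds a key like 'mykey:hour:1405605600' from a base name and a timestamp.
    public static string ForHour(string baseKey, DateTimeOffset when)
    {
        long ts = when.ToUnixTimeSeconds();
        long roundedTs = ts - (ts % 3600);   // round down to the hour
        return $"{baseKey}:hour:{roundedTs}";
    }
}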

We can't aggregate percentiles, and there is no "magical algorithm" to compute percentiles from percentiles. If we want to investigate response times over a few minutes, we need an aggregation for every minute. The best way to do this is to increment several smart keys at the same time (minute, 5 minutes, hour, day, …) on the client side. That's not a real problem on the client side if your binding implements pipelining.
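With StackExchange.Redis, for example, a batch can send all the increments in a single round trip. This is only a sketch under the same assumptions as the earlier snippets (hypothetical key pattern and helper names):

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

static class MultiResolutionEmitter
{
    // Increment the same bucket in the minute, hour and day keys, pipelined in one round trip.
    public static Task RecordAsync(IDatabase db, string baseKey, DateTimeOffset when, long bucketUpperBound)
    {
        var batch = db.CreateBatch();
        var tasks = new[]
        {
            batch.HashIncrementAsync($"{baseKey}:minute:{Round(when, 60)}", bucketUpperBound, 1),
            batch.HashIncrementAsync($"{baseKey}:hour:{Round(when, 3600)}", bucketUpperBound, 1),
            batch.HashIncrementAsync($"{baseKey}:day:{Round(when, 86400)}", bucketUpperBound, 1),
        };
        batch.Execute();            // flushes the pipelined commands
        return Task.WhenAll(tasks);
    }

    private static long Round(DateTimeOffset when, long seconds)
    {
        long ts = when.ToUnixTimeSeconds();
        return ts - (ts % seconds);
    }
}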

Limit the impact on client side

We've seen that buckets can be incremented quite easily on the client side, i.e. without spending too many CPU cycles. However, it is not always a good thing. Depending on your application, a better approach could be to implement some kind of batch processing. If you receive 100 requests during the same second, there is a high probability that you can group your timings (10 requests < 10 ms, 30 requests between 10 and 20 ms, 15 between 20 and 30 ms, …). This will significantly reduce the number of commands executed.
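A sketch of that idea, assuming an in-memory accumulator flushed periodically (the class name and the flushing policy are mine, not part of the original implementation):

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using StackExchange.Redis;

class BufferedBucketEmitter
{
    private readonly ConcurrentDictionary<long, long> pending = new ConcurrentDictionary<long, long>();

    // Called for every request: only touches local memory.
    public void Record(long bucketUpperBound)
    {
        pending.AddOrUpdate(bucketUpperBound, 1, (_, count) => count + 1);
    }

    // Called by a timer (e.g. every second): one HINCRBY per non-empty bucket.
    public Task FlushAsync(IDatabase db, string key)
    {
        var batch = db.CreateBatch();
        var tasks = new List<Task>();
        foreach (var bucket in pending.Keys)
        {
            long count;
            if (pending.TryRemove(bucket, out count))
                tasks.Add(batch.HashIncrementAsync(key, bucket, count));
        }
        batch.Execute();
        return Task.WhenAll(tasks);
    }
}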

To illustrate the concept, I've created a small gist using Node.js. This example is not production-ready and far from my current implementation (C#), but it should help you understand the principles of this approach.

A few lines of code later, and with the help of your favorite charting library, you can visualize this kind of graph. Please take a moment to compare the average and the percentiles. Percentiles are definitely much better!

 

[Figure: range, comparing average and percentiles]

Storing Time Series in Redis

Last time, I explored how to store time series in Microsoft Azure Table Service. This time I'll do the same in Redis. It is a very popular key-value store (but not only that) and I highly encourage you to check it out if you don't know it yet. In addition to my previous post, it's important to note that time series have very specific properties:

  • Append only
    Most of the time, collectors or emitters append data “at the end”.
  • Readonly mode
    Time series are never updated. Points are just read to create beautiful charts, compute statistics, etc. Most probably, the latest points are more important than the others: it's common to use the last minute, last hour or last day in charts.
  • Big Data
    A simple collector that runs every second creates 86 400 points per day, 2 592 000 per month, 31 536 000 per year… and this is just for a single metric.

There are many ways to implement time series in Redis, for example by using sorted sets, lists or even hashes. None is really better than the others and the choice depends on your context (yes, again). A few weeks ago, I needed a very efficient way to store millions of data points. Here are the details of this implementation, heavily inspired by a repository from @antirez: https://github.com/antirez/redis-timeseries

How it works

The main idea is to use the APPEND command.

If key already exists and is a string, this command appends the value at the end of the string. If key does not exist it is created and set as an empty string, so APPEND will be similar to SET in this special case.

[Figure: layout of data points inside a Redis string]
Every time series is stored in a single string containing multiple data points, mostly ordered in time. The library appends every data point as a serialized string terminated by a special character ('#'). Internally, every data point contains two parts, a timestamp and a value, delimited by another special character (':').

Now, to make things a little more efficient, key names are generated via a basic algorithm. The key name contains a constant (user-defined) and a rounded timestamp, so we do not add all the points to the same key. The rounding factor depends on your configuration settings. If you set it to one hour, all data points inserted between 4 PM and 5 PM will go to the same key.
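Put together, the write path boils down to something like this (a sketch with StackExchange.Redis, not the actual RedisTS code; the key format and the one-hour rounding are illustrative):

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

static class TimeSeriesWriter
{
    // Appends one data point 'timestamp:value#' to the key covering its time slot.
    public static Task AddAsync(IDatabase db, string seriesName, DateTimeOffset when, long value, long keyRoundingSeconds = 3600)
    {
        long ts = when.ToUnixTimeSeconds();
        long roundedTs = ts - (ts % keyRoundingSeconds);   // e.g. one key per hour
        string key = $"{seriesName}:{roundedTs}";
        string dataPoint = $"{ts}:{value}#";               // 'timestamp:value' plus the terminator
        return db.StringAppendAsync(key, dataPoint);       // APPEND is O(1)
    }
}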

Reading data is done via GETRANGE. Why not a basic GET? Mainly because each key can contain a lot of data points, and I don't want to risk an OutOfMemoryException, excessive large-heap allocations or GC pauses… Depending on the rounding factor, it may also be necessary to hit Redis several times, for example if the rounding factor is set to one hour and you want the last 4 hours.
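The read side could look like the following sketch (again, not the library's actual code): read each key in fixed-size chunks with GETRANGE and parse the 'timestamp:value#' entries.

using System;
using System.Collections.Generic;
using System.Text;
using StackExchange.Redis;

static class TimeSeriesReader
{
    // Read one key in chunks of 'chunkSize' bytes to avoid loading a huge string in one shot.
    public static IEnumerable<(long Timestamp, long Value)> ReadKey(IDatabase db, string key, int chunkSize = 64 * 1024)
    {
        var buffer = new StringBuilder();
        long offset = 0;
        while (true)
        {
            string chunk = db.StringGetRange(key, offset, offset + chunkSize - 1);
            if (string.IsNullOrEmpty(chunk)) break;
            buffer.Append(chunk);
            if (chunk.Length < chunkSize) break;   // reached the end of the stored string
            offset += chunk.Length;
        }

        // Each data point is 'timestamp:value' terminated by '#'.
        foreach (var entry in buffer.ToString().Split(new[] { '#' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var parts = entry.Split(':');
            yield return (long.Parse(parts[0]), long.Parse(parts[1]));
        }
    }
}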

Pros:
  • Very space efficient
  • Fast inserts (APPEND is O(1))
  • Up to 512 MB per key
  • TTL on keys

Cons:
  • Inserts at the end only
  • Data points are not guaranteed to be ordered
  • Range queries inside a key are not supported (would require fixed-size data points)

Introducing RedisTS (Redis Time Series)

I've committed an implementation on GitHub and a NuGet package is available. The usage is pretty trivial and you can inspect the test project or the samples to understand how to use it.

Note: RedisTS depends on StackExchange.Redis. An open ConnectionMultiplexer is required, and RedisTS will never open a new connection or close the current one.

First, you have to create a client for your time series.

var options = new TimeSeriesOptions(3600 * 1000, 1, TimeSpan.FromDays(1));
var client = TimeSeriesFactory.New(db, "msts", options);

This operation is cheap and the client is, more or less, stateless. Once you have a client, you can add one or more data points. This is typically done via a background task or a scheduled task.

// here I add a new data point (Date: now, Value: 123456789)
client.AddAsync(DateTime.UtcNow, 123456789);

That's all on the client side. Now, if you insert data points from one location, you probably want to read them from another one, maybe to display beautiful charts or compute statistics. For example, if you want to get all the data points of the last hour for this series, you can use the following piece of code:

client.RangeAsync(DateTime.UtcNow.AddHours(-1), DateTime.UtcNow);

That's all for today. I hope this will help you; feedback is heavily encouraged. Thanks!

Introducing Toppler

I recently created a new repository (https://github.com/Cybermaxs/Toppler) and I would like to share with you the idea behind this project.

It's been officially one year since I discovered Redis, and like every fan I can see many possibilities here and there. I'm also quite surprised that this DB is so little known in the Microsoft stack, but I'm sure this will change in a few months as Redis Cache becomes the default caching service in Azure. But Redis is not just a cache! This key-value store has unique features and possibilities. Give it a chance.

So what is Toppler? It's just a very small package built on top of StackExchange.Redis that helps you count hits and query emitted events to build rankings and leaderboards.

Here are a few use cases where Toppler could help you:

  • You want a counter for various events (an item is viewed, a game is played, …) and statistics about emitted events for custom time ranges (today, last week, this month, …)
  • You want to implement a leaderboard with single or incremental updates.
  • You want to track events in custom dimensions and get statistics for one, two, … or all dimensions.
  • You want to provide basic recommendations by combining the most emitted events with random items for custom time ranges.

How does it work?

One of the most important aspects of Toppler is the granularity. Each hit is stored in a range of sorted sets representing different granularities of time (e.g. seconds, minutes, …).

The Granularity class has 3 properties (Factor, Size, TTL) that allow composing a smart key following the pattern [PREFIX]:[GRAN_NAME]:[TS_ROUNDED_BY_FACTOR_AND_SIZE]:[TS_ROUNDED_BY_FACTOR], where [PREFIX] is the combination of the configured namespace and the current dimension, and [TS_ROUNDED_XX] is the unix timestamp rounded for the given granularity.

Here are the values for the 4 default Granularities

Granularity | Factor (s) | TTL (s) | Size
Second | 1     | 7200     | 3600
Minute | 60    | 172800   | 1440
Hour   | 3600  | 1209600  | 168
Day    | 86400 | 63113880 | 365

A TTL is assigned to each key (using the Redis EXPIREAT command) to keep DB space usage reasonable.

So, a hit emitted at 17/07/2014 14:23:18 (UTC) will create or update the following keys (a sketch of the rounding is shown after the list):

  • [NAMESPACE]:[DIMENSION]:second:1405605600:1405606998
  • [NAMESPACE]:[DIMENSION]:minute:1405555200:1405606980
  • [NAMESPACE]:[DIMENSION]:hour:1405555200:1405605600
  • [NAMESPACE]:[DIMENSION]:day:1387584000:1405555200
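Here is a minimal sketch of that rounding (my own code, not Toppler's, but it reproduces the example keys above):

using System;

static class GranularityKeys
{
    // Builds '[prefix]:[granularity]:[ts rounded by factor*size]:[ts rounded by factor]'.
    public static string For(string prefix, string granularityName, long factor, long size, DateTimeOffset when)
    {
        long ts = when.ToUnixTimeSeconds();
        long roundedByFactorAndSize = ts - (ts % (factor * size));   // identifies the key's time slot
        long roundedByFactor = ts - (ts % factor);                   // the finer-grained rounding
        return $"{prefix}:{granularityName}:{roundedByFactorAndSize}:{roundedByFactor}";
    }
}

For instance, GranularityKeys.For("ns:dim", "minute", 60, 1440, new DateTimeOffset(2014, 7, 17, 14, 23, 18, TimeSpan.Zero)) returns "ns:dim:minute:1405555200:1405606980", which matches the minute key in the list above.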

When an event is emitted, the number of hits (often 1) is added to the target Sorted Set via the ZINCRBY command.
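In raw StackExchange.Redis terms, emitting a hit boils down to something like this sketch (it illustrates the mechanism, not Toppler's actual API; names are mine):

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

static class HitEmitter
{
    // ZINCRBY <key> <hits> <eventId>, then EXPIREAT so the key expires after its TTL.
    public static async Task EmitAsync(IDatabase db, string key, string eventId, double hits, TimeSpan ttl)
    {
        await db.SortedSetIncrementAsync(key, eventId, hits);
        await db.KeyExpireAsync(key, DateTime.UtcNow.Add(ttl));
    }
}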

The retrieval of results uses the same logic to recompose the keys, as the granularity and resolution are parameters of the Ranking method, but it relies on the ZUNIONSTORE command to combine all the results into a single sorted set. This allows storing the result of the query (like a cache) or applying a weight function.
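Again as a sketch of the underlying commands (the method name, the destination key and the key list are illustrative, not Toppler's API):

using System.Threading.Tasks;
using StackExchange.Redis;

static class Rankings
{
    // Combines several per-period sorted sets into one and reads the top 10 by score.
    public static async Task<SortedSetEntry[]> TopAsync(IDatabase db, RedisKey destination, RedisKey[] periodKeys)
    {
        // ZUNIONSTORE destination <periodKeys...> : hit counts are summed by default
        await db.SortedSetCombineAndStoreAsync(SetOperation.Union, destination, periodKeys);
        return await db.SortedSetRangeByRankWithScoresAsync(destination, 0, 9, Order.Descending);
    }
}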

Show me the code!

[Figure: topplerintro, a basic usage example]

It's just a very basic example; many additional options are available to emit events (when, in which dimension, how many hits, …) or compute statistics (single/multiple dimensions, caching, granularity and resolution, weight function, …).

The project is currently in beta, so please be indulgent and patient. Feel free to contact me, create issues, tell me what's wrong… Thanks.

Acknowledgements