Our technology is based on comparing data segments. It can measure the degree of similarity between data coming from different servers, different buildings, and so on.
Our main objective is anomaly detection: we look for segments that are unique and do not resemble any previous events. However, the ability to compare segments is valuable in its own right, as it offers an interesting view of the data.
For example, we can:
- Verify our assumption that supposedly similar servers actually behave similarly;
- Detect differences in the web traffic of various countries;
- Detect changes that occurred in the data overnight;
- Detect websites whose behaviour changed the most during the previous hour;
- Detect websites with the most and the fewest resembling sites.
Many more insights can be obtained by examining statistical similarities between segments. Note that such analysis becomes complicated and inefficient when performed via SQL.
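To make the idea concrete, here is a minimal sketch of segment comparison and anomaly flagging. It assumes each segment is summarized as a numeric feature vector and uses cosine similarity with a hypothetical threshold; the function names, feature layout, and threshold value are illustrative, not part of the actual product.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two segment feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_anomalous(segment: np.ndarray, history: list, threshold: float = 0.9) -> bool:
    """Flag a segment as anomalous if it resembles none of the previously seen segments."""
    return all(similarity(segment, past) < threshold for past in history)

# Hypothetical segments: e.g. (requests/sec, error rate, avg latency) per interval.
history = [np.array([100.0, 10.0, 1.0]), np.array([110.0, 12.0, 1.5])]
normal  = np.array([105.0, 11.0, 1.2])   # resembles earlier traffic
spike   = np.array([5.0, 200.0, 90.0])   # unlike anything seen before

print(is_anomalous(normal, history))  # False
print(is_anomalous(spike, history))   # True
```

The same pairwise-similarity matrix also supports the list items above, e.g. ranking sites by how many peers exceed the similarity threshold.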