One of the benchmark results we are extremely proud of at X-IO is how the Axellio Edge Compute Platform did on the Securities Technology Analysis Center (STAC) tick-analytics benchmark, STAC-M3.  This benchmark measures the total solution performance of analyzing time-series data such as tick-by-tick quote and trade histories.  The faster these systems can back-test and eliminate the haystack of unprofitable ideas, the more productive the data scientists developing trading algorithms will be.  The test can be run with one year's or five years' worth of data to measure how well the solution scales.
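To make the workload concrete, here is a minimal sketch of the kind of time-series aggregation a tick-analytics back-test runs, in the spirit of a "high bid per interval" query. The symbols, column names, and data below are invented for illustration; this is not the STAC-M3 implementation.

```python
import numpy as np
import pandas as pd

# Synthetic tick data -- hypothetical symbols and a random-walk bid price.
rng = np.random.default_rng(0)
n = 10_000
ticks = pd.DataFrame({
    "timestamp": pd.date_range("2017-01-02 09:30", periods=n, freq="s"),
    "symbol": rng.choice(["AAA", "BBB"], size=n),
    "bid": 100 + rng.standard_normal(n).cumsum() * 0.01,
})

# Highest bid per symbol per week -- the shape of a WKHIBID-style query.
weekly_high_bid = (
    ticks.set_index("timestamp")
         .groupby("symbol")["bid"]
         .resample("W")
         .max()
)
print(weekly_high_bid)
```

A real back-test fires thousands of such queries across years of tick history, which is why storage read throughput and CPU/RAM proximity dominate the results.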

In the 1-yr version of the test, there are 17 different metrics. Axellio had been the undisputed winner, beating all other submissions in a head-to-head comparison (winning more than 50% of the metrics).  We held that title from the end of October 2017 until just a few nights ago.

You can learn a lot from benchmarks.  And just like other things in life, you really learn a lot when you get beat.  This week, eXtremeDB and E8 Storage submitted STAC-M3 benchmark results that take the mantle away from Axellio by a single test case (9/17 vs. 8/17).  I have been waiting for this day for a while, as I had always wondered what the system that finally beat us would look like.  I have to say, I'm impressed by the amount of gear they had to throw at the problem to win by a single test (of 17).

The design goal of the Axellio Edge Compute Platform is to get the CPU, RAM, and data as close together as possible, on a hardware platform designed to maximize the throughput between these components.  Through the FabricXpress architecture, Axellio can reach internal storage speeds of up to 60GB/sec and can employ up to 88x CPU cores and 2TB of RAM.

For our STAC-M3 benchmark submission, we used a single 2U Axellio with 88x CPU cores, 1TB of RAM, and 70x 800GB NVMe storage devices.  This configuration enabled us to knock off the two previous record holders, Vexata and DSSD.

The new E8 submission follows the same theme as both the Vexata and DSSD submissions: lots of CPU cores, lots of RAM, a huge, expensive switch in the middle (consuming 14x ports), dedicated adapters for storage, and a separate, dedicated storage system with insanely expensive Optane SSDs. E8 only ran the test with 1-yr of data, as 5-yrs worth of data requires over 50TB of capacity.  That would take an insane number of Optane SSDs (at 350GB each).
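The capacity math behind that claim is straightforward. This back-of-the-envelope calculation uses only the figures cited above (50TB dataset, 350GB drives) and ignores any RAID or replication overhead, which would push the count even higher:

```python
# Drives needed to hold the 5-yr STAC-M3 dataset on 350GB Optane SSDs.
# Raw capacity only -- real deployments add RAID/replication overhead.
dataset_tb = 50      # 5-yr dataset size cited above, in TB
drive_gb = 350       # per-drive Optane capacity cited above, in GB

drives_needed = -(-dataset_tb * 1000 // drive_gb)   # ceiling division
print(drives_needed)  # -> 143
```

At well over a hundred drives before any redundancy, it is easy to see why the 5-yr test was skipped.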

So let’s summarize how much more gear was used:

And for this huge amount of gear, they must have won by a lot, right?  Not really…  Below are the scores from each of the tests; each score is the time it took to complete that test (latency).  Smaller numbers are better, and wins are shown in green:

Axellio kept its records across all submissions for the QTRHIBID, WKHIBID, STATS-AGG, and STATS-UI test cases.

There's also a measure of the storage throughput achieved during the test, and with 8x 100Gbps connections to the storage, E8 must have done great on that, right?  No again…  Axellio beat them here as well, reaching ≈20GB/sec of read throughput in two of the tests.

So, while I do have to give credit where credit is due (they are on top, after all), the cost and size of the solution it took to beat Axellio absolutely proves the point of the design.  Solutions with massive (i.e., expensive) high-speed interconnects, tons of CPU cores and RAM, high power usage, and lots of rack space are simply going to be too expensive in terms of both CapEx and OpEx. Getting a simpler system that performs better, for less money, is going to be a massive competitive advantage for financial organizations, where getting to answers faster is how they generate alpha.  The Axellio Edge Compute Platform can be the advantage your organization has been looking for to break out of the trap of the traditional architecture approach to high performance. Check out more on Axellio on our STAC page.
