One of the benchmark results we are extremely proud of at Axellio Inc. is how our FabricXpress Edge Compute Platform did on the Securities Technology Analysis Center (STAC) tick analytics benchmark, STAC-M3.  This benchmark measures the total solution performance of analyzing time-series data such as tick-by-tick quote and trade histories.  The faster a system can back-test and eliminate the haystack of unprofitable ideas, the more productive the data scientists developing trading algorithms will be.  The test is run with both 1 year and 5 years of data to gauge how well the solution scales.
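To make that workload concrete, here is a minimal sketch of the kind of query STAC-M3 stresses: scanning a tick-by-tick quote history for the high bid across a basket of symbols over a date window. The schema (symbol, timestamp, bid) and the pandas approach are illustrative assumptions on my part, not the benchmark's actual implementation:

```python
import pandas as pd

def high_bid(quotes: pd.DataFrame, symbols, start, end) -> pd.Series:
    """Maximum bid per symbol within [start, end) of a quote history."""
    window = quotes[
        quotes["symbol"].isin(symbols)
        & (quotes["timestamp"] >= start)
        & (quotes["timestamp"] < end)
    ]
    return window.groupby("symbol")["bid"].max()

# A few synthetic ticks for two symbols, purely for illustration.
quotes = pd.DataFrame({
    "symbol": ["AAA", "BBB", "AAA", "BBB", "AAA", "BBB"],
    "timestamp": pd.to_datetime([
        "2018-01-02 09:30", "2018-01-02 09:30",
        "2018-01-03 10:00", "2018-01-03 10:00",
        "2018-01-04 15:59", "2018-01-04 15:59",
    ]),
    "bid": [10.00, 20.00, 10.45, 19.80, 10.90, 20.30],
})
print(high_bid(quotes, ["AAA", "BBB"], "2018-01-02", "2018-01-06"))
```

The real benchmark runs queries like this against billions of ticks, which is why storage throughput and CPU/RAM proximity dominate the results.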

In the 1-yr version of the test, there are 17 different metrics. Axellio's FabricXpress had been the undisputed winner, beating all other submissions in head-to-head comparison (winning more than 50% of the metrics).  We held that title from the end of October 2017 until just a few nights ago.

You can learn a lot from benchmarks.  Just like outside the data center, getting beat teaches you a lot.  This week eXtremeDB and E8 Storage submitted STAC-M3 benchmark results that take the mantle away from Axellio by a single test case (9 wins to our 8, out of 17).  We've waited patiently for this day, knowing it would come.  I have to say, I'm impressed with the amount of gear they had to throw at the problem to win by one test out of 17.

The design goal of Axellio's Edge Compute Platform is to maximize throughput and minimize latency between CPU, RAM, and data.  Through its unique PCIe fabric architecture, FabricXpress can reach internal storage speeds of up to 60GB/sec and can employ up to 88 CPU cores and 2TB of RAM.

For our STAC-M3 benchmark submission, we used a single 2U FabricXpress with 88 CPU cores, 1TB of RAM, and 70 x 800GB NVMe storage devices.  This configuration enabled us to knock off the two previous record holders, Vexata and DSSD.

Like both Vexata and DSSD before it, the new E8 submission follows the same theme: large amounts of CPU cores and RAM, a HUGE, expensive switch in the middle (consuming 14 ports), dedicated adapters for storage, and a separate, dedicated storage system with insanely expensive Optane SSDs. E8 ran the test with only 1 year of data; 5 years of data requires over 50TB of capacity, which we can only conclude wasn't a worthwhile endeavor for E8 when such a high-end SSD is only 350GB each.

So let’s summarize how much more gear was used:

And with this huge amount of gear, they must have really won by a lot, right?  Not really…  Below are the scores from each test; each score is the time it took to complete that test (latency).  Smaller numbers are better, and wins are shown in green:

Axellio kept its records across all submissions for the QTRHIBID, WKHIBID, STATS-AGG, and STATS-UI test cases.
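For a sense of what the aggregate-statistics test cases involve, here is a rough sketch of a STATS-style aggregation over quote data: per-symbol, per-minute summary statistics of the bid-ask spread. The column names and the pandas approach are assumptions for illustration only; the actual operations are defined by the STAC-M3 specification:

```python
import pandas as pd

def minute_stats(quotes: pd.DataFrame) -> pd.DataFrame:
    """Mean and max bid-ask spread per symbol per minute."""
    q = quotes.assign(spread=quotes["ask"] - quotes["bid"])
    return (
        q.set_index("timestamp")
         .groupby("symbol")
         .resample("1min")["spread"]
         .agg(["mean", "max"])
    )

# Tiny synthetic example.
quotes = pd.DataFrame({
    "symbol": ["AAA", "AAA", "BBB"],
    "timestamp": pd.to_datetime([
        "2018-01-02 09:30:01", "2018-01-02 09:30:40", "2018-01-02 09:30:10",
    ]),
    "bid": [10.00, 10.05, 20.00],
    "ask": [10.02, 10.06, 20.03],
})
print(minute_stats(quotes))
```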

There's also a measure of the storage throughput achieved during the test, and with 8 x 100Gbps connections to the storage, E8 must have done great, right?  No again…  Axellio beat them here as well, reaching ≈20GB/sec of read throughput in two of the tests.
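For reference, one back-of-the-envelope way to observe aggregate read throughput while a query runs on Linux is to sample sectors-read from /proc/diskstats before and after an interval. The NVMe device names below are hypothetical, and this is not how STAC audits throughput; it is just a sketch of the measurement:

```python
import time

SECTOR_BYTES = 512  # /proc/diskstats reports 512-byte sectors

def sectors_read(devices):
    """Sum sectors read across the named block devices."""
    total = 0
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] in devices:
                total += int(fields[5])  # field 6: sectors read
    return total

def read_throughput(devices, interval_s=1.0):
    """Aggregate read throughput (GB/s) across `devices` over `interval_s`."""
    before = sectors_read(devices)
    time.sleep(interval_s)
    after = sectors_read(devices)
    return (after - before) * SECTOR_BYTES / interval_s / 1e9

if __name__ == "__main__":
    nvme = {f"nvme{i}n1" for i in range(4)}  # hypothetical device list
    print(f"{read_throughput(nvme):.2f} GB/s read")
```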

So, while I do have to give credit where credit is due (they are on top, after all), the cost and size of the solution it took to beat Axellio absolutely proves the point of the design.  Solutions built around a massive (i.e. expensive) high-speed interconnect, tons of CPU cores and RAM, high power usage, and lots of rack space are simply going to be too expensive, in both CapEx and OpEx. A simpler system that performs better, for less money and lower operating cost, is going to be a massive competitive advantage for financial organizations, where getting the answer faster is how they generate alpha.  The Axellio FabricXpress Edge Compute Platform can be the advantage your organization has been looking for to break out of the trap of the traditional architecture approach to high performance. Check out more on Axellio on our STAC page.
