The Trading Chain and Latency Hotspots
Building A Low Latency Infrastructure For Electronic Trading, by Sun Microsystems, Inc., p27, Chapter 4
Several applications make up the trading chain, each performing specific functions with different latency profiles and requirements. The major applications are discussed below.
Liquidity Venue / Exchange Order Matching
Order matching systems maintain a ‘book’ of orders on both sides of the market (bids and offers) across a number of securities. For each new bid or offer received, the matching software must scan the book as quickly as possible for orders on the opposite side that can be matched against it. While the matching process calls for fairly simple logic, the requirement to handle thousands of matches per second demands very high performance.
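The matching logic can be sketched as a toy price-time-priority book. This is an illustration only, in Python for brevity; the production engines described here are proprietary, highly optimized native or Real-Time Java code:

```python
from collections import deque

class OrderBook:
    """Toy limit order book with price-time priority matching."""

    def __init__(self):
        self.bids = {}  # price -> deque of (order_id, qty), oldest first
        self.asks = {}

    def add(self, side, price, qty, order_id):
        """Match an incoming order against the opposite side; rest any remainder.

        Returns a list of fills: (resting_order_id, trade_price, traded_qty).
        """
        book, opp = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        crosses = (lambda p: p <= price) if side == "buy" else (lambda p: p >= price)
        fills = []
        while qty > 0 and opp:
            best = min(opp) if side == "buy" else max(opp)  # best opposite price
            if not crosses(best):
                break
            queue = opp[best]
            rid, rqty = queue[0]              # oldest order at that price level
            traded = min(qty, rqty)
            fills.append((rid, best, traded))
            qty -= traded
            if traded == rqty:
                queue.popleft()
                if not queue:
                    del opp[best]
            else:
                queue[0] = (rid, rqty - traded)
        if qty > 0:                            # remainder rests in the book
            book.setdefault(price, deque()).append((order_id, qty))
        return fills
```

A marketable buy sweeps the cheapest asks first, taking older orders ahead of newer ones at the same price, which is the priority rule most venues apply.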
When a match occurs, both parties to the transaction are informed and a trade report is distributed to the marketplace at large (and archived for end-of-day data feed services and for regulatory purposes). Single-digit millisecond latency is common for major execution venues.
Orders and confirmations are typically conveyed via the FIX protocol, with communication usually handled by dedicated FIX gateways. Such gateways are generally supplied by specialist vendors, such as NYFIX, Orc Software and ULLINK.
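A FIX message is a sequence of tag=value fields separated by SOH (0x01) delimiters, framed by a BodyLength (tag 9) and CheckSum (tag 10) that are computed over the body. A minimal framing sketch follows; the field values are illustrative, and a real session message would also carry required fields (sender/target IDs, sequence number, sending time) that are omitted here:

```python
SOH = "\x01"

def fix_message(msg_type, fields):
    """Assemble a FIX 4.2 message, computing BodyLength (9) and CheckSum (10).

    BodyLength counts the bytes after the 9= field's delimiter up to and
    including the delimiter before 10=; CheckSum is the byte sum of everything
    before the 10= field, modulo 256, rendered as three digits.
    """
    body = SOH.join(f"{tag}={val}" for tag, val in [("35", msg_type)] + fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    checksum = sum((head + body).encode()) % 256
    return f"{head}{body}10={checksum:03d}{SOH}"
```

This framing is what a FIX gateway must produce and validate at wire speed for every order and confirmation.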
However, the matching element of the process is usually conducted via a proprietary-coded application, which is highly optimized for speed. Consistency of matching speed (that is, low jitter) is considered as important as absolute speed, as is very high availability. Efficient connectivity between the FIX gateway and the order matching engine (as well as market data distribution covered below) is a fundamental requirement.
Thus, Sun technologies such as x64 servers (for high throughput), Solaris (high performance, low jitter, high availability), Containers (fast inter-application communication) and the Real-Time Java System (for low jitter) are appropriate for developing order matching systems.
Market Data Handling
Market data handlers – whether at an execution venue distributing feeds to market participants (both trading firms and commercial aggregators/redistributors) or at a trading firm receiving data – perform similar functions: very fast input/output, data normalization, and error detection and recovery.
Between an execution venue and trading firms, the historical norm has been to deploy proprietary communications protocols that are optimized for high throughput (small message headers and payloads). Some venues are beginning to use the FAST (FIX Adapted For Streaming) protocol, though not all are supporting this standard to the full.
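The throughput gain from small payloads can be illustrated with delta encoding, one of the ideas behind FAST: send the first price in full, then only integer tick deltas. This sketch is a simplification for illustration; FAST itself defines field operators, templates and stop-bit encodings well beyond this:

```python
def delta_encode(prices, tick=0.01):
    """Send the first price as a tick count, then integer deltas per update.

    Small integers compress far better on the wire than full float prices.
    """
    ticks = [round(p / tick) for p in prices]
    return [ticks[0]] + [b - a for a, b in zip(ticks, ticks[1:])]

def delta_decode(encoded, tick=0.01):
    """Rebuild the price sequence by accumulating the deltas."""
    total, out = 0, []
    for d in encoded:
        total += d
        out.append(round(total * tick, 2))
    return out
```

A run of near-unchanged quotes encodes as a stream of tiny integers, which is why delta-based protocols keep message payloads small.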
For trading firms, market data receivers typically maintain a local cache of the latest prices, perhaps enriched with high/low information or time series data. Such functionality is of general use to trading applications, so maintaining a single cache makes design sense.
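Such a cache can be sketched in a few lines; this toy Python version keeps the latest price per symbol and enriches it with session high/low, as described above:

```python
class PriceCache:
    """Latest-price cache per symbol, enriched with session high/low."""

    def __init__(self):
        self.prices = {}  # symbol -> (last, high, low)

    def on_trade(self, symbol, price):
        """Update the cache with a new trade print."""
        _, hi, lo = self.prices.get(symbol, (price, price, price))
        self.prices[symbol] = (price, max(hi, price), min(lo, price))

    def snapshot(self, symbol):
        """Return (last, high, low), or None if the symbol is unseen."""
        return self.prices.get(symbol)
```

Any trading application on the same host can then read one shared snapshot rather than each maintaining its own feed state.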
Akin to order matching, the business logic that makes up market data feed handlers is relatively simple, though once again the need for high performance is paramount (handling hundreds of thousands of messages per second from a single data feed). Sun x64 servers are ideal as the hardware platform (Intel chipsets are generally considered to offer optimum I/O), running the combination of Solaris Containers and Real-Time Java for the operating and application software.
Sun’s specialist storage servers, especially when equipped with Flash technology, might be used for maintaining caches, or audit trail time series for regulatory purposes.
Messaging middleware links market data handling components with those performing order management and trade execution. Sun’s close work with messaging middleware vendors to improve performance and scalability makes it the platform of choice for such middleware.
Algorithmic Trading Engines
Algorithms come in many forms with very different purposes. Consequently, it is difficult to be specific about the functionality performed by algorithmic trading engines. As a generalization, however, all are likely to require access to different data points upon which they perform analysis that triggers one or more events. The analysis can vary in complexity from simple pattern recognition to the application of complex mathematical formulae.
In addition to proprietary code, some functionality for complex event processing (CEP) might be acquired from a specialist vendor and would make an ideal platform on which to build algorithms. CEP engines are specialist applications that can be programmed to look for patterns of events in incoming data streams, generating alerts when specific conditions are met. Such a condition might cause some proprietary code to be executed in order to process specific algorithm rules. CEP engines generally are designed to take advantage of multi-threading, and hence scale linearly when implemented on multi-core systems. Sun partners with several
CEP vendors including Aleri, Progress/Apama, Coral8, and StreamBase Systems, and has worked with them to optimize their offerings for Sun environments. A recent benchmark running Aleri’s CEP offering on a Sun Intel-based server running Solaris achieved a data update rate of 300,000 updates per second, simulating a cross-venue order book liquidity aggregation function that might form part of an execution algorithm.
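The CEP pattern can be sketched in miniature: rules are predicates evaluated over a sliding window of incoming events, firing alerts when their condition holds. This toy Python engine only illustrates the idea; commercial CEP products add query languages, time windows and multi-threaded stream partitioning:

```python
from collections import deque

class CepEngine:
    """Tiny CEP-style engine: rules are predicates over a sliding event window."""

    def __init__(self, window=10):
        self.window = deque(maxlen=window)  # most recent events only
        self.rules = []                     # (name, predicate over the window)

    def add_rule(self, name, predicate):
        self.rules.append((name, predicate))

    def on_event(self, event):
        """Push an event and return the names of all rules that now fire."""
        self.window.append(event)
        return [name for name, pred in self.rules if pred(self.window)]
```

For example, a rule detecting three consecutive upticks could hand off to proprietary execution code whenever it fires, which is the division of labour the text describes.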
Depending on the complexity of the algorithm, Sun x64 or SPARC servers might be deployed. In either case, Solaris, Containers, and the Real-Time Java System are suitable technologies on which to base algorithmic trading, whether or not it includes vendor CEP offerings.
As noted earlier, collocation of algorithmic trading engines with execution venue matching systems can reduce latency caused by physical distance. Since collocation services usually charge by physical floor space used and power consumed, Sun server offerings – whether rack mount or blade – are an attractive option because of the high density of compute power they provide. Solaris and
Container technologies can maximize the application load running on servers, while also increasing performance.
Smart Order Routing and DMA
Within a sell-side trading firm, Smart Order Routers and DMA infrastructure are used to route orders to execution venues, either on behalf of the firm’s own proprietary algorithmic trading function or (in the case of DMA) from the firm’s buy-side customers.
Smart Order Routing systems ensure that orders are executed at the best price, which must be determined across a number of execution venues including dark pools. As such, they complement execution algorithms that determine how best to feed large orders into the market.
DMA provides buy-side firms running their own algorithms, or using those supplied by sell-side trading partners, with a fast route to the execution venues, while maintaining control over how order execution is handled. SOR and DMA applications often fall into the high-performance/low-complexity category best suited to x64 servers. Once again, Solaris and Containers (perhaps allowing an SOR application and a FIX gateway to run on the same server) would form the operating software stack. Proprietary SOR and DMA applications might be written to leverage the Real-Time Java System. Alternatively, packages such as Fidessa LatentZero and SunGard BRASS – both optimized to run in Sun environments – might be considered.
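The core routing decision can be reduced to a sketch: given quotes from several venues, a marketable buy goes to the venue with the lowest ask and a sell to the venue with the highest bid. This Python illustration ignores everything a real SOR must also weigh (displayed size, fees, latency, fill probability, order splitting); a dark pool could be modelled as a venue quoting at the midpoint:

```python
def route_order(side, venue_quotes):
    """Pick the venue offering the best executable price for a marketable order.

    venue_quotes: {venue_name: (bid, ask)}. Returns (venue, price).
    """
    idx = 1 if side == "buy" else 0            # buys hit the ask, sells hit the bid
    best = min if side == "buy" else max
    venue = best(venue_quotes, key=lambda v: venue_quotes[v][idx])
    return venue, venue_quotes[venue][idx]
```

In practice this comparison must be made over live, fast-moving quotes, which is why SOR shares the market data handlers' low-latency requirements.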