In our previous blog, we explored whether Java was fast enough for trading applications. We began to discuss what different languages do well – and where we need to make optimisation choices, which is where C++ and FPGAs come into play.
Accelerating applications with C++
Given enough time and an expert developer (and at IMC, we have plenty), C++ is usually faster than Java. Although C++ does not have the luxury of a just-in-time compiler that optimises the program as it runs, its compile-once model lets it spend far more time on advanced, time-intensive optimisations. The C++ compiler can optimise your entire application in a very sophisticated way before it even runs. Simply put, improvements are front-loaded so the application runs faster.
What’s more, C++ naturally offers a lower level of abstraction when compared to a language like Java. This allows more complicated optimisations in tune with the hardware of your target system. However, for some tasks even C++’s speed is not enough. This is where Field Programmable Gate Arrays (FPGAs) enter the picture.
Specialising with FPGAs
Normally an application runs on a general-purpose CPU. The application is compiled into instructions which the CPU processes. FPGAs are different. Here, our application is a description of hardware: an alternative to the CPU itself!
Why is this useful? Specialisation. Rather than feeding instructions to a one-size-fits-all CPU, FPGAs allow us to direct hardware to do exactly (and only) what we need it to do. This provides the best possible determinism and, for many applications, the best latency.
However, while FPGAs can deliver raw performance, parallelism and, crucially, deterministic behaviour, they come with considerable costs (think people, tools and time). For example, compiling a C++ program to run on a CPU might take minutes, while most FPGA designs take at least an hour to build.
Programming FPGAs typically takes more effort too (as well as an increasingly niche and rare skill set). FPGAs also require significant up-front effort and investment to learn the nuances of high-speed solutions. This investment is unavoidable, as those nuances become vital competitive advantages. Put this together with a market response time that has dropped from seconds to nanoseconds, and the cost of being competitive becomes increasingly high.
We have reached a point where FPGA development has crossed the threshold of viability with respect to traditional software architectures. With their ability to maintain ultra low-latency service times while simultaneously executing computationally intensive algorithms, they are quickly becoming a staple of electronic markets. Indeed, the advantages of specialisation and raw performance offered by FPGAs make them a key tool here at IMC.
When it comes to Java, C++ and FPGAs, achievable performance and development difficulty rise together: there is no clear ‘winner,’ especially considering the entire development process. What really matters is using the right layer of abstraction for the necessary level of performance, and knowing what is fast enough.
If your application spends 90% of its time in just 10% of its code, Java makes optimising that 10% harder, but it makes writing and maintaining the other 90% of your application easier. For teams of mixed ability, it’s often the smartest choice of language.
If the problematic 10% of your application is particularly latency-sensitive and Java does not meet performance requirements, C++ offers the ability to fine-tune with the help of expert developers. However, it makes writing and maintaining the other 90% of your application more difficult.
If either of these languages still can’t meet requirements, FPGAs offer an even higher level of performance – but they come with the high cost of overall complexity and difficulty.