For mission-critical IT systems, robustness is simply a must. This is nothing new or specific to the IT industry. Take a bridge, for example – engineers calculate maximum loads, study the specifics of the surrounding environment (geological formations, earthquake risk), consider the characteristics of the construction materials, and then build the bridge with decent margins of error. Some might say that over-engineering is not needed in IT, since if software crashes, we just restart it – we don’t need to build a new bridge (there is the wonderful concept of “ephemeral services” in modern software). Well, that is true if we look at a single instance (one bridge), or if the impact is non-critical (you weren’t driving on the bridge). However, if thousands or even millions of users depend on the service, or if a service outage could create major impact, then the ephemeral becomes the essential.
No skills, no fun – how to balance new and proven cryptography
In the context of applied cryptography and our products, we address robustness from different perspectives – the underlying cryptography and the computer science. Both require proper skills: an understanding of how cryptography works, and sophisticated software engineering.
When it comes to cryptographic mechanisms built into IT security products, it is desirable to see a track record of use, and that there has been plenty of time to analyze and try to break the encryption. What cryptographers want from an algorithm or a protocol is that it ages well. The more cryptographers have analyzed something, the more we trust it. This is a continuous validation, and it is most reliable when we use open standards and open implementations (open source), since these are available for analysis by the maximal number of researchers, and open source is available for scrutiny by any interested party (nerds who spend their evenings making sure someone else coded properly; often unrecognized heroes!). Furthermore, it is important to know under what conditions it is sensible to use a particular technique. It is perfectly possible that, say, a cryptographic protocol is a great fit in one situation, while it has drawbacks under other circumstances.
We do not imply that the new is never better – the art is to know when the new is better. Innovation is in our hearts, and we at PrimeKey, now part of Keyfactor, strive to stay ahead of the curve. When a new technique promises better value, we are happy to be the first to implement it and deliver it to our customers. Balancing the proven against the novel is a question of experience and knowledge – which is why our team comprises experts in various fields.
(It is not) Elementary, my dear Watson – randomness is paramount
Allow me to continue with an analogy from construction engineering – it is of paramount importance that we use proper components when building a bridge, literally down to the nuts and bolts. In our area, the nuts and bolts are the quality of the cryptographic key material and the quality of the mechanism itself. The keys we use for encryption must be of appropriate length to ensure the strength of the encryption, against both today’s and possible future attackers.
Furthermore, the keys must be very, very, very random. The repetition was fully intentional – even if we use some, mathematically speaking, “provably secure” algorithm, it does not help if our adversaries can guess the keys we use. It is paramount to use a good “source of entropy” – intuitively, a source of data with a useful degree of unpredictability. Almost all manufacturers of hardware security modules (HSMs, which are dedicated cryptographic devices used to generate keys, protect them, and perform various encryption operations) use solid sources of entropy. However, things get much quirkier when we deploy pure software, as we often do today in the cloud – we don’t always know how good this randomness is. Just to be clear, our cloud offerings, both IaaS and SaaS, do use real HSMs, that is to say the best available technologies that deliver proper randomness. In addition, customers may choose to have externally located HSMs, in a specific geographical location or with a specific provider.
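To make the point about entropy concrete, here is a minimal Python sketch (illustrative only, not our product code) contrasting a cryptographically secure random source with a general-purpose one. The `secrets` module draws from the operating system’s CSPRNG, while the `random` module is a Mersenne Twister whose internal state can be reconstructed from observed outputs – an adversary who sees enough values can predict it:

```python
import secrets
import random

# Good: 256 bits of key material from the OS CSPRNG (e.g. /dev/urandom).
key = secrets.token_bytes(32)
assert len(key) == 32

# Bad: the Mersenne Twister is statistically excellent but fully predictable
# once ~624 outputs have been observed. Never use it for key material.
weak_key = bytes(random.getrandbits(8) for _ in range(32))
```

The lesson scales up: an HSM plays the role of `secrets` here, with a hardware entropy source behind it, while an under-seeded software PRNG in a cloned virtual machine can behave like the predictable case.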
In-depth robustness requires solid implementation
Next, let’s look into the cryptographic mechanisms – algorithms and protocols. As mentioned earlier, it is usually best to use mechanisms with open specifications. Unfortunately, that is not enough – there can be untoward gaps between what was intended and how something is implemented. Very often, hackers and other adversaries look for errors in implementation.
You may know this already – much of the internet’s traffic is protected with the TLS (Transport Layer Security, formerly known as SSL) protocol. Well, the so-called Heartbleed attack was successful for the most part due to particular implementation flaws in the OpenSSL library, not due to the protocol itself. Truth be told, if the organizations that used OpenSSL for free for years had chipped in a millicent each towards the OpenSSL developers, many implementation flaws would have been prevented. Nevertheless, the attack literally affected the whole world, and this shows us what (a lack of) in-depth robustness means. Some things are better today than back when Heartbleed happened (including the millicents towards OpenSSL developers); but it’s not close to good enough yet – recall the recent incidents with consumer goods supplies or gas pipelines, almost all of which can be traced to implementation errors.
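Heartbleed boiled down to trusting a length field that the attacker controls. The following toy Python sketch (illustrative, not OpenSSL’s actual C code; all names are made up) shows the class of bug – echoing back as many bytes as the request claims, rather than as many as it actually contains – and the one-line bounds check that fixes it:

```python
SECRET = b"server-private-key-material"  # stand-in for adjacent process memory

def respond_vulnerable(packet: bytes) -> bytes:
    """Echo a heartbeat payload, trusting the attacker-claimed length."""
    claimed = int.from_bytes(packet[:2], "big")  # first two bytes: claimed length
    payload = packet[2:]
    # Simulate the heap: the payload happens to sit right next to secrets.
    memory = payload + SECRET
    # BUG (Heartbleed-style): no check that claimed <= len(payload).
    return memory[:claimed]

def respond_fixed(packet: bytes) -> bytes:
    """Same echo, but silently discard requests with an impossible length."""
    claimed = int.from_bytes(packet[:2], "big")
    payload = packet[2:]
    if claimed > len(payload):
        return b""  # discard malformed record instead of over-reading
    return payload[:claimed]

# The attacker sends 3 bytes of payload but claims 100.
evil = (100).to_bytes(2, "big") + b"hi!"
leak = respond_vulnerable(evil)
assert SECRET in leak          # vulnerable version leaks adjacent "memory"
assert respond_fixed(evil) == b""  # patched version refuses
```

Note that the protocol specification was never the problem: a single missing comparison in one implementation was enough to expose private keys across the internet.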
As humans, we all make mistakes sometimes. As an IT security vendor, we try our best to avoid error situations by having dedicated product teams and by using modern best practices for product development, including ISO certifications (9001, 14001, 27001), to make sure we are on top of the game.
Long live the Bill – transparency for the win
Yeah, we like the British police, but the Bill here is another Bill – the Software Bill of Materials (SBoM). I am old enough to have worked in this industry before some of our bright young engineers were born. That gives me a certain perspective – including things I wish the young lions and lionesses had gotten from us, the old farts. One big thing I really regret in the software industry is the lack of transparency about which components are used. While open source implicitly gives us an SBoM, the same should apply to any software licensing model.
We want to know that no substandard construction material was used for the bridge we drive over frequently. Sure, we would like to know that our taxpayer money was used properly, but honestly speaking we care more that someone has checked that the nuts and bolts were not metal-painted plastic. That gives us peace of mind, and we can think about something other than whether the darn thing will collapse just as we happen to be on it.
The same thing should apply to software, right? Well, strong lobbyist forces in the USA and the EU have made it very hard for security researchers to check the nuts and bolts, claiming the need to protect trade secrets and IP rights. I am not taking a political stance here – but from a security perspective, that is complete bull-you-know-what. The security of an IT system is often compared to a chain – the weakest link will break, and for your taxpayer money or your company money, you should be entitled to see the links used.
What goes around, comes around – the great successes of criminals and government-sponsored actors (almost all outside US or EU jurisdictions) in hijacking data and interrupting the production or delivery of goods and services have made policy makers think again. At PrimeKey, we are committed to delivering the best possible security products and services to our governmental and enterprise customers. In that mission, we support requirements for the SBoM to become standard practice among vendors. We have worked hard to address supply chain security from our end – as a vendor – and we have all the relevant security software components under the common roof of PrimeKey and Keyfactor.
Robustness is an important quality of software and services, sometimes misunderstood as mere over-engineering. Yes, we do over-engineer some products, and we do it on purpose – even when our competition is sometimes fixated on price points and saving money on components. Is a slightly beefier CPU or more memory overkill, given that it improves the security posture and gives a product a longer usable lifetime? Everybody is entitled to make their own decisions. In the market segments where we deliver our products, doing more engineering rather than less is clearly appreciated. There are situations when over-engineering, perhaps combined with a lack of foresight, can turn out to be a poor decision – when a system, product or service is “too rigid” and reacts badly to stress conditions. We will address how to handle rigidness in the next blog, about crypto agility. In the meantime, I would wholeheartedly recommend the book “Antifragile: Things That Gain from Disorder” by Nassim Nicholas Taleb (the dude who wrote “The Black Swan” and “predicted” stock market crashes).