A few conclusions about Heartbleed

Enrique Dans
Apr 10, 2014


Much has been written about Heartbleed, one of the most serious security lapses in the history of the internet: anybody looking for more information would do better to consult specialist sites rather than mine. Security is not a subject to be taken lightly or speculated on unless you are an expert. That said, an analysis of the issue yields some interesting conclusions about the nature of the internet and how it came about.

We are talking here about a bug discovered in OpenSSL's implementation of the TLS heartbeat extension. OpenSSL is an open-source encryption library installed on some four million servers, a certain proportion of which run the affected version. Heartbeat refers to a procedure in the management of encrypted or secure connections by which the server verifies that the connection remains open after the key exchange, or handshake, has been completed. This avoids the connection closing and the handshake having to be carried out all over again.

The problem is that an apparently accidental error in the code implementing the heartbeat allows an attacker to read up to 64 KB of the server's memory per request, which means that through successive calls somebody can extract a great deal of a server's information in 64 KB chunks. This can be anything held in the server's memory: user accounts with their passwords, the private keys of the server's digital certificates... anything, given that at some point everything passes through the server. This is very serious stuff, particularly if we remember that the problem has been around for some time (the bug was introduced on December 31, 2011, and the affected code spread with the release of version 1.0.1 of OpenSSL on March 14, 2012) and affects services that we all use. Aside from the possibility that criminals may have used it, which is not very likely, we are probably talking about a vulnerability used systematically by some security agencies to access encrypted information online, along the lines of Edward Snowden's revelations about the NSA's ability to penetrate encrypted servers.

Aside from recommending that you use one of the many tests available to make sure a server has been patched before carrying out any transaction on it, and that you follow the password-change recommendations of the services you use (it might not be a bad idea to start using a password manager), what interests me more is the nature of the internet and how we use it.

The web has its weaknesses, dangers, and problems of all types; everything we do in life involves risk. In the case of the web we are talking about very complex systems based on technology developed by a vast number of people in very different places. Some of it was developed by businesses, some by programmers working alone or in groups, and some by volunteers who came up with approaches that were later adopted widely. Much of the web we know was developed in this way.

We generally assume that the more open code is, and the more eyes are on it, the more apparent its failures become and the easier they are to correct. In this case we are talking about a mistake that nobody spotted, or that those who did preferred to keep to themselves, giving them a master key for unauthorized access to sites.

The offline equivalent might be the discovery that a component in the food chain presents a health danger: somebody may suspect as much, but there is a lot of money at stake, so it remains a secret. We know that these kinds of things go on all the time.

The internet is no different from the rest of the tools we use. Its success rests precisely on its openness: anybody can develop code and add it to the system, which makes it a very dynamic and adaptable environment and has brought it to its present state. Perhaps we now need to think about how we reward the people who developed functions that we have all ended up using and that were never developed commercially. This process of constant development by many different people is subject to weaknesses. These may be more or less serious, and can occasionally look like grounds for rejecting the whole system, but that doesn't change the underlying reality: the internet's strength lies precisely in the fact that it has been built by a huge number of people with very different reasons for doing so.

Heartbleed is not the first serious weakness to have been discovered on the internet, and it won't be the last. I have always had the impression that much of what we use on the internet is held together with string and paper clips. After this latest revelation, we need to examine our procedures and assess the damage, while taking the opportunity to reconsider our security practices and improve them. But we should not let all this blind us to the truth: the internet is subject to the same dangers and risks as every other system and tool developed by humans. We need to keep working with these tools, while also identifying those who knew about this flaw but failed to tell us, and holding them responsible for what happened. We should also remember that the chaotic nature of the internet is part of its greatness.

(In Spanish, here)


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)