HTTPS and dependency on certificate authorities
The adoption of HTTPS in place of plain HTTP has boomed in recent years, providing secure data transfer and limiting the possibilities of various types of attacks. That is certainly a good thing. At the same time, however, website owners are becoming ever more dependent on certificate authorities and on the mechanisms of certificate issuance and validation. What could the consequences be? And is there an alternative?
You may have seen me at several events promoting the migration from plain HTTP to HTTPS - how to do it, why to do it, and also how we implemented Let’s Encrypt on our hosting. But as you know, nothing is ever completely one-sided. Today I would like to present another point of view.
At ACTIVE 24 we spent a lot of time implementing HTTPS with Let’s Encrypt certificates as the default setting for every Linux hosting plan in our environment. It was not that simple, but we did it and we are proud of it. At the scale of a shared hosting environment, though, with more and more sites redirecting to HTTPS, you have to think about the consequences.
Today, there is no true alternative to Let’s Encrypt.
There is no other CA offering trusted certificates via the ACME protocol, so we cannot simply switch providers if Let’s Encrypt stops working properly. That means that we, as a service provider, are increasingly dependent on Let’s Encrypt as a key supplier, and if it does not work properly, our services can be affected.
Let’s Encrypt is a service like any other, with its bugs and outages.
As an example - we have faced, several times, an issue where the API returned an incomplete or empty certificate together with a success return code. And as you know, nginx and Apache do not like invalid certificate files. The reload fails, or worse - the whole service goes down after a restart.
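A cheap defence against this failure mode is to sanity-check whatever the API returned before handing it to nginx or Apache. The sketch below is a minimal illustration, not the code of any particular ACME client; the function name and the exact checks are my own:

```python
import ssl

PEM_HEADER = "-----BEGIN CERTIFICATE-----"
PEM_FOOTER = "-----END CERTIFICATE-----"


def cert_file_looks_valid(pem_text: str) -> bool:
    """Return True only if the text contains at least one decodable PEM
    certificate block.

    This catches the "empty or truncated certificate with a success return
    code" failure mode before the web server chokes on the file at reload.
    """
    start = pem_text.find(PEM_HEADER)
    end = pem_text.find(PEM_FOOTER)
    if start == -1 or end == -1 or end < start:
        return False  # empty response, or a truncated PEM block
    block = pem_text[start:end + len(PEM_FOOTER)]
    try:
        # Base64-decodes the body; raises ValueError if the block is garbled.
        ssl.PEM_cert_to_DER_cert(block)
    except ValueError:
        return False
    return True
```

Only after a check like this passes (ideally combined with a config test such as `nginx -t`) would the reload be triggered.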
Let’s Encrypt refuses to issue a certificate when your domain is on a reputation blacklist.
So even if your ACME handling is implemented correctly and robustly, it can fail to obtain a certificate for your domain just because you run an outdated CMS somewhere on your website. Malware content and presence in a reputation database is a serious problem in itself, but in that case you are still left without a valid certificate for the whole domain.
In a shared hosting environment there are hundreds of websites running on one IP and one nginx/apache instance. You can easily monitor the health of the service, but will you monitor the expiration of all certificates on all websites? You should, because without a valid certificate the website displays warnings, and with HSTS it can go completely offline. But you simply cannot, because there are so many false positives - e.g. domains with broken DNS or domains on a reputation blacklist. You can monitor that the renewal process runs correctly, but that is not ideal.
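If you do decide to watch expiry dates, the core check is simple: compare the certificate’s notAfter timestamp against a renewal window. A rough standard-library sketch - the 30-day threshold here is an assumed example, not our actual renewal policy:

```python
import ssl
import time

# Assumed renewal window for illustration; real deployments pick their own.
RENEW_BEFORE_DAYS = 30


def days_until_expiry(not_after, now=None):
    """not_after uses the format getpeercert() returns,
    e.g. 'Jun  9 12:00:00 2030 GMT' (always GMT)."""
    expiry = ssl.cert_time_to_seconds(not_after)
    reference = time.time() if now is None else now
    return (expiry - reference) / 86400.0


def needs_renewal(not_after, now=None):
    """Flag certificates that will expire within the renewal window."""
    return days_until_expiry(not_after, now) < RENEW_BEFORE_DAYS
```

At scale the hard part is not this arithmetic but deciding which of the hundreds of alerts are real - which is exactly the false-positive problem described above.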
A large-scale, long-term DDoS attack
Imagine a large-scale, long-term DDoS attack on the Let’s Encrypt API. In that case you would not be able to renew expiring certificates, so websites could display errors or, with HSTS, go down entirely. Of course we renew certificates well in advance, so we have several weeks of margin. But Let’s Encrypt has announced that the 90-day certificate lifetime is subject to change (downwards). With a shorter time frame, this could become a serious risk.
The community is pushing certificate revocation mechanisms such as OCSP stapling. This is a reasonable request given how many CAs of questionable credibility exist. But it means that your service again depends on the CA’s infrastructure. For example, the Czech peering centre NIX.CZ runs a project called FENIX, which offers a private VLAN for emergency peering of trusted parties in case of a large-scale DDoS attack across the Czech Republic. Its purpose is to keep most local services (e.g. media, online banking, ATMs or credit card terminals) accessible to local users as a last resort, to prevent a situation similar to Estonia in 2007. In that scenario OCSP stapling would be unavailable and could take down otherwise functional services.
Service providers like independence
We want to have the infrastructure under our control so that we can guarantee the quality of service. Where a dependency is inevitable, we want several alternative suppliers to choose from. With a large-scale deployment of Let’s Encrypt certificates there is no alternative today - you cannot replace them quickly. And even if you could, you would still depend on the reputation of the CA of your choice, because it can be removed from the browsers’ trust stores.
SPOF #2?
There is already one service we all depend on and which has no alternative: DNS. And as such, it is designed to be very robust, as decentralised as possible, transparent, and it has proven stable over time. Do we want another service that is a potential global SPOF? And do we think about Let’s Encrypt in this way? Today there are more than 40 million websites running on Let’s Encrypt, and the number is still growing.
In my eyes Let’s Encrypt is a great workaround for the problems of current CAs and PKI. It does this very well (better than any CA before it), I like it very much, and I will keep promoting it. But still - it is just a workaround that does not solve the root cause, and it can bring new problems into the system. I see it as a temporary measure that should eventually be replaced. Until then, it would be great if more trusted CAs implemented ACME.
Is there a systemic solution?
I can see one, and it already works very well for SMTP: the DANE protocol. With DANE you need neither CAs nor any complicated revocation mechanism. You simply generate your own certificate and publish its fingerprint in DNS. And if you suspect a compromise, you simply generate a new key/certificate pair and update the record. We just need two things to happen: wider deployment of DNSSEC and its validation, and support in browsers. This is not as simple as it looks, but we should go this way. Or do you see another alternative?
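To illustrate how little machinery DANE needs: a DANE-EE TLSA record is just a hash of the certificate, published in DNS. The sketch below uses usage 3 (end-entity), selector 0 (full certificate) and matching type 1 (SHA-256); in practice "3 1 1" (hash of the public key only) is often preferred, but extracting it needs an ASN.1 parser, so this illustration hashes the whole certificate. The hostname is a placeholder:

```python
import hashlib
import ssl


def tlsa_record(pem_cert, host="www.example.com"):
    """Build a DANE-EE TLSA record for a PEM certificate:
    usage 3 (end-entity), selector 0 (full certificate),
    matching type 1 (SHA-256 of the DER encoding).

    The record is published in DNS under _443._tcp.<host>.
    """
    der = ssl.PEM_cert_to_DER_cert(pem_cert.strip())
    digest = hashlib.sha256(der).hexdigest()
    return f"_443._tcp.{host}. IN TLSA 3 0 1 {digest}"
```

Rotating the key after a suspected compromise is then nothing more than generating a new key/certificate pair and replacing this one DNS record - no CA and no revocation infrastructure involved. The records themselves must of course be protected by DNSSEC.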