Published 5 December 2014 on Freedom to Tinker.
Today, the vulnerable state of electronic communications security dominates headlines across the globe, while surveillance, money and power increasingly permeate the ‘cybersecurity’ policy arena. With the stakes so high, how should communications security be regulated? Deirdre Mulligan (UC Berkeley), Ashkan Soltani (independent, Washington Post), Ian Brown (Oxford) and Michel van Eeten (TU Delft) weighed in on this question at an expert panel on my doctoral project at the Information Influx conference in Amsterdam.
Why are electronic communications vulnerable in the first place? And to what extent is law an appropriate tool to augment communications security? While the U.S. regulator seems to entrust communications security to the market and the N.S.A., in 2013 the European Union launched or debated sweeping regulatory initiatives in all five subareas of communications security policy: data protection, electronic communications, encryption, cybercrime and ‘network and information security’ (which includes critical infrastructure protection). Add to that political dynamics such as the opening of the legislative season after summer, as well as the E.U. elections last May, which ushered in a new European Parliament and Commission. And then there are the Snowden revelations, the release of the U.S. Cybersecurity Framework, and daily headlines about the next big communications security breach. A perfect time for reflection and debate.
Michel van Eeten (Delft University) kicked off the discussion by stressing that we are only starting to understand why and how communications security is, or often isn’t, produced by the market. Only once we understand the incentives for internet security can regulation become a useful tool. He illustrated his observations by taking us through an article on HTTPS governance, recently published in the Communications of the ACM (disclaimer: I’m a co-author on the paper). The main technical vulnerability of HTTPS, which aims to offer end-to-end encryption signaled by a padlock in your web browser, is well known: any certificate authority can sign an encryption certificate for any website, so if someone (say, a company, government or cybercriminal) breaches one of the hundreds of certificate authorities around, the security of the entire HTTPS ecosystem on the internet may collapse (the ‘weakest link’ problem). Its most important practical implication is that, from a communications security perspective, it doesn’t matter much where you buy your certificates.
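To make the trust model concrete, here is a minimal sketch using Python’s standard ssl library (not from the paper; the hostname is just an example). The client accepts any certificate chain ending in one of the hundreds of roots in its trust store, and which CA actually vouches for a site is invisible to users unless they look it up, as below:

```python
import socket
import ssl

def print_issuer(hostname: str, port: int = 443) -> None:
    """Connect over TLS and show which CA vouches for this site."""
    context = ssl.create_default_context()  # loads the system root store
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            # 'issuer' is a tuple of relative distinguished names
            issuer = dict(rdn[0] for rdn in cert["issuer"])
            print(f"{hostname} is vouched for by: {issuer.get('organizationName')}")

print_issuer("example.org")
```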
What is not well understood, however, are the market dynamics of HTTPS: three companies control 75% of a lucrative market in issuing certificates, on which a multi-billion dollar e-commerce market depends. These large corporations in HTTPS security have, not unlike big financial institutions, become too big to fail. The ultimate measure for enforcing security is to revoke trust in a breached CA. CAs big and small are being breached all the time, but revoking trust in a large CA is close to impossible: distrusting all the certificates issued by a large CA would render large parts of the encrypted web inaccessible. Confidentiality and integrity interests conflict with availability interests, and the market prefers business continuity over everything else. Smart sites infer that buying from large CAs is an insurance policy for the availability of a (seemingly) securely encrypted site, further exacerbating the ‘too big to fail’ dynamic.
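What ‘revoking trust’ means in practice can be sketched in a few lines. The trimmed root bundle below is hypothetical, but the effect is real: every site whose chain ends in the removed root becomes unreachable, which is exactly the availability cost that makes distrusting a large CA so hard.

```python
import socket
import ssl

# Validate a site against a trimmed root bundle from which one
# (hypothetical) breached CA has been removed.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("roots_minus_breached_ca.pem")  # hypothetical bundle

try:
    with socket.create_connection(("example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.org"):
            print("chain still validates against the trimmed store")
except ssl.SSLCertVerificationError:
    print("site now unreachable: its CA is no longer trusted")
```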
To make securing HTTPS more complicated, some of the larger CAs have close ties to governments (and most large countries even control their own root CAs), implying that government oversight against communications surveillance, one of the objectives of HTTPS encryption, is a thorny issue as well. As long as these fundamental issues are not addressed technically, the effect of oversight and regulation is limited to providing the right incentives for all the stakeholders in the CA ecosystem to actually do something about the fundamentally flawed security model of HTTPS. Van Eeten posits that large CAs, central players in this debate, have little interest in actually strengthening security, an insight the proposed E.U. regulation fails to address.
Ashkan Soltani extrapolated from Michel van Eeten’s observations, arguing that the technical vulnerabilities we face all boil down to connectivity, rather than security, being the core design goal of internet protocols, software and hardware. Moreover, the conventional limits on information are disappearing: access to information has become ubiquitous and low-cost for users and intelligence agencies alike. He also pointed at the parallel incentives of government and the big data industry in collecting data. These are quite fundamental drivers, hard to change, that bear directly on communications confidentiality and security.
Drawing on his work with the Snowden documents, Soltani used the MUSCULAR revelations as a case study to show how the U.S. government carefully chooses its locations for surveillance of global communications in order to circumvent legal safeguards for U.S. persons. And in the aftermath of the ANGRYBIRDS revelations, the U.S. government defended its operations by claiming that it merely piggybacked on the surveillance that online advertisers perform anytime you play popular games or visit websites. Another disturbing practice is the use of IP addresses to determine the ‘foreignness’ of internet users caught in dragnet surveillance operations; here, Soltani in fact forecast later Snowden revelations that he helped report with the Washington Post.
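How crude such an IP-based ‘foreignness’ test is becomes obvious once written down. The sketch below is purely illustrative: the lookup table is hypothetical, standing in for a geolocation database, and VPNs, proxies and roaming routinely defeat the heuristic.

```python
import ipaddress

# Hypothetical country table, using reserved documentation address ranges.
GEO_TABLE = {
    ipaddress.ip_network("192.0.2.0/24"): "US",
    ipaddress.ip_network("198.51.100.0/24"): "NL",
}

def presumed_foreign(ip: str) -> bool:
    """Crude heuristic: non-U.S. (or unknown) addresses count as foreign."""
    addr = ipaddress.ip_address(ip)
    for network, country in GEO_TABLE.items():
        if addr in network:
            return country != "US"
    return True  # unknown address: presumed foreign, the troubling default

print(presumed_foreign("198.51.100.7"))  # True
print(presumed_foreign("192.0.2.7"))     # False
```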
Here, a fascinating audience debate ensued, eventually leading to the formulation of a number of systematic communications security vulnerabilities from technical, market, legal and human security perspectives. Apart from those usually agreed on and already mentioned, usability, the lack of sound legal definitions for communications security and the lack of regulation of governments’ exploitation of vulnerabilities were added to the mix.
Next up was Ian Brown. He posed the question: if (and this is a big ‘if’) regulation is needed to augment security, what types of regulator could take it on, and what are their strengths and weaknesses? Brown discussed four conventional communications security regulators – national security agencies, telecoms regulators, standards bodies and data protection authorities (including the FTC) – and identified three preconditions for their effectiveness: the ability to i) muster technical competence and nuance, ii) facilitate public interest consensus among stakeholders, and iii) change stakeholder behaviour.
Snowden, Brown argued, has reminded us of how central communications surveillance is to national security. Because surveillance is a core executive function, governments remain greatly reluctant, even today, to concede its regulation to parliament, never mind the judiciary or regulators. Telecoms regulators are primarily concerned with addressing monopolies, rather than ensuring security. At times, standards bodies have been excellent at managing (i) technical complexity and (ii) consensus, but they can only change (iii) behaviour through deployment by hardware, software and infrastructure stakeholders. Their cultures, however, differ vastly and are critical to their competences: compare NIST, the ITU, ETSI and the IETF. Data protection authorities, greatly aided by the European Court of Justice’s Google v. Spain (‘Right to be Forgotten’) and Data Retention Directive rulings, will move into this space as well, and have been hiring technical specialists in recent years. Finally, Brown argued that legislators need to reduce the pervasive negative externalities in communications security, notably by introducing liability for ISPs, banks and software providers for security breaches, coupled with strong enforcement. A mix of all four regulators, he concluded, is likely necessary.
Deirdre Mulligan set out to reorient our focus from ‘which systems fail’ to ‘how to manage insecurity’. Rather than prevention, she defended an approach of post-breach accountability, as we don’t know how to secure networked systems up front. The emphasis of regulators and systems designers on creating the right process could be misplaced, because threat models evolve all the time. She shared her ideas on public health as a doctrine for cybersecurity, pointing at the parallels between securing communications and curing patients, for example in defense of quarantining systems just as you would quarantine humans infected with serious viruses. Mulligan also pointed out that, as it stands, the computer science definitions of confidentiality, integrity and availability (and beyond) don’t map well onto law, and that this area needs a serious cross-disciplinary conversation, as the mismatch causes people across academic and policy communities to mean completely different things even when they use the same words.
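The quarantine analogy already has a practical counterpart in the ‘walled gardens’ some ISPs run for botnet-infected customers. A minimal sketch of the idea (addresses and the infected set are hypothetical):

```python
# Hosts flagged by (hypothetical) malware telemetry.
INFECTED_HOSTS = {"10.0.0.23", "10.0.0.57"}

def route(src: str, dst: str) -> str:
    """Decide what happens to traffic from a customer host."""
    if src in INFECTED_HOSTS:
        # Quarantine: redirect all traffic to a remediation portal
        return "redirected to walled-garden remediation portal"
    return f"forwarded to {dst}"

print(route("10.0.0.23", "example.org"))  # quarantined
print(route("10.0.0.99", "example.org"))  # normal forwarding
```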
In the final conversation with the audience, several additional insights came up, culminating in a rather fundamental reflection on security. First, participants wondered whether the real question at hand is not so much how to secure communications, but what level of insecurity society will tolerate. Currently, regulatory development in this space is incident-driven. For example, the Snowden revelations (especially MUSCULAR) have incentivized large U.S. internet companies to encrypt traffic between their servers to prevent backdoor access by intelligence agencies, while calling for access to user data to be limited to the front door through regulated access regimes. Here, a former senior IBM employee shared with the audience that the practices Snowden revealed had been ‘business as usual’ within industry for decades. The disclosures have apparently changed these companies’ cost-benefit analysis in favour of communications security.
Another insight came from general systems engineering, which distinguishes between ‘precluded event security’ (breach prevention at all costs, as in aviation) and ‘marginal security cost’ (a cost-benefit analysis of whether prevention is viable). Where do specific communications security issues fall on this scale? For one, the Data Retention Ruling of the highest E.U. court recently established that ISPs and telecoms companies have to do more than simple cost-benefit analysis in their risk assessments when securing databases of customer metadata retained for surveillance purposes. When datasets or communications are as sensitive as in this case, securing them against breaches now seems to be a constitutional requirement in Europe. While the ruling is quite vague, the constitutional dimension of the explicitly named confidentiality, integrity and availability triad is an important topic for further research.
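A worked toy example shows what ‘marginal security cost’ reasoning looks like (the numbers are illustrative, not from the panel): invest in a control only while its cost stays below the expected loss it averts. The ruling arguably takes sufficiently sensitive data out of this calculus entirely.

```python
breach_probability = 0.05   # annual chance of a breach
breach_loss = 2_000_000     # loss from one breach, in euros
control_cost = 75_000       # annual cost of an additional control
risk_reduction = 0.60       # share of breach risk the control removes

expected_loss_averted = breach_probability * breach_loss * risk_reduction
print(expected_loss_averted)                  # 60000.0
print(control_cost <= expected_loss_averted)  # False: pure cost-benefit says skip it
```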
Finally, the conversation morphed into a somewhat sociological evaluation of whether the usual risk-assessment and accountability focus in communications security should be attributed to risk-obsessed societies, a lack of creativity, or rather a failure of legislative and democratic governance. Here, CITP’s Joel Reidenberg suggested that the latter is the case in the U.S. The panel and audience agreed that regulatory deadlock abounds across the board. But the jury is still out on whether the proposed E.U. regulations will materialize, and in what form. Unfortunately, former Member of the European Parliament Amelia Andersdotter called in sick on the day of the panel; she would probably have had a thing or two to say on the matter.
Certainly, the panel brought some clarity both with regard to which vulnerabilities need to be addressed and what we can expect from the law to augment security. As cliché as this might sound, communications systems may appear technically complex and hard, or even impossible, to secure. But aviation, automobiles and the telegraph face(d) similar questions; it always took time to address security properly, and failures will always happen. Communications now need something of an overhaul to address systemic vulnerabilities, especially in today’s networked environment and tomorrow’s ‘internet of things’: when your home appliances are connected to the web and can be controlled with your smartphone from a distance, communications security will truly become tangible for consumers.
Complex technical systems have always been embedded in economic, social and cultural environments that shape our understanding of security: whom or what we want secured, and how much insecurity we will tolerate. Obviously, that understanding and tolerance change over time, sometimes quite rapidly as momentum builds. In the end, even in a space as technically complex as communications security, new governance models emerge when money, power and perhaps a constitutional requirement or two enter the room. Even in communications security, we remain human after all.