
Thoughts on the Open Internet - Part 5: Security

Any form of public communications network necessarily exposes some information about the identity and activity of the users of its services. The extent to which such exposure can be subverted and used in ways that are in stark opposition to users’ individual interests motivates many users to reduce such exposure to an absolute minimum. The tension between the desire to protect the user by increasing the opacity of network transactions to third party surveillance, and the need to expose some level of basic information to support the functions of a network, lies at the heart of many of the security issues in today’s Internet.

Security and Authenticity Requirements

The public sector is as acutely aware of the need for accessible online security as the private sector. Internet users need to be assured that the material they retrieve from an online service is genuine, and any data that they provide in response is also handled with appropriate levels of confidentiality and care.

The security frameworks used on the Internet are devolved frameworks, as distinct from tightly interdependent and mutually constrained frameworks. The algorithms used to encrypt and decrypt data, and the algorithms used to generate and validate digital signatures, are open algorithms that can be readily implemented, and an application wishing to use encryption can choose a mutually agreed algorithm. The manner in which security is applied to a network transaction can vary according to the motivation for using a secure solution, such as the desire to prevent third party inspection of the network traffic, the desire to support a transaction that cannot be tampered with by third parties, or the need for the identities of the parties to be assured. The selection of trust points that allow parties to validate the authenticity of digital signatures is an independent consideration, as are the processes used to support awareness of revocation of otherwise valid credentials. At the heart of the implementation of secure systems is a software library of security tools and procedures, and the current mainstay of many online security systems is the open source OpenSSL library.
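
As a small illustration of the kind of primitive these libraries provide, the sketch below uses Python’s cryptography package (whose backend wraps OpenSSL) to generate a key pair, sign a message, and validate the signature. It is a minimal sketch for illustration only; the algorithm choice, key handling and message content are assumptions, not a description of any particular deployed system.

    # Minimal sketch: sign and verify an attestation with an ECDSA key pair.
    # Uses the Python "cryptography" package, whose backend wraps OpenSSL.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    message = b"this attestation was made by the holder of the private key"

    # The signer generates (or already holds) a private key and signs the message.
    private_key = ec.generate_private_key(ec.SECP256R1())
    signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

    # A relying party holding only the public key can test the signature.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        print("signature validates: message is authentic and untampered")
    except InvalidSignature:
        print("signature does not validate")
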

The “heartbleed” bug of early 2014, which affected hosts running OpenSSL, illustrated the extent to which OpenSSL is used in today’s world.

“A serious overrun vulnerability in the OpenSSL cryptographic library affects around 17% of SSL web servers which use certificates issued by trusted certificate authorities… 17.5% of SSL sites, accounting for around half a million certificates issued by trusted certificate authorities.”

http://news.netcraft.com/archives/2014/04/08/half-a-million-widely-trusted…

The trust and confidence such security mechanisms engender underpins much of the trust in moving a large diversity of economic activity into the digital realm. Online banking, business-to-business transactions, public services provided to citizens, all rely on these mechanisms. These mechanisms are under continual pressure, and the points of vulnerability are exploited from time to time.

Address and Routing Security

The integrity of the Internet’s IP address and routing systems is essential to the operation of the Internet. Two of the fundamental assumptions of the Internet’s architecture are that every point of attachment of a device to the network has a unique IP address, and that every routing element in the network has a mutually consistent view of how to direct a packet towards its addressed destination. When an IP packet is passed into the network, the packet’s destination address should be a unique and unambiguous value, and irrespective of where the packet is passed into the network it should be forwarded to the same intended destination.

The Internet uses a “self learning” process of understanding the location of attached devices. This self-learning process is achieved via the operation of routing protocols. The language of routing systems includes the announcement (or “advertisement”) of an address to a network, and the propagation of this announcement to all other networks across the Internet. The security question here concerns the ability to insert false information into this self-learning system.

The question is how the integrity of this system is managed. What is to prevent a malicious party from announcing someone else’s address into the routing system? Indeed, that is the very nature of the IP address filtering process, where deliberately false forwarding directions are inserted into parts of the network. And what is to prevent an attached device from presenting a packet to the network with a deliberately falsified source IP address?

This is a more subtle form of exploitation of weaknesses in address and routing security. Every packet carries in its header the intended destination of the packet (the “destination address”) and the identity of the device that created the packet (the “source address”). Why would any party want to lie about its source address? There are some use cases that relate to IP mobility, but the overwhelming issue here is the use of this technique to mount hostile attacks. If the attacker can pass simple query packets using the UDP protocol to a conventional server, using packets that contain the source address of the intended victim, then the server sends its responses to the victim. By replicating this query across many servers, the attack volumes that can be brought to bear on the victim can be measured in the hundreds of gigabits per second.

http://blog.cloudflare.com/technical-details-behind…
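
A rough back-of-the-envelope calculation shows why reflection is so attractive to an attacker. The figures used below (a 60-byte query eliciting a 3,000-byte response, spread across a few thousand open servers) are purely illustrative assumptions, not measurements from any particular incident.

    # Illustrative arithmetic for a UDP reflection/amplification attack.
    # All figures are assumed for the purpose of the example.
    query_bytes = 60          # small query sent with the victim's spoofed source address
    response_bytes = 3000     # large response sent by the reflecting server to the victim
    amplification = response_bytes / query_bytes    # 50x gain per query

    reflectors = 5000         # open servers unwittingly used as reflectors
    queries_per_second = 500  # queries the attacker sends to each reflector

    attack_bps = reflectors * queries_per_second * response_bytes * 8
    print(f"amplification factor: {amplification:.0f}x")
    print(f"aggregate attack traffic: {attack_bps / 1e9:.0f} Gbps")   # ~60 Gbps in this example
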

The approach used by the Internet has been largely one of conventions of behaviour. The common interest is to ensure that packets reach their intended destination, which aligns with local interest. The original models of addressing and routing were based largely on mutual trust. Each network operator was expected to announce only those addresses that were assigned to that network, and each network was expected to propagate routing information that accurately reflected the local connectivity and routing policies used by the local network. The system as a whole operated correctly if each component network operated according to these simple principles. While these conventions continue today (such as in the “Routing Manifesto”), the scope and diversity of today’s network mean that such conventions can be abused from time to time.

The onus is placed on each network to defend itself against believing incorrect route advertisements. One approach here has been the use of “route registries”, where each network records in a common registry the addresses that it originates, its connections to adjacent networks, and the routing policies that it applies to each such adjacent network. If every network operator diligently maintained such data in a routing registry, then other operators could use this information to generate a local filter to apply to incoming routing advertisements. The contents of the routing information would be constrained by the information in the route registry, and any additional information could be rejected as a bogus route.
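
A hedged sketch of the registry-based filtering idea follows: given the route objects a neighbouring network has registered, an operator can reject any announcement from that neighbour that is not covered by a registered entry. The registry contents and the helper function shown here are hypothetical; production tools derive far more elaborate filters from the actual registries.

    # Sketch: build a simple prefix filter from (hypothetical) route registry entries
    # and test incoming announcements from a neighbour against it.
    import ipaddress

    # Route objects the neighbour has registered: prefix and originating AS.
    registered_routes = {
        ("192.0.2.0/24", 64500),
        ("198.51.100.0/23", 64500),
    }

    def accept_announcement(prefix: str, origin_as: int) -> bool:
        """Accept only announcements covered by a registered route object."""
        announced = ipaddress.ip_network(prefix)
        for reg_prefix, reg_as in registered_routes:
            registered = ipaddress.ip_network(reg_prefix)
            if origin_as == reg_as and announced.subnet_of(registered):
                return True
        return False

    print(accept_announcement("192.0.2.0/24", 64500))    # True: matches a registered route
    print(accept_announcement("203.0.113.0/24", 64500))  # False: rejected as a bogus route
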

Some local communities have used the route registry approach, and it has proved useful in minimizing certain forms of routing attack, but these approaches have not translated into the entirety of the Internet. More recent efforts have been directed towards a more rigorous form of processing routing information that can distinguish genuine routing information from synthetic (or false) information.

The more recent approach has borrowed the conventions used in Public Key Infrastructure (PKI) models, and enrolled the address registry operators as Certification Authorities (CAs). Allocations of addresses are accompanied by a certificate, in which the address registry attests that the entity who holds a certain public/private key pair is the current holder of a particular collection of IP addresses. The address holder can use their private key to sign various attestations about the use of their addresses, and third parties can validate these attestations using the collection of published certificates. The routing system is a form of circulation of such implicit attestations, where networks advertise reachability to addresses, and the secure routing model calls for digital signatures to be attached to these route advertisements, making these implicit attestations of reachability explicit and testable as to their validity. Receivers of routing updates that describe the reachability of an address block can then validate the authenticity of the update by validating the digital signatures associated with it.
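
The validation logic that a receiving network applies can be sketched as follows. The signed objects here are modelled loosely on the route origin authorisations used in this approach: each records a prefix, a maximum prefix length, and the AS authorised to originate it. The data values are invented for illustration, and the three possible outcomes (“valid”, “invalid”, “not found”) foreshadow the partial-deployment issue discussed below.

    # Sketch of route origin validation against signed authorisations.
    # The authorisation records and the announcements are illustrative assumptions.
    import ipaddress

    # (prefix, max length, authorised origin AS) -- assumed to have already been
    # validated cryptographically via the address registry's certificate hierarchy.
    authorisations = [
        ("192.0.2.0/24", 24, 64500),
        ("198.51.100.0/22", 24, 64501),
    ]

    def origin_validation(prefix: str, origin_as: int) -> str:
        announced = ipaddress.ip_network(prefix)
        covered = False
        for roa_prefix, max_len, roa_as in authorisations:
            roa_net = ipaddress.ip_network(roa_prefix)
            if announced.subnet_of(roa_net):
                covered = True
                if announced.prefixlen <= max_len and origin_as == roa_as:
                    return "valid"
        return "invalid" if covered else "not found"

    print(origin_validation("192.0.2.0/24", 64500))    # valid
    print(origin_validation("192.0.2.0/24", 64666))    # invalid: wrong origin AS
    print(origin_validation("203.0.113.0/24", 64502))  # not found: no covering authorisation
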

There are two significant issues with this approach, which warrant noting in the context of an open and coherent Internet.

The first is that digital signatures and the associated set of credentials that permit such signatures to be validated do not intrinsically identify erroneous material. Such signatures can only validate that the received information is authentic and has not been tampered with in flight. The use of this approach in a manner that would reliably identify instances where addresses and route advertisements are used in an unauthorised manner is only possible in an environment where every network and every address holder generates, maintains and publishes their signed attestations and associated certificate-based credentials. The issue here is that this approach to address and routing security has a prerequisite condition of universal adoption in order to be comprehensively effective.

Any scheme that uses positive attestations can only identify what is “good”. If everyone who can generate positive attestations does so, then attestations that cannot be validated can be safely assumed to be “bad”. But in an environment where not everything has an associated attestation, or digital signature in this case, the situation is not so clearly defined. In the case of digitally signed route attestations, partial deployment implies that there is a category of routes that are “not signed”. These routes are neither provably authentic nor provably inauthentic: they are simply not signed at all. And a route attestation whose validation fails is semantically equivalent to one that is “not signed”. It is not possible in such a partial adoption environment to distinguish between falsified information and unsigned information.

Orchestrating universal adoption of any technology is extremely challenging in the diverse and loosely coupled environment of the Internet, and all the more so when the motivation for individual adoption is based on perceptions of the value of risk mitigation. Differing risk appetites and differing perceptions of liability and loss lead to fragmented, piecemeal adoption of such a technology, which is a particular problem when the major benefit of the technology is only attainable through universal adoption. The prospects of achieving a critical mass of local adoption that would influence the remaining parties to adopt the technology are poor, particularly when there is no public policy framework that encourages individual actors to do so.

Domain Name Based Security

Much of the current system of security on the Internet is that used by the Web, and in particular the system of issuance and validation of the domain name certificates used by the Secure Sockets Layer (SSL) protocol for channel encryption. This form of security is based on the attestation of a third party that a particular operational entity controls a given domain name. Lists of the third parties who are trusted to publish such attestations are packaged with popular browsers. These lists are similar, but can deviate from one browser to another. This deviation can cause user confusion, where a site will report itself as “secure” when using some browsers and generate a security exception alert when using others.

A second issue with this approach is that browsers are generally unaware of which third party Certification Authority (CA) has actually attested which domain name. This leads to the vulnerability that if a trusted third party is compromised, then any fake attestations it may subsequently generate will be trusted by users’ browsers.
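
The point can be made concrete with a short sketch using Python’s ssl module: the client’s default context will accept a certificate chain that terminates at any of the roots in its trust store, and only after the connection is made can it see which CA actually did the attesting. The host name used here is just a placeholder.

    # Sketch: connect to a TLS service and inspect which CA attested the server's name.
    # The default context trusts every root CA in the platform's trust store equally;
    # nothing in the protocol says which CA *should* have issued this certificate.
    import socket
    import ssl

    host = "www.example.com"   # placeholder host name
    context = ssl.create_default_context()

    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("subject:", cert["subject"])
            print("issuer: ", cert["issuer"])   # whichever trusted CA happened to sign it
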

The Diginotar CA Compromise

In 2011 a European Certification Authority, Diginotar, had its online certification system hacked, and 344 false certificates were minted for a number of popular domain names, with both the public and the private key of these false certificates published as part of the compromise.

Because the private keys of these fake certificates were published, and because many popular browsers were willing to trust all certificates issued by the Diginotar CA, it was possible for a party in a position to intercept user traffic to any of the services named by these domain names to substitute a fake certificate, successfully dupe the user’s browser, and masquerade as the intended service. A number of parties were affected, including the Dutch tax authority, which used Diginotar as the CA for the certificates used by its online taxation service. According to a report by Fox-IT commissioned after the incident, the false certificates appeared to have been used to spy on a large number of users in the Islamic Republic of Iran by performing a man-in-the-middle attack.

https://www.rijksoverheid.nl/bestanden/documenten-en-publicaties/...

The underlying issue with this model is that there are hundreds of Certification Authorities (CAs) that are trusted by browser vendors (see https://wiki.mozilla.org/CA:IncludedCAs for one such list, and also https://www.eff.org/files/colour_map_of_cas.pdf), and any CA can issue a certificate for any domain name. The implication is that this is an asymmetric trust model. While a certificate subject is relying on the integrity of a single CA to correctly mint a certificate for the domain name in question, the consumer of the certificate, namely the end user of the service, is trusting the entire collection of CAs. The user is implicitly trusting, without explicit validation, that none of these CAs, and none of their Registration Agents (RAs), has been compromised, and that none of them has issued a certificate for the same domain name under false premises. The CA system is only as strong as the most vulnerable CA, and has only as much integrity as the CA that performs the least rigorous checks: the standards of the entire CA system are set by the lowest individual standards of any single CA.

This form of domain name certification is a form of third party commentary on the actions of others. The issuer of such certificates is neither the domain name holder nor the domain name registrar, and the integrity of the entire system depends on the robustness of the checks a Certification Authority and its Registration Agents perform to ensure that a domain name certificate is issued to the correct party, namely the party that “owns” the associated domain name. Over time the system has been subject to various forms of abuse by applicants and to erosion of the integrity of validation by some CAs. In response, the CAs introduced a variant of the domain name certificate, termed an “Extended Validation” certificate, intended to demonstrate that the CA had performed its validation of the applicant with greater rigour. The competitive pressure on CAs, and the inability of the CA system to create a sustainable market for certificates based on the integrity of the product, is a common weakness of this system.

As the certificate is an undifferentiated product, it makes sense to use the cheapest CA to obtain it. But the cheapest CA can become the cheapest CA by reducing its costs, which means reducing the number and efficacy of the checks it undertakes to confirm that the applicant does indeed “own” the domain name. Competitive market pressure between CAs thus erodes the integrity of the product they produce.

Alternative approaches being considered by the technical community are directed towards removing the concept of third party commentary on domain name registrations altogether, and instead using DNS security (DNSSEC): placing the domain name’s public keys directly into the DNS alongside existing data, and relying on DNSSEC to provide this key information to users in a robust and secure manner. Such a framework for name-based security would remove the shared fate risks of the current CA model, and mitigate the broad consequences of the compromise of an individual CA’s operation.

However, such an approach places even more pressure on the DNS. It relies on the widespread adoption of DNSSEC, both in signing zones and in validating DNS responses, and progress toward this goal has not been overly impressive to date. Studies of validation indicate that some 1 in 7 users, or some 13% of the Internet’s total user population, pass their DNS queries via resolvers that use DNSSEC to validate the authenticity of the signed responses that they receive. Within individual countries the proportion of users who use DNSSEC-validating resolvers varies from 70% (Sweden) to 2% (Republic of Korea).

As with the considerations of the vulnerabilities associated with a single trust anchor for the PKI proposed for the address and routing infrastructure, a similar consideration applies to the approach used by DNSSEC, and a brief comparison of the existing third party certification model and a DNSSEC-based model is useful here.

A distributed third party certification model appears to offer a robust approach, in so far as the system is not reliant on the actions of any individual CA, and the provision of security services is subject to competitive pressure, as any CA can certify essentially any domain name. There is no requirement for universal adoption, and incremental adoption creates incremental benefits for both the service provider and the consumer of those services. Unfortunately this is not quite the entire story: as already pointed out, compromise of an individual CA can lead to compromise of the integrity of any domain name certificate, so the robustness of the entire system is in fact critically reliant on the robustness of each and every CA, and failure of one CA leads to potential failure of other elements of the system. The system creates strong interdependencies, and has no mechanism for limiting potential damage.

The DNSSEC model is strongly hierarchical, and at each point in the name delegation hierarchy the administrator of the zone file is in a de facto monopoly position of control over both the zone and all the subzones below that point in the name hierarchy. The root of the name space is also the root of the trust model of DNSSEC, and this root zone and its associated key signing key represent a single point of vulnerability for the entire framework. The accountability of the body administering the root zone of the DNS, in applying the best possible operating practices to ensure the absolute integrity and availability of the keys used to sign the root zone, is a matter of legitimate public interest.

DNSSEC and DANE

DNSSEC allows an application to validate that the responses it receives from DNS queries are authentic and precisely match the data held in the zone’s authoritative name servers. The DNSSEC signatures ensure that no third party can tamper with a response in any way without the tampering being clearly evident.

DANE is a technology that allows a DNS zone administrator to place into the DNS information about the SSL key material to be used in conjunction with that DNS name.

The combination of DANE and DNSSEC allows this secure channel bootstrap procedure to operate with far greater robustness than the model of third party CAs. The service provider has a number of options as to how to insert the key information into the DNS, but the result is similar, in so far as rogue CAs are in no position to mislead the application with false key information.
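
A sketch of the client-side matching step is shown below. It assumes the DANE (TLSA) record for the service, with usage 3 (“pin the end-entity certificate”), selector 0 (“full certificate”) and matching type 1 (“SHA-256”), has already been retrieved via a DNSSEC-validated lookup; the host name and record digest are placeholders, and a real implementation would handle the other selector and matching-type combinations as well.

    # Sketch: check a server's certificate against a DANE TLSA record.
    # Assumes the TLSA record (usage=3, selector=0 full cert, mtype=1 SHA-256)
    # was obtained via a DNSSEC-validated lookup. Values below are placeholders.
    import hashlib
    import socket
    import ssl

    host = "www.example.com"                                   # placeholder service name
    tlsa_digest = "<hex SHA-256 digest from the TLSA record>"  # placeholder record data

    # Usage 3 pins the end-entity certificate itself, so this check does not rely on
    # the CA trust store; the TLSA record is trusted because it is DNSSEC-signed.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)

    if hashlib.sha256(der_cert).hexdigest() == tlsa_digest:
        print("certificate matches the DNSSEC-signed TLSA record")
    else:
        print("mismatch: do not trust this connection")
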

There is one side effect of this structure that impinges on the opening up of more generic top level domain names (gTLDs) in the DNS. Validation of a DNS response requires the client to perform a “signature chase” up to the key of the root zone. This means that to validate the signed zone “service.example.com”, the zone and key signing keys for “example.com” also need to be validated, as do the zone and key signing keys for “com”, and these need to be validated against the root key. If the zone management of either “example.com” or “com” were to fail to maintain correct key signing state, then “service.example.com” would also fail validation. The closer a domain name is to the root of the DNS, the fewer the intermediaries, and the fewer the external dependencies for ensuring name validation. The fewer the intermediate zones, the faster the entire DNS validation process undertaken by the application.
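
The dependency argument can be illustrated with a trivial sketch that lists the zones whose keys must all validate for a given name. It is a simplification that assumes every label boundary is a zone cut, which is not always the case in real delegations.

    # Sketch: enumerate the zones in the DNSSEC "signature chase" for a name.
    # Simplification: assumes every label boundary is a zone cut.
    def validation_chain(name: str) -> list[str]:
        labels = name.rstrip(".").split(".")
        chain = [".".join(labels[i:]) + "." for i in range(len(labels))]
        return chain + ["."]   # the root zone anchors the chain

    print(validation_chain("service.example.com"))
    # ['service.example.com.', 'example.com.', 'com.', '.'] -- three zones plus the root
    print(validation_chain("service"))
    # ['service.', '.'] -- a name at gTLD depth depends only on itself and the root
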

In an Internet that makes extensive use of both DNSSEC and DANE, and that relies on DANE to securely pass session encryption keys to the client application, it would be anticipated that, were the gTLD space available for use by service providers, there would be clear incentives for providers who wish to operate secure channels as reliably and robustly as possible to use name spaces located in the root zone itself, namely as a gTLD.

Block Chain Security and Bitcoin

A conventional view of security is that trust relationships are explicitly visible, and the task of validating an attestation is to find a transitive trust path from a trust anchor to the material being tested. The implicit trust relationship here is that trust in party A implies trust in all parties trusted by A, and so on. In a large and diverse environment such as the Internet there is a critical reliance on such trust relationships, either as an explicit hierarchy, with all trust ultimately placed in the operator of the apex point of the hierarchy, or as a common pool of trust, with trust placed in a collection of entities, few (or often none) of whom have any direct relationship with the user, and in whom the user has no rational grounds to invest any degree of trust.

An alternative model of trust was originally developed in the concept of the “web of trust”, where trust was established by a process of engaging others to sign across an attestation: the larger the pool of such third party signings, the greater the level of trust that could be inferred in the original attestation. This avoids the use of hierarchies and single trust anchors for an entire system.

Bitcoin uses a similar approach in its blockchain, where records of individual transactions are widely distributed and meshed into an interconnected chain linked by cryptographic hashes and signatures. Validation of a ledger entry is effectively a process of consultation with the collective wisdom of the set of holders of the blockchain, and there is no need for distinguished points of origination of trust relationships.
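
A minimal sketch of the chaining idea follows: each record carries the hash of its predecessor, so altering any earlier record breaks every later link. This illustrates only the linkage, not Bitcoin’s proof-of-work, distributed consensus or signature scheme, and the ledger entries are invented for illustration.

    # Sketch: a hash-linked ledger. Each entry commits to the hash of the previous entry,
    # so any retrospective change to an earlier entry invalidates the rest of the chain.
    import hashlib

    def entry_hash(prev_hash: str, payload: str) -> str:
        return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

    ledger = []
    prev = "0" * 64                        # genesis predecessor
    for payload in ["A pays B 5", "B pays C 2", "C pays A 1"]:
        h = entry_hash(prev, payload)
        ledger.append({"payload": payload, "prev": prev, "hash": h})
        prev = h

    def verify(ledger) -> bool:
        prev = "0" * 64
        for entry in ledger:
            if entry["prev"] != prev or entry["hash"] != entry_hash(prev, entry["payload"]):
                return False
            prev = entry["hash"]
        return True

    print(verify(ledger))                  # True: the chain is internally consistent
    ledger[0]["payload"] = "A pays B 500"  # tamper with an early record
    print(verify(ledger))                  # False: every later link is now broken
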

To what extent the increased complexity of such blockchain models obfuscates inherent vulnerabilities of such an approach is of course open to further consideration and debate, but it does represent a secure system of trustable attestations that does not require the imposition of trusted points of authority to seed the entire system.

Denial of Service Attacks

It is a common assumption in many aspects of networking that the transactions that are presented to the network are authentic. This means, for example, that the packet contents in each IP packet are genuine, or that the initial steps of a protocol handshake will be followed by the subsequent steps, or that the payloads presented as part of an application level interaction represent an effort to work within the framework of the intended scope of the application. This trust in the authenticity of what is presented to a network could be said to be part of the Robustness Principle (RFC791) that protocol implementations should “be liberal in what [they] accept from others”.

Later experience with the diversity of motivations on any large public network led to a refinement of this principle: developers should “assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect” (RFC1122). This was written in 1989, and it has proved extremely prophetic.

The Internet has proved to be a surprisingly hostile environment, and almost every aspect of application, operating system, protocol and network behaviour has been exhaustively probed, and vulnerabilities and weaknesses have been ruthlessly exploited. Much of the intent of this exploitation is not to gain unauthorized access to an IT system, but to ensure that legitimate users of an online service cannot gain access, or, in other words, to deny the service to others.

The Internet has a rich history of exploitation of various weaknesses, including, for example, TCP SYN attacks designed to starve a server of active connection slots, preventing legitimate users from accessing the service, or the injection of TCP RESET commands into long-held TCP sessions to disrupt their operation, as is the case for attacks on the BGP routing protocol.

Some of the more insidious attacks involve the combination of deliberately falsified source addresses in IP packets, the UDP protocol, and applications where the response is far larger than the query. This is the case for the DNS protocol and for certain forms of command and control in the NTP network time protocol. Denial of service attacks have been seen that generate hundreds of gigabits per second. These attacks do not require that systems be infected with a virus, or otherwise enlisted into a bot army, to mount the attack. The nature of the attack actually requires that systems operate entirely as normal: even the servers that are unwittingly co-opted into being part of the attack are assumed to be functioning perfectly normally, simply responding to what they believe are perfectly normal queries. What allows such attacks to take place is the ability to inject packets into the network that use an incorrect (or “spoofed”) IP source address.

The initial technical response to this form of source address spoofing was a set of network ingress filtering guidelines (BCP 38), published some 15 years ago in 2000, which call on networks to discard packets whose source addresses could not legitimately have originated from the port on which they arrived (a sketch of this check follows below). All the evidence suggests a general reluctance by network operators to equip their networks with these additional points of control and filtering. Given that this has proved challenging, the next step has been to look at the DNS protocol itself to see if it is possible to prevent the DNS from being used as an unwitting vehicle for this form of attack. Again, this is an extremely challenging exercise, and changes to infrastructure protocols are not ones that can be made and adopted with any speed, so other forms of response have been used in the face of such attacks.
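
The ingress filtering check itself is conceptually simple: a network accepts a packet from a customer-facing port only if the packet’s source address lies within the prefixes assigned to that customer. The port names and prefix assignments below are assumed values for illustration.

    # Sketch of source address validation (ingress filtering) at a customer-facing port.
    # Port names and prefix assignments are illustrative assumptions.
    import ipaddress

    customer_prefixes = {
        "port-1": [ipaddress.ip_network("192.0.2.0/24")],
        "port-2": [ipaddress.ip_network("198.51.100.0/23")],
    }

    def accept_packet(port: str, source_address: str) -> bool:
        """Forward the packet only if its source address belongs to the customer on this port."""
        src = ipaddress.ip_address(source_address)
        return any(src in prefix for prefix in customer_prefixes.get(port, []))

    print(accept_packet("port-1", "192.0.2.17"))   # True: source is the customer's own address
    print(accept_packet("port-1", "203.0.113.9"))  # False: spoofed source, packet is dropped
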

The most pragmatic response so far has been to equip service points and networks themselves with sufficient capacity to absorb attack volumes and maintain the service. Such systems are engineered for peak attack loads rather than peak service loads, and are able to maintain service consistency and quality even in the face of an attack.

While this is a pragmatic response, it has its own consequences. If online services and facilities need to be provisioned not just to meet anticipated service loads, but to meet peak abuse loads, then the scale and associated cost of mounting an online service rises. This escalation of cost implies that the barriers to entry for such services rise, and that established service providers who operate significant footprints within the network are in an advantaged position. The rise of the content distribution service business is partially a result of this escalation in the requirements for online services caused by these forms of service abuse.

This represents a subtle change in the picture of the online world. The model of the service provider directly using information technology solutions to connect with customers over the network is evolving into a model that includes an intermediate party, who takes content from the service provider and operates a business of standing that content up online in such a manner that it is resilient to most forms of hostile attack and attempts to disrupt the service. Consumers of the service still interact with the original content, but do so via the content distribution system and the distribution provider, rather than directly with the original service provider. There are potential concerns here about the extent of alternative supply and competition in this aggregated world of content distribution networks, and about the extent to which such intermediaries can exercise control and influence over both the consumers and the providers of services on the network.

Going Forward

There is no doubt that effective and efficient security in the Internet is of paramount importance to the digital economy, and the public interest is served by having a widely accepted mechanism for making trustable digital attestations that support this function.

It is also in the same public interest to ensure that this system operates with integrity, and that vulnerabilities are not exploited to the extent that they erode all trust. There are aspects of the current security framework in both the address and routing infrastructure and in the naming infrastructure that could be changed, and it is possible to propose secure systems that set a higher standard of security than what we use today. However, this exposes further questions that relate to public policy. Who should underwrite the costs associated with such a change? Should certain levels of adoption of secure practices in online services be mandated? Should certain security standards in consumer equipment and services be enforced? Is it even feasible for countries to set their own domestic standards for online security without reference to the larger Internet? Is the level of interdependence in the Internet now such that national determination of codes of practice in security, and of standards for online products, is neutralised by the globalization of this open digital economy?

The topic of competition reform in the communications services sector absorbs a huge amount of attention from policy makers and public sector regulators. Studies of the evolution of the mobile sector, with the introduction of mobile virtual network operators, the concepts of spectrum sharing, and the issues of legacy dedicated voice services and voice over data, dominate much of the discussion in this sector. Similarly there is much activity in the study of broadband access economics, and grappling with the issues of investment in fibre optic-based last mile infrastructure, including issues of public and private sector investment, access sharing, and retail overlays, again absorbs much attention at present.

You would think that, with all this effort, we would be able to continue the impetus of competitive pressure in this sector, and to continue to invite new entrants to invest both their capital and their new ideas. You would think that we would be able to generate yet further pressure on historically exorbitant margins in this sector and bring this business back to the level of other public sector utility offerings. But you would be mistaken.

Open competitive markets depend on common platforms that support abundant access to essential resources, and in today’s communications networks abundant access to protocol addresses is a fundamental requirement. But over the past decade, or longer, we have consistently ignored the warnings from the technology community that the addresses were in short supply and that exhaustion was a distinct possibility. We needed to change protocol platforms or we would encounter some serious distortions in the network.

We have ignored these warnings. The abundant supply of IP addresses across most of the Internet has already stopped, and we are running the network on empty. The efforts to transition the Internet to a different protocol sputter along. A small number of providers in a small number of countries continually demonstrate that the technical task is achievable, affordable and effective, but overall the uptake of this new protocol continues to languish at less than 3% of the total user population.

The ramifications of this are, so far, largely invisible to the consumer. Web pages still appear to work, and we can all shop online from our mobile devices. But the entrance of new players, and the competitive pressure that they place on the market, is drying up. The lack of protocol addresses is a fundamental barrier to entry. Only the incumbents remain.

Shutting down access to the Internet to all but the existing incumbents should be sending a chilling message to regulators and public policy makers about the futility of generating competitive infrastructure in copper and in radio spectrum if the same competition cannot be maintained in the provision of Internet access and online services.

This is not a comfortable situation, and continued inaction is its own decision. Sitting on our hands only exacerbates the issue, and today’s situation is gathering a momentum that seats incumbents firmly in control of the Internet’s future. This is truly a horrendous outcome. It’s not “open”. Whatever “open” may mean, this is the polar opposite!

By Geoff Huston, Author & Chief Scientist at APNIC

(The above views do not necessarily represent the views of the Asia Pacific Network Information Centre.)
