The Early History of Usenet, Part IX: Retrospective Thoughts

Usenet is 40 years old. Did we get it right, way back when? What could/should we have done differently, with the technology of the time and with what we should have known or could feasibly have learned? And what are the lessons for today?

A few things were obviously right, even in retrospect. For the expected volume of communications and expected connectivity, a flooding algorithm was the only real choice. Arguably, we should have designed a have/want protocol, but that was easy enough to add on later—and was, in the form of NNTP. There were discussions even in the mid- to late-1980s about how to build one, even for dial-up links. For that matter, the original announcement explicitly included a variant form:

Traffic will be reduced further by extending news to support “news on demand.” X.c would be submitted to a newsgroup (e.g., “NET.bulk”) to which no one subscribes. Any node could then request the article by name, which would generate a sequence of news requests along the path from the requester to the contributing system. Hopefully, only a few requests would locate a copy of x.c. “News on demand” will require a network routing map at each node, but that is desirable anyway.
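Flooding is simple enough to sketch in a few lines. Here is an illustrative Python fragment, not the actual netnews code: each node forwards a newly arrived article to every neighbor except the one it came from, and a history of already-seen article IDs breaks loops. The three-node topology is hypothetical.

```python
# Illustrative sketch of flooding with duplicate suppression -- not the
# actual netnews code. Each node forwards a new article to every
# neighbor except the one it arrived from; a history of already-seen
# article IDs keeps the flood from looping.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []      # other Node objects we exchange news with
        self.history = set()     # article IDs we have already accepted

    def receive(self, article_id, body, came_from=None):
        if article_id in self.history:
            return               # duplicate: drop it, breaking any loop
        self.history.add(article_id)
        print(f"{self.name}: accepted {article_id}")
        for peer in self.neighbors:
            if peer is not came_from:
                peer.receive(article_id, body, came_from=self)

# Hypothetical three-node topology
duke, unc, phs = Node("duke"), Node("unc"), Node("phs")
duke.neighbors = [unc, phs]
unc.neighbors = [duke]
phs.neighbors = [duke]

unc.receive("unc.123", "first post")   # floods to duke, then on to phs
```

A have/want protocol replaces the unconditional hand-off with an offer step (“I have unc.123; do you want it?”), which is essentially NNTP’s IHAVE command.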

Similarly, we were almost certainly right to plan on a linked set of star nodes, including, of course, Duke. Very few sites had autodialers, but most had a few dial-in ports.

The lack of cryptographic authentication and hence control mechanisms is a somewhat harder call, but I still think we made the right decision. First, there really wasn’t very much academic cryptographic literature at the time. We knew of DES, we knew of RSA, and we knew of trapdoor knapsacks. We did not know the engineering parameters for either of the latter two and, as I noted in an earlier post, we didn’t even know to look for a bachelor’s thesis that might or might not have solved the problem. Today, I know enough about cryptography that I could, I think, solve the problem with the tools available in 1979 (though remember that there were no cryptographic hash functions then), but I sure didn’t know any of that back then.

There’s a more subtle problem, though. Cryptography is a tool for enforcing policies, and we didn’t know what the policies should be. In fact, we said that quite explicitly:

  • What about abuse of the network? In general, it will be straightforward to detect when abuse has occurred and who did it. The uucp system, like UNIX, is not designed to prevent abuses of overconsumption. Experience will show what uses of the net are in fact abuses, and what should be done about them.
  • Who would be responsible when something bad happens? Not us! And we don’t intend that any innocent bystander be held liable either. We are looking into this matter. Suggestions are solicited.
  • This is a sloppy proposal. Let’s start a committee. No thanks! Yes, there are problems. Several amateurs collaborated on this plan. But let’s get started now. Once the net is in place, we can start a committee. And they will actually use the net, so they will know what the real problems are.

This is a crucial point: if you don’t know what you want the policies to be, you can’t design suitable enforcement mechanisms. Similarly, you have to have some idea who is charged with enforcing policies in order to determine who should hold, e.g., cryptographic keys.

Today’s online communities have never satisfactorily answered either part of this. Twitter once described itself as the “free speech wing of the free speech party”; today, it struggles with how to handle things like Trump’s tweets, and there are calls to regulate social media. Add to that the international dimension, and it’s a horribly difficult problem—and Usenet was by design architecturally decentralized.

Original Usenet never tried to solve the governance problem, even within its very limited domain of discourse. It would be simple, today, to implement a scheme where posters could cancel their own articles. Past that, it’s very hard to decide in whom to vest control. The best Usenet ever had were the Backbone Cabal and a voting scheme for the creation of new newsgroups, but the former was dissolved after the Great Renaming because it was perceived to lack popular legitimacy, and the latter was very easily abused.
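The self-cancel part, at least, is mechanically easy with modern tools. The following is a minimal sketch, assuming each poster holds a signing key and every article carries a signature; a cancel is honored only if it verifies under the key that signed the original. It uses Ed25519 from the Python cryptography package purely for illustration; nothing like it was available in 1979, and the in-memory article store is an invented stand-in.

```python
# Minimal sketch of self-cancel via digital signatures (illustrative only).
# Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

articles = {}  # article_id -> (poster's public key, body, signature)

def post(article_id, body, private_key):
    sig = private_key.sign(f"post:{article_id}:{body}".encode())
    articles[article_id] = (private_key.public_key(), body, sig)

def cancel(article_id, cancel_sig):
    """Honor a cancel only if it verifies under the original poster's key."""
    public_key, _body, _sig = articles[article_id]
    try:
        public_key.verify(cancel_sig, f"cancel:{article_id}".encode())
    except InvalidSignature:
        return False          # forged cancel: refuse it
    del articles[article_id]
    return True

poster = Ed25519PrivateKey.generate()
post("1@example", "hello, net", poster)
ok = cancel("1@example", poster.sign(b"cancel:1@example"))
assert ok and "1@example" not in articles
```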

Using threshold cryptography to let M out of N chosen “trustees” manage Usenet works technically but not politically, unless the “voters”—and who are they, and how do we ensure one Usenet user, one vote?—agree on how to choose the Usenet trustees and what their powers should be. There isn’t even a worldwide consensus on how governments should be chosen or what powers they should have; adding cryptographic mechanisms to Usenet wouldn’t solve it, either, even for just Usenet.
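The technical half of that claim deserves one concrete step: an M-of-N trustee scheme can be built from Shamir secret sharing, where the control key is the constant term of a random degree-(M−1) polynomial, each trustee holds one point on the curve, and any M points recover the key by Lagrange interpolation. A toy sketch (tiny prime, illustrative only; a real deployment would use a large prime and a vetted library):

```python
# Toy Shamir M-of-N secret sharing over a prime field (illustrative only).
import random

P = 2**31 - 1  # a small Mersenne prime, fine for a toy example

def make_shares(secret, m, n):
    """Split `secret` into n shares, any m of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, m=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 suffice
assert reconstruct(shares[1:4]) == 123456789
```

As the paragraph says, everything hard lies outside the code: deciding who the N trustees are and why anyone should accept them.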

We did make one huge mistake in our design: we didn’t plan for success. We never asked ourselves, “What if our traffic estimates are far too low?”

There were a number of trivial things we could have done. Newsgroups could always have been hierarchical. We could have had more hierarchies from the start. We wouldn’t have gotten the hierarchy right, but computers, other sciences, humanities, regional, and department would have been obvious choices and not that far from what eventually happened.

A more substantive change would have been a more extensible header format. We didn’t know about RFC 733, the then-current standard for ARPANET email, but we probably could have found it easily enough. But we did know enough to insist on having “A” as the first character of a post, to let us revise the protocol more easily. (Aside: tossing in a version indicator is easy. Ensuring that it’s compatible with the next version is not easy, because you often need to know something of the unknowable syntax and semantics of the future version. B-news did not start all articles with a “B”, because that would have been incompatible with its header format.)
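To make the version-indicator point concrete, here is a sketch of dispatching on that first byte, assuming the commonly described A-news layout (an “A” plus article ID on the first line, then newsgroups, path, date, and title lines, then the body):

```python
# Sketch of dispatching on a version byte, assuming the commonly
# described A-news layout. Illustrative, not the real code.
def parse_article(text):
    lines = text.split("\n")
    if not lines[0].startswith("A"):
        raise ValueError("unknown protocol version: " + lines[0][:1])
    return {
        "id": lines[0][1:],
        "newsgroups": lines[1],
        "path": lines[2],
        "date": lines[3],
        "title": lines[4],
        "body": "\n".join(lines[5:]),
    }

sample = ("Aunc.123\nNET.general\nduke!unc!grg\n"
          "Fri Jan 18 11:00:00 1980\nhello\nBody text here.\n")
print(parse_article(sample)["title"])   # -> hello
```

The aside’s catch is visible here: the parser can reject an article that doesn’t begin with “A”, but it cannot know what the inside of a future “B” article will look like.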

The biggest success-related issue, though, was the inability to read articles by newsgroup and out of order within a group. Ironically, Twitter suffers from the same problem, even now: you see a single timeline, with no easy way to flag some tweets for later reading and no way to sort different posters into different categories (“tweetgroups”?). Yes, there are lists, but seeing something in a list doesn’t mean you don’t see it again in your main timeline. (Aside: maybe that’s why I spend too much time on Twitter, both on my main account and on my photography account.)
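The bookkeeping needed for per-group, out-of-order reading is modest, which is what made its absence so painful; it is essentially what the later .newsrc file recorded, with lines like “NET.general: 1-271,274”. A minimal sketch:

```python
# Minimal sketch of per-newsgroup read tracking, in the spirit of the
# later .newsrc convention (e.g. "NET.general: 1-271,274"). Illustrative.
read = {}  # newsgroup -> set of article numbers already read

def mark_read(group, number):
    read.setdefault(group, set()).add(number)

def unread(group, highest):
    """Articles 1..highest in `group` not yet read, in any order you like."""
    return [n for n in range(1, highest + 1) if n not in read.get(group, set())]

mark_read("NET.general", 1)
mark_read("NET.general", 3)        # read out of order: skip 2 for later
print(unread("NET.general", 4))    # -> [2, 4]
```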

Suppose, in a desire to relive my technical adolescence, I decided to redesign Usenet. What would it look like?

Nope, not gonna go there. Even apart from the question of whether the world needs another social noise network, there’s no way the human attention span scales far enough. The cognitive load of Usenet was far too high even at a time when very few people, relatively speaking, were online. Today, there are literally billions of Internet users. I mean, I could specify lots of obvious properties for Usenet: The Next Generation—distributed, peer-to-peer, cryptographically authenticated, privacy-preserving—but people still couldn’t handle the load, and there are still the very messy governance problems like illegal content, Nazis, trolls, organization, and more. The world has moved on, and I have, too, and there is no shortage of ways to communicate. Maybe there is a need for another, but Usenet—a single infrastructure intended to support many different topics—is probably not the right model.

And there’s a more subtle point. Usenet was a batch, store-and-forward network because that’s what the available technology would support. Today, we have an always-online network with rich functionality. The paradigm for how one interacts with a network would and should be completely different. For example: maybe you can only interact with people who are online at the same time as you are—and maybe that’s a good thing.

Usenet was a creation of its time, but around then, something like it was likely to happen. To quote Robert Heinlein’s The Door into Summer, “you railroad only when it comes time to railroad.” The corollary is that when it is time to railroad, people will do so. Bulletin Board Systems started a bit earlier, though it took the creation of the Hayes Smartmodem to make them widespread in the 1980s. And there was CSNET, an official email gateway between the ARPANET and dial-up sites, started in 1981, with some of the same goals. We joked that when professors wanted to do something, they wrote a proposal and received lots of funding, but we, being grad students, just went and did it, without waiting for paperwork and official sanction.

Usenet, though, was different. Bulletin Board Systems were single-site, until the rise of FidoNet a few years later; Usenet was always distributed. CSNET had central administration; Usenet was, by intent, laissez-faire and designed for organic growth at the edges, with no central site that in some way needed money. Despite its flaws, it connected many, many people around the world, for more than 20 years, until the rise of today’s social networks. And, though the user base and usage patterns have changed, it’s still around 40 years later.

This concludes my personal history of Usenet.

By Steven Bellovin, Professor of Computer Science at Columbia University

Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds several patents on cryptographic and network protocols. He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs.

Comments

That was hugely enjoyable and accurate (from my own experience/memory of the time)
By George Michaelson – Jan 13, 2020 6:08 AM

From a European perspective, bound into systems in the A/B crossover period and the Great Renaming, that was a blast from the past. UUCP was predominantly cost-bound, but ubiquitous. There were some odd quirks in the European experience: X.25 network systems which were fully funded by science and engineering councils in Europe, but which didn’t formally want to accept the UUCP-based protocols “on top,” so there were parallel paths where some information went over X.25 and some over modem. Very odd, but perhaps not so odd given the tension over funding and formalism.

There were also gateways from mailing lists to USENET and back, which had interesting consequences: list exploders could receive USENET articles, then redirect them back into USENET by mistake. Oh dear.

Jacob Palme, in Sweden, worked on systems which linked DEC TOPS-10 and VAX/VMS in ways remarkably similar to the lived experience inside USENET.

UCL-CS operated gateways which bonded the early Internet into JANET, including cross-linking news and mail lists that flowed bi-directionally through the gateway systems.

I very much enjoyed reading your recollections, Steve. Well done for writing it down before it faded in the neurons!
