It’s time to rebuild the web

The web was never supposed to be a few walled gardens of concentrated content owned by a few major publishers; it was supposed to be a cacophony of different sites and voices.

By Mike Loukides
April 3, 2018

Anil Dash’s “The Missing Building Blocks of the Web” is an excellent article about the web as it was supposed to be, using technologies that exist but have been neglected or abandoned. It’s not his first take on the technologies the web has lost, or on the possibility of rebuilding the web, and I hope it’s not his last. And we have to ask ourselves what would happen if we brought back those technologies: would we have a web that’s more humane and better suited to the future we want to build?

I’ve written several times (and will no doubt write more) about rebuilding the internet, but I’ve generally assumed the rebuild will need peer-to-peer technologies. Those technologies are inherently much more complex than anything Dash proposes. While many of the technologies I’d use already exist, rebuilding the web around blockchains and onion routing would require a revolution in user interface design to have a chance; otherwise, it would be a playground for the technology elite. In contrast, Dash’s “missing building blocks” are fundamentally simple. They can easily be used by people who don’t have a unicorn’s worth of experience as web developers and security administrators.


Dash writes about the demise of the View Source browser feature, which displays the HTML from which the web page is built. View Source isn’t dead, but it’s sick. He’s right that the web succeeded, in part, because people with little background could look at the source for the pages they liked, copy the code they wanted, and end up with something that looked pretty good. Today, you can no longer learn by copying; while View Source still exists on most browsers, the complexity of modern web pages has made it next to useless. The bits you want are wrapped in megabytes (literally) of JavaScript and CSS.

But that doesn’t have to be the end of the story. HTML can be functional without being complex. Most of what I write (including this piece) goes into a first draft as very simple HTML, using only a half-dozen tags. Simple editors for basic web content still exist. Dash points out that Netscape Gold (the paid version of Netscape) had one, back in the day, and that there are many free editors for basic HTML. We’d have to talk ourselves out of the very complex formatting and layout that, after all, just gets in the way. Ask (almost) any designer: simplicity wins, not a drop-dead gorgeous page. We may have made View Source useless, but we haven’t lost simplicity. And if we make enough simple sites, sites from which viewers can effectively copy useful code, View Source will become useful again, too. You can’t become a web developer by viewing Facebook’s source; but you might by looking at a new site that isn’t weighed down by all that CSS and JavaScript.
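
To make “simple” concrete, here’s a sketch of the kind of first draft I’m describing: a complete, working page built from roughly a half-dozen tags, the sort of thing View Source once taught. The content is invented for illustration.

```html
<!DOCTYPE html>
<html>
  <head>
    <title>A simple page</title>
  </head>
  <body>
    <!-- Headings, paragraphs, and links cover most writing. -->
    <h1>A simple page</h1>
    <p>This is plain HTML: no frameworks, no build step, nothing
       to obscure what <a href="https://example.com">View Source</a>
       shows you.</p>
  </body>
</html>
```

Anyone can copy this, change the words, and have a site of their own; that’s the learning loop Dash is mourning.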

The web was never supposed to be a few walled gardens of concentrated content owned by Facebook, YouTube, Twitter, and a few other major publishers. It was supposed to be a cacophony of different sites and voices. And it would be easy to rebuild this cacophony—indeed, it never really died. There are plenty of individual sites out there still, and they provide some (should I say most?) of the really valuable content on the web. The problem with the megasites is that they select and present “relevant” content to us. Much as we may complain about Facebook, selecting relevant content from an ocean of random sites is an important service. It’s easy for me to imagine relatives and friends building their own sites for baby pictures, announcements, and general talk. That’s what we did in the 90s. But would we go to the trouble of reading all those sites? Probably not. I didn’t in the 90s, and neither did you.

We already have a tool for solving this problem. RSS lets websites provide “feeds” of news and new items. Applications like Feedly and Reeder let you build a collection of sites that interest you, and show you what’s changed since the last time you visited. While I’d never check even a dozen sites by hand each day, I use Feedly to monitor hundreds of websites with a single scan every morning. And, unlike Facebook, Feedly doesn’t know anything about its users except for the sites they read.
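
For the technically curious, the core of an RSS reader is a small loop: fetch each feed, parse the XML, and show what’s there. Here’s a minimal sketch using only Python’s standard library; the feed URL is a placeholder, and real readers like Feedly add Atom support, caching, and read/unread tracking on top of this.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder feed URLs; substitute the sites you actually follow.
FEEDS = [
    "https://example.com/feed.xml",
]

def fetch_items(feed_url):
    """Download an RSS 2.0 feed and yield (title, link) for each item."""
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    # RSS 2.0 nests <item> elements inside <channel>.
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        yield title, link

for url in FEEDS:
    for title, link in fetch_items(url):
        print(f"{title}\n    {link}")
```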

Feedly has a decent user interface, though it could be improved; it would have to be better to become popular with people who aren’t technically literate. (Sorry.) Still, though, the UI gap for RSS is much smaller than for technologies like Tor. And if we’re going to rebuild the net, we’ll probably be better off choosing simple technologies rather than bright, shiny, and complex ones. Could someone build an RSS reader that made the web of independent sites as approachable as Facebook? I don’t see why not—and users would have complete control over what they see. That’s important; in a recent tweet, Dash says:

Google’s decision to kill Google Reader [their RSS client] was a turning point in enabling media to be manipulated by misinformation campaigns. The difference between individuals choosing the feeds they read and companies doing it for you affects all other forms of media.

Yes, there would still be plenty of sites for every conspiracy theory and propaganda project around; but in a world where you choose what you see rather than letting a third party decide for you, these sites would have trouble gaining momentum.

I don’t want to underestimate the difficulty of this project, or overestimate its chances of success. We’d certainly have to get used to sites that aren’t as glossy or complex as the ones we have now. We might have to revisit some of the most hideous bits of the first-generation web, including those awful GeoCities pages. We would probably need to avoid fancy, dynamic websites; and, before you think this will be easy, remember that one of the first extensions to the static web was CGI, mostly in the form of Perl scripts. We would be taking the risk of repeating the same mistakes that brought us to our current mess. Simplicity is a discipline, and not an easy one. However, by losing tons of bloat, we’d end up with a web that is much faster and more responsive than what we have now. And maybe we’d learn to prize that speed and that responsiveness.

We’d also need to avoid many of the privacy and security flaws that were rampant in the early internet, and for which we’re still paying. That technical debt came due a long time ago. Paying off that debt may require some complex technology, and some significant UI engineering. All too often, solutions to security problems make things more difficult for users as well as attackers. Cloudflare’s new 1.1.1.1 service addresses some basic problems with our DNS infrastructure and privacy, and their CEO proposes some more fundamental changes, like DNS over HTTPS. But even simple changes like this require non-technical users to change configuration settings that they don’t understand. This is where we really need the help of UX designers. We can’t afford to make “safe” difficult.
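
To give a sense of what DNS over HTTPS involves, here’s a minimal sketch in Python against Cloudflare’s public JSON resolver API (the application/dns-json endpoint at cloudflare-dns.com). It’s an illustration, not production code, and it rather proves the point: nobody should have to write code, or edit resolver settings, to get private DNS by default.

```python
import json
import urllib.request

def resolve_doh(hostname, record_type="A"):
    """Resolve a hostname via Cloudflare's DNS-over-HTTPS JSON API.

    Because the query rides over HTTPS, on-path observers can't read
    or tamper with it the way they can with classic plaintext DNS.
    """
    url = (f"https://cloudflare-dns.com/dns-query"
           f"?name={hostname}&type={record_type}")
    req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)
    # Each "Answer" entry carries one record's data (e.g., an IP address).
    return [record["data"] for record in answer.get("Answer", [])]

print(resolve_doh("example.com"))
```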

And we’d have to admit that our current web, with all its flaws, evolved from these simple building blocks. To some extent, then, it’s what we wanted—or, perhaps, what we deserved. It’s certainly what we accepted, which raises the question: “why wouldn’t we accept the same thing again?” Starting over means little if we’re destined to repeat the mistakes we’ve already made. So, we would need to develop and incorporate technology for preventing abuse; we would need to build a public space that really is a public space, not someone else’s private property; and above all, we would need to divest ourselves of the arrogance that assumes “because we’ve built it, it is good.” As Dash said six years ago, well before Facebook’s Very Bad Month, we would need to “take responsibility and accept blame.”

Regardless of how it happens, it’s time to start thinking about rebuilding the web. That project is only likely to succeed if the rebuilt web is compatible with what we have today, including Facebook and YouTube. And it’s only likely to succeed if it’s simple enough for anyone to use. Anil Dash has outlined a way forward. It’s not what I would have suggested, but it has a much higher chance of succeeding. Time to (re)build it.
