Trying to Control a Future We Never Get Right

"Technology is the villain," Wilbur H. Ferry wrote in the March 2, 1968 edition of the Saturday Review. "I do not say that technology will be regulated, only that it should be."

Here at Paleofuture we often look at techno-utopian predictions for the future—the people who were developing artificial meat to feed the world's hungry, or the people who thought flying cars would solve our traffic congestion problems. The techno-utopian holds up technological advance as not just a positive force in the world, but as the only answer to bringing about the best things in life. History (20th century America, in particular) is filled with techno-utopians: people who believe that technology is the fundamental ingredient to solving every problem.

But here at Paleofuture we also look (less commonly these days, I'll admit) at the people who warned that technology was the enemy; that scientific developments were undoubtedly going to make our lives worse. The techno-reactionary believes that technology must be controlled so as not to overwhelm the very people it ostensibly exists to help. Wilbur H. Ferry worked at a think tank called the Center for the Study of Democratic Institutions, and Ferry was very much in the latter camp. He thought technology was going to make our lives unbearable.

In 1968 Ferry warned that the technological developments that Americans were seeing at the end of the 1960s had the potential to make our lives worse. Much, much worse. Ferry pointed to both dystopian literature and real-world examples of this throughout his essay, "Must We Rewrite The Constitution to Control Technology?"

The world was changing, and America would have to adapt quickly if it wanted to properly control that change. Ferry cited unnamed works of fiction that showed just how terrible the future could be: "Humans, poor folk, are the objects of the exercise, never the subjects." The only title he calls out by name would have been as familiar to readers in the late '60s as it is today: George Orwell's 1984.

"Not many years ago it was considered regressive and ludditish even to suggest the need for control of technology," Ferry wrote. "Now a general agreement is emerging that something must be done. But on what scale, and by whom?"

By whom, indeed.

Ferry thought that the American judicial system was ill-equipped to deal with the latest technologies of the day. It's fair to say that Ferry's two largest concerns about high-tech had to do with things we're still very much alarmed about today: pollution and privacy.

From Ferry's 1968 essay:

My first example is privacy, today a goner, killed by technology. We are still in the early days of electronic eavesdropping, itself an offshoot of communications research, and at first celebrated as a shortcut to crime control.

Many people today are justifiably concerned that institutions like the Supreme Court are ill-equipped to handle the realities of our modern age. Many were shocked when SCOTUS recently had the good sense to recognize that searching a cell phone is like searching through someone's entire life, and that the least police could do is obtain a warrant first.

But Ferry couldn't even dream of the extent of what was to come. Back in 1968 there was no internet; the average American didn't have a camera and microphone pointed at him nearly 24 hours a day by his own choosing—as many now do with those little things we carry around in our purses and have conveniently built into our computers.

Ferry's essay also predates the establishment of the Environmental Protection Agency; the degradation of the natural environment by technology was another of his concerns. But he seemed even more worried about things like noise pollution from passing jets. Supersonic transport was the great promise of the future, but what would happen to all those poor souls on the ground and their eardrums?

The right to peace and tranquility was nearly as important as the right to privacy from technology's prying eyes, as he saw it. New technologies and all their byproducts would have to justify their existence in a Ferry utopia.

Up to now the attitude has been to keep hands off technological development until its effects are plainly menacing. Public authority usually has stepped in after damage almost beyond repair has been done: in the form of ruined lakes, gummed-up rivers, spoilt cities and countrysides, armless and legless babies, psychic and physical damage to human beings beyond estimate.

Ferry wasn't being hyperbolic when he talked of armless and legless babies. Thalidomide, a drug prescribed to women in the late 1950s and early 1960s to treat morning sickness, caused an estimated 10,000 babies to be born with missing or malformed limbs. Many of the children died.


Above, we see a four-year-old boy named Brett Nielsen practicing writing with a prosthetic arm in 1964. His mother took Thalidomide during her pregnancy. The only way to protect ourselves, Ferry argued, was through radical constitutional change, because as the U.S. Constitution then stood, he saw no way to limit these technological developments:

The measures that seem to me urgently needed to deal with the swiftly expanding repertoire of toxic technology go much further than I believe would be regarded as Constitutional.

The Founding Fathers could not have foreseen supersonic air travel or advanced communications technologies, and therefore, Ferry reasoned, some revision of the laws governing us was in order. "Technology," as Ferry saw it, was something that could destroy the very freedoms, liberties, and ideals that the Founding Fathers sought to protect.

Technology's scope and penetration places in the hands of its administrators gigantic capabilities for arbitrary power. It was this kind of power the Founding Fathers sought to diffuse and attenuate.

While Ferry certainly makes some legitimate points about technology and the role of regulation, he falls into the same trap as just about every other techno-reactionary of any generation. Fundamentally, Ferry believes that technology is developing much more rapidly than it used to and that this tremendous growth is a new thing.

Technology is not just another historical development, taking its place with political parties, religious establishments, mass communications, household economy, and other chapters of the human story. Unlike the growth of those institutions, its growth has been quick and recent, attaining in many cases exponential velocities.

We're still debating these ideas today. It's a bit depressing to acknowledge that none of this is new; how could anyone argue that the control of the railroads and their impact on the development of the United States was any less important than the development of the internet? Or what about electricity? Or even just TV for that matter?

Imagine for a moment the title of this essay passing your lips within earshot of 21st century Silicon Valley. Venture capitalist Tim Draper's plan to carve California into six separate states has the explicit goal of doing quite the opposite of rewriting any laws to "control" technology.

In fact, Draper would like to remove what he sees as burdensome regulation from technological development by making the Bay Area its own state, with laws cast in the Libertarian mold—the "just the cops and courts, ma'am" kind of entrepreneur's utopia. And the high-tech power brokers, while not deluded enough to think the Six Californias plan will actually happen, are at the very least applauding the idea.

It's a bit uncomfortable to recognize that we're not grappling with fundamentally different problems, just the same problems wrapped in shinier technologies. But we also have to resist the urge to overhype the impact of technology, and of its regulation, on our world.

Do we need to amend the U.S. Constitution or establish new government agencies to address the concerns brought on by new technologies? Let's first acknowledge that we have done so in the past (see the FCC, the EPA, etc.). But secondly, let's remind ourselves that this is not a binary issue.

If I've learned anything in my seven-odd years studying the history of futurism, it's that the future is never as great as the utopians promise and never as bad as the doomsdayers warn. It's not a very sexy or headline-grabbing thing to say, but the future is generally pretty lukewarm for the average American. That's not going to stop commenters below, though, from telling me just how amazing or shitty the future is going to be for any number of reasons, absent or because of certain tech regulations.

The world changes, and whether we're talking about net neutrality, so-called ridesharing, or amateur radio, there will always be people who want to control it, and those who want to thrust it upon the world, no matter the consequences. The unsexy truth is that the futurist-extremists rarely win.

Our great big sloshing dystopia hurtling through space (I believe that's the technical term for "Earth") requires laws and regulations to keep people safe. But it also requires a bit of danger to keep things interesting—and, hopefully, to improve people's lives with technology. We must remain wary of the self-professed saviors on either end of the spectrum, both techno-utopian and techno-reactionary, and just do our best to promote both innovation and safety.

Same as it never was, I suppose.


Image: Photographed from the March 2, 1968 issue of Saturday Review; Kid practicing writing with a prosthetic arm on March 25, 1964 via Getty