IDEAS | DOUG HILL

Staying ahead of technology’s curves

America’s relationship with technology has always been marked by excitement with an undercurrent of unease. We’re exhilarated by the powers granted by new machines and techniques, while remaining worried about their effects — on the economy, on the environment, on society, on ourselves.

As the pace of innovation increases, so too does the scope of its repercussions. Familiar declarations of promise and risk are evoked by artificial intelligence and robotics, Big Data, automated cars, drones, fracking, virtual reality, nuclear power, nanotechnology, genetic engineering, and pretty much everything involving the Internet.

Given this inherent duality, we’d like to think that somebody’s watching to make sure we capitalize on the good things offered by new technologies while avoiding the bad. But is that the case? Is anyone minding the store?

As it turns out, a lot of people are. In fact, alongside the explosion of new technologies over the past 20 years has come a conspicuous flowering of the discipline known as technology assessment. The term describes the attempt to understand what the economic, social, and environmental effects of new technologies might be. A vast array of technologies is being analyzed by an even vaster array of private and public agencies, universities, academies, think tanks, committees, and commissions. Studies are being conducted, papers issued, conferences and workshops held. The goal is to alert policy makers in government, and by extension the people they govern, to the opportunities and risks at hand, and to suggest, where appropriate, alternative ways of moving forward.

The question is whether the people responsible for acting on all those assessments are paying any attention.

One of the earliest attempts to take stock of the collective forces of technological change was published in 1933, when a survey commissioned by President Hoover warned that “a policy of drift” was no longer an acceptable strategy for coping with those forces. A more ambitious study appeared four years later, commissioned by Hoover’s successor, Franklin D. Roosevelt. “Technological Trends and National Policy” weighed in at 450,000 words and held out hope, Roosevelt said, “that we can anticipate some of the effects of major inventions and make plans to meet new situations that will arise as these inventions come into widespread use.”

Technological anxiety focused mainly on the atomic bomb during the first decades of the Cold War, but the emergence of the environmental and counterculture movements during the 1960s focused attention once more on the broader effects of major inventions. Inspired in part by those concerns, Congress created the Office of Technology Assessment in 1972.

Over its 23-year lifetime, the OTA produced some 750 reports on issues ranging from electronic surveillance and genetically modified food products to solar energy, acid rain, and the management of nuclear waste. By mandate, the OTA’s role was to advise rather than direct; its reports didn’t include recommendations, but they did suggest a range of potential directions in which Congress could move. The office managed to ruffle the occasional political feather nonetheless, most notably with a report on President Reagan’s Strategic Defense Initiative, better known as “Star Wars.” The report said the proposed system was so dependent on unreliable or nonexistent software that, if deployed to thwart a nuclear attack, it would produce only “catastrophic failure.”

Congress defunded the OTA in 1995, when Newt Gingrich and the new Republican majority launched their campaign to eliminate every ounce of perceived excess in the federal government. The office has since attained almost legendary status in the community of those who believe in the value of technology assessment, many of whom have lobbied repeatedly for its restoration. Chief among those believers is Rush Holt, a physicist turned politician who introduced legislation to reinstate the OTA during almost every one of the 16 years he served in Congress. As late as last December, 15 representatives (13 Democrats and two Republicans) picked up the charge. “The expertise provided by the OTA saved taxpayers billions of dollars by identifying effective areas for future investment and avoiding wasted money on technologies and policies that did not and could not work,” they wrote in a letter to House Speaker Paul Ryan. “As technology continues to advance and budgets continue to shrink, this kind of trustworthy, non-partisan analysis is no less necessary today than when the OTA was first started 43 years ago.”

Like all the previous proposals, that one went nowhere.

The legacy of the Office of Technology Assessment continues to be felt nonetheless, not least in the profusion of its veterans who now occupy prominent positions in and out of government. These include Marjory Blumenthal, until last April the executive director of the President’s Council of Advisors on Science and Technology, and Ashton Carter, current secretary of defense. (Carter wrote a preliminary, dismissive report for the OTA on Reagan’s Strategic Defense Initiative.) The OTA also served as a model for dozens of technology assessment institutions in Europe, which have since adapted and expanded upon its approach. “The Europeans were perplexed beyond belief when the OTA was abolished,” says Richard Sclove, an activist and author who has worked in the assessment trenches for more than 30 years. “That the US would pioneer it and then abolish it was beyond their comprehension.”

The closest equivalent to the OTA to be found in Washington today is the Government Accountability Office, which in 2002 began issuing assessment reports on issues requested by committees or members of Congress. What began as a pilot program has since expanded into an ongoing operation that is consciously modeled after the OTA. The main difference is its size. Whereas the OTA completed about 20 assessment reports a year, the Government Accountability Office completes just two or three.

Even so, there’s no shortage of assessment information available to whoever wants to find it. It’s not possible to provide anything close to a comprehensive list of the actors who populate the assessment landscape today, but a representative sample would start with the National Academies of Sciences, Engineering, and Medicine. The academies are commissioned by Congress, the White House, and independent foundations to conduct an average of 300 studies a year, appointing from their memberships volunteer panels of experts in a given field. At least half of those studies fall under the rubric of technology assessment and cover everything from pesticides and geoengineering to nuclear power and bioterrorism.

The White House Office of Science and Technology Policy isn’t as prolific as the academies, but the power of the presidency makes it singularly influential. President Obama has pursued technology assessment more aggressively than any of his predecessors, forming, a year after taking office, a committee with representatives from 20 agencies that is specifically assigned to consider technologies so new that their policy implications have yet to be determined. One of the committee’s projects is updating the entire federal mechanism for regulating biotechnology, which is advancing in ways that make the old rules obsolete.

From outside government, a disparate, often clamorous collection of voices weighs in on technology’s opportunities and dangers. Think tanks from the Wilson Center to the Heritage Foundation issue position papers and reports, as do professional organizations like the American Association for the Advancement of Science, now headed by Holt, the OTA advocate and former congressman. There’s a Center for Nanotechnology in Society at Arizona State University, an Initiative on the Digital Economy at MIT, and a Genetic Engineering and Society Center at North Carolina State University. Hundreds of corporate and citizen groups, from the American Petroleum Institute to Friends of the Earth, offer data and opinions, while the Institute for the Future, the Future of Humanity Institute, and the Future of Life Institute all contemplate where technology might be taking us in, yes, the future.

One could ask — as many undoubtedly do — if all of this isn’t enough, or more than enough. Perhaps it is, but significant questions remain.

Among them is a concern that the federal agencies responsible for regulating technologies are more involved in promoting them as engines of American economic power than in making sure they’re safe. The National Nanotechnology Initiative, for example, is responsible for coordinating the nano-related work of 20 federal departments, agencies, and commissions. Its proposed budget for 2017 is $1.4 billion, just 10 percent of which is earmarked for investigating the technology’s potential impact on health and the environment. Meanwhile, the activist group the Center for Food Safety has identified hundreds of products currently on the market that contain nanoparticles, among them baby blankets, chocolate syrup, and instant mashed potatoes.

Critics have long expressed the concern that federal agencies are in the pocket of the vested interests whose products they monitor. Whether or not that’s the case, there’s widespread agreement that those agencies often lack the resources to conduct adequate research on their own, thus making them unavoidably beholden to outside interests. This may explain why the National Highway Traffic Safety Administration was willing to accept Tesla’s assurances that its Autopilot feature was safe until Joshua Brown’s Model S drove him under a truck in June.

Nor is academia immune from potential conflicts of interest. Some of the universities that house the most active technology assessment centers also accept hundreds of millions of dollars’ worth of public and private grants to develop the very technologies those centers are charged with assessing.

This points to what may be the most pressing assessment conundrum of all — the relentless pace of technological development. New technologies are being discovered, adapted, and introduced faster than regulatory or assessment organs can respond. For Timothy Persons, codirector of the technology assessment program at the Government Accountability Office, finishing a report in a year to 14 months is a race against time. “It’s like trying to hit a moving target,” he says. “There’s always something we could throw in at the last minute. Something just happened, some change, some new update.”

A current example is the new genetic engineering technology known as CRISPR, which makes it easier and cheaper than ever before to alter the genomes of living organisms, including human beings. CRISPR’s ability to edit DNA sequences was first demonstrated in 2012; by 2014 more than 150 CRISPR-related patent applications had been filed in hopes of reaping both scientific breakthroughs and breathtaking profits. Jennifer Kuzma, codirector of the Genetic Engineering and Society Center at North Carolina State, says that any one of several federal agencies could be responsible for regulating different applications of this new technique. Some companies are rushing to take advantage of those gaps, she adds, intentionally editing the DNA of various agricultural products so that the resulting organism falls between the cracks of existing regulations.

This is one of the uncertainties the Obama administration is working to resolve, but it’s impossible to say what the next president’s policies toward technology will be. The inconstancy of political leadership is one of the arguments for a revival of the Office of Technology Assessment, or something like it. Such an independent institution would theoretically be able to serve as a trustworthy collector and evaluator of assessment information, no matter who occupies the Oval Office or holds seats in Congress. “We need to invest much more heavily in an infrastructure for tracking technological change,” says Erik Brynjolfsson, director of MIT’s Initiative on the Digital Economy. “Not understanding the problem doesn’t make it go away. It just makes it worse.”

Of course, even if something akin to the OTA were created today, it would almost certainly remain an advisory body, without the power to enforce any recommendations it might make. Given the current political climate in Washington, there’s little expectation that Congress would take forceful action aimed at regulating technology, regardless of the advice it receives.

Kuzma addressed this “elephant in the room,” as she put it, in a letter to the journal Issues in Science and Technology. “A coordinating mechanism needs sharp political teeth, as well as independence from undue influence,” she wrote. “Currently, the vast majority of power lies in the hands of technology developers who can fund political campaigns. Until high-level policy makers are willing to consider alternatives to [an approach] in which technological development and progress take precedence above all else, the rest of us will be resigned to watch the world change in ways that we do not necessarily want.”

As clogged and conflictual as the legislative process in Washington may be, the pattern Kuzma describes isn’t new. Despite our misgivings, Americans have historically considered technological development and progress to be so closely intertwined as to be virtually identical. As a result, we’ve tended to introduce technologies first and ask questions later. The art of assessment acknowledges that while we can never completely predict the future, we have a responsibility to do what we can.

The technologies coming on the scene today are possessed of extraordinary disruptive powers. Throwing them into the culture without trying to anticipate and prepare for their potential consequences seems, more than ever, like a bad idea.


Doug Hill is the author of the forthcoming book “Not So Fast: Thinking Twice About Technology.”