Should Big Tech own our personal data?
Or should consumers sell their data and receive a “data dividend”?
By Steven Hill, Wired, February 13, 2019
Facebook, Twitter, and Google seem to take turns making the wrong kinds of headlines. Last month it was Google’s turn. The company was fined $57 million by a French regulatory agency, the first time a large Silicon Valley company has been penalized for violating the European Union’s new privacy rules, known as the General Data Protection Regulation (GDPR).
According to the ruling, Google failed to act transparently and to obtain valid consent for the personalization of its ads. Among other things, Google pre-checked hidden consent boxes, violating the GDPR principle that users must affirmatively OK each specific use of their data. European privacy campaigner Max Schrems, one of the plaintiffs in the French complaint, maintains that corporations such as Google and Facebook “have often only superficially adapted their products” to the requirements of the GDPR.
Even a $57 million fine may not compel compliance, since that amount is pocket change for a company valued at three-quarters of a trillion dollars. The constant stream of data privacy scandals from Google, Facebook, Twitter, Amazon, and others gives the unmistakable impression that trying to rein in these abuses is like trying to stop water with a net. The US is one of the few developed nations that have no basic consumer privacy law, leaving the Federal Trade Commission with little institutional mandate for enforcement.
So, what to do? A historical perspective provides insight into this puzzle. Ever since retailer Aaron Montgomery Ward launched his catalog and mail-order business in the 1870s, Americans have made an uneasy peace with the idea of being “tracked.” Initially, Ward mailed unsolicited advertising flyers and one-page catalogs to targeted potential customers living in rural areas and small towns. The business grew and competitors adopted his direct mail tactics.
By the mid-1890s the Sears Roebuck catalog featured hundreds of products and was distributed to over 300,000 addresses in the US. The new direct marketing and sales methods used in the mail-order business took advantage of advances in the technology of the times, including improvements in railways and shipping, better postal service delivery, and cheaper printing costs.
Over the ensuing decades, direct mail to targeted customers was followed by telemarketing, broadcast faxing, demographically targeted infomercials, and email spam. Most recently, this marketing science has been transformed by web-based display ads, search engine optimization, and social media targeting. Each technological iteration has allowed ever more gathering of our personal data, as well as more scientific targeting and delivery of advertising, news, and information.
Now, internet-based companies like Google and Facebook have added an entirely new wrinkle to this business model: Instead of charging for their products, they give them away in exchange for vacuuming up our personal data and monetizing it in various ways. Initially this business model seemed benign—beneficial even—because it provided some useful services for free.
Increasingly, though, the public has become aware of the numerous downsides and hidden costs. Some are mere annoyances, like being constantly tracked by online advertisers (which keep showing you the same pair of shoes you purchased three weeks ago). Others—such as facilitating hate speech, allowing leaks of personal data, enabling Cambridge Analytica-style political targeting, and skewing public discourse through the amplification of fake news—strike at the very heart of personal privacy, societal health, and democratic governance. Such complaints were never leveled at the Sears Roebuck catalog. A fundamental shift has occurred.
European competition commissioner Margrethe Vestager, who has emerged as a key global regulator, recently stated, “This idea of services for free is a fiction… people pay quite a lot with their data for the services they get.” She says, “I would like to have a Facebook in which I pay a fee each month. But I would have no tracking and advertising and the full benefits of privacy.”
In June 2018, California became the first US state to pass a form of GDPR-lite. The California law provides new rights to consumers and aims for more transparency in the murky commerce of people’s personal data. For example, consumers can request that data be deleted and initiate civil action if they believe that an organization has failed to protect their personal information. But the GDPR requires explicit consent from consumers, while California still allows implicit consent, which companies can exploit. Nevertheless, Silicon Valley’s new business model appears to be in the crosshairs.
But we have been here before, too. In 2003 the National Do Not Call Registry was created to offer consumers a choice whether to receive telemarketing calls at home. That year, Congress also enacted a law to curb unwanted email spam. In 2005, President George W. Bush signed the Junk Fax Prevention Act, which allowed opting-out of receiving spam faxes. In 2013 the federal government made it illegal to use an automatic telephone dialer or a prerecorded message to deliver telemarketing messages.
Previous governments have acted to provide relief from abusive practices. What might regulation for internet-based companies look like?
Some Silicon Valley leaders have proposed that individuals become “data shareholders,” able to sell their data to companies that would then have unlimited access to mine their personal information. That’s market-friendly and sounds innovative, but in fact each individual would receive a pittance. If Facebook distributed its profits proportionally among its 2 billion monthly users, each would receive about $9 a year. Given that arithmetic, even economist Glen Weyl’s concept of “data-labor unions,” which would negotiate with the companies holding our personal data on individuals’ behalf, is not a solution.
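The arithmetic behind that pittance is easy to check. The sketch below uses illustrative round numbers consistent with the article’s estimate (roughly $18 billion in annual profit and 2 billion monthly users), not reported financials:

```python
def data_dividend(annual_profit_usd: float, monthly_users: float) -> float:
    """Profit per user per year if distributed equally among users."""
    return annual_profit_usd / monthly_users

# Assumed, illustrative figures: ~$18 billion annual profit,
# ~2 billion monthly active users.
per_user = data_dividend(18e9, 2e9)
print(f"${per_user:.2f} per user per year")  # → $9.00 per user per year
```

Even doubling or tripling the profit assumption leaves each user with pocket change, which is the article’s point: the value of personal data is only large in aggregate.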
Others have proposed a “privacy as paid service” business model, in which companies like Facebook and Google would create a second, premium service that charges for a privacy-friendly, ad-free user experience, similar to the online subscription model of Netflix and Amazon Prime.
But this dodges the real question: whether these companies should continue to control the personal data of their billions of users at all. Silicon Valley’s “service for data” model is a devil’s bargain that seems unworkable in any scenario.
That’s because our personal data is not merely a form of individual property. Increasingly, it’s a core part of our personhood, following us throughout our lives. Personal control over our own data ought to be regarded as a human right that cannot be taken or given away. Selling that information amounts to “a kind of digital prostitution,” according to tech entrepreneur Andrew Keen.
A more salutary alternative vision would be to reconceptualize our private information as an important digital resource that is protected as part of a “data commons.” That would be overseen by an independent watchdog agency and guided by sensible regulations on privacy and the development of artificial intelligence and machine learning.
The US has entered a technological race with China to see who will lead in harnessing the power of AI. To develop AI applications, algorithms have to be trained on massive data feeds, identifying patterns and images. The efforts by Google and Facebook to amass a data-opoly, in order to maximize their advertising profit, do little to help solve the big challenges of the 21st century.
Just as the Tennessee Valley Authority in the 1930s harnessed power generation to drive regional economic development, a Data Oversight Agency could ensure the availability of open-source data sets. This would give smaller companies and university labs as much access as large Silicon Valley and Chinese companies, spurring competition and better ensuring that more AI research is conducted on behalf of the public interest.
There is an innovative alternative to the Frankenstein future that Facebook and Google are pushing. These companies have demonstrated repeatedly that they cannot be trusted to self-regulate. It is time for the government to step up, as it has in the past.
[Steven Hill is a Silicon Valley–based journalist and author of the books Raw Deal: How the Uber Economy and Runaway Capitalism Are Screwing American Workers and Startup Illusion: How the Internet Economy Ruins Our Welfare.]