I have written before about the ways artificial intelligence and algorithms are driving the end of democracy as we know it. But it seems that things are much worse than I understood a few weeks ago.
If you have a Facebook account, then you know the deal: you get to connect with your friends, family, loved ones, and those people from high school you never talk to, all for free. In return, Facebook collects information about you, based on your profile, your online behavior, and your preferences, and uses it to sell ads. And Facebook isn't alone. Google is engaged in essentially the same practice of marrying online information with our offline lives. Google's new ability to match people's offline credit card purchases to their online activity is a stunning display of the unexpected and often illegible mechanisms of extraction, commodification, and control that are used to produce new markets of behavioral prediction and modification.
The new ability, which Google unveiled a few days ago, allows the company to connect the dots between the ads it shows its users and what those users actually end up buying. This is a crucial link for Google's core business, which, despite the company's inventiveness, remains a matter of attracting users to its predominantly free services, collecting user data, and leveraging that data to sell advertising. Of course, Google has been able to track your location using Google Maps for a long time. Since 2014, it has used that information to tell advertisers how often people visit their stores. But store visits aren't purchases, so, as Google said in a blog post on its new service for marketers, it has partnered with "third parties" that give it access to 70 percent of all credit and debit card purchases.
So, if you buy something with a card, there is roughly a 70 percent chance that Google knows about it.
Needless to say, that information is supposed to be processed by complex, patent-pending mathematical formulas designed to protect consumers' privacy when a Google user is matched with a shopper who makes a purchase in a brick-and-mortar store. The formulas convert people's names and other purchase information, including the time stamp, location, and amount of the purchase, into anonymous strings of numbers. According to company executives, who called the process "double-blind" encryption, the formulas make it impossible for Google to learn the identities of real-world shoppers, and for retailers to learn the identities of Google's users.
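Google has not published these formulas, but the general idea behind this kind of matching can be sketched with keyed hashing. The snippet below is a minimal, hypothetical illustration, not Google's actual scheme: both sides apply the same keyed hash (HMAC) to a shared identifier such as an email address, so purchases can be attributed to ad viewers by comparing opaque tokens instead of names. The key, the email addresses, and the purchase amounts are all invented for the example.

```python
import hashlib
import hmac

# Hypothetical shared secret agreed by both parties (not a real key)
SHARED_KEY = b"demo-shared-secret"

def blind(identifier: str) -> str:
    """Keyed hash of an identifier; neither side can reverse it to the raw value."""
    return hmac.new(SHARED_KEY, identifier.lower().encode(), hashlib.sha256).hexdigest()

# Ad-platform side: blinded tokens of users who were shown an ad
ad_viewers = {blind("alice@example.com"), blind("bob@example.com")}

# Card-network side: blinded tokens attached to in-store purchase amounts
purchases = [(blind("alice@example.com"), 42.50), (blind("carol@example.com"), 19.99)]

# Match on opaque tokens only: attributed sales without exchanging identities
attributed = [amount for token, amount in purchases if token in ad_viewers]
print(attributed)  # [42.5]
```

In this toy version, each side sees only hash strings from the other, which captures the "double-blind" intuition; the real system would additionally need to resist re-identification attacks, which is presumably what the patent-pending formulas address.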
You might be thinking that we users willingly hand our personal data over to Google; that's part of the service agreement. We make similar agreements with banks, online shopping companies, and more. But whether all these parties then strike agreements among themselves to share our private information is a completely different matter, and one to which we haven't necessarily consented. When ProPublica started investigating Facebook's advertising platform to see what parameters ad buyers could use to target an ad, it found close to 600 categories described as "provided by a third party." Most of those had to do with users' financial attributes, and none of them showed up in the crowdsourced list that users sent in. It turns out that Facebook's transparency has its limits.
From watching users to programming their behavior?
A couple of years ago, Vladan Joler and his colleagues in Belgrade began investigating the inner workings of Facebook. The team, which includes experts in cyber-forensic analysis and data visualization, had already looked into what Joler calls "different forms of invisible infrastructures" behind Serbia's internet service providers. He reels off the familiar, but still staggering, numbers: the barely teenage Silicon Valley firm stores some 300 petabytes of data, boasts almost two billion users, and raked in almost $28bn (£22bn) in revenue in 2016 alone. And yet, Joler argues, we know almost nothing about what goes on under the bonnet, despite the fact that we, as users, provide most of the fuel for free. Joler and his colleagues have created remarkable flow charts that take hours to absorb fully, but which show how the data we give Facebook is used to calculate our ethnic affinity (Facebook's term), sexual orientation, political affiliation, social class, travel schedule and, lately, financial status, thanks to our banks. I wonder how much it matters that an algorithm transcribes my identity into numbers when my personal data and credit card number are already out there.
Towards a new form of governance?
Some might argue that none of the information "provided by third parties" can actually influence their daily lives or shopping preferences; all it does is make Google's advertising revenues bigger through data-driven efficiency. But we must not neglect the fact that Big Data giants like Google and Facebook not only observe and collect; they control the information flow overall. In 2016, we all learned that Facebook's algorithms were, in many cases, feeding users fake news. In the article "Will Democracy Survive Big Data and Artificial Intelligence?", published in Scientific American, a group of scientists describes what is coming for all of us. "Today, Singapore is seen as a perfect example of a data-controlled society. What started as a program to protect its citizens from terrorism has ended up influencing economic and immigration policy, the property market and school curricula. China is taking a similar route. Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. It involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called "Citizen Score", which will determine under what conditions they may get loans, jobs, or travel visa to other countries. This kind of individual monitoring would include people's Internet surfing and the behavior of their social contacts."
Their main conclusion is that the advanced application of artificial intelligence for commercial purposes is not only a tool of observation and categorization but an instrument of governance. "The more is known about us, the less likely our choices are to be free and not predetermined by others. But it won't stop there. Some software platforms are moving towards "persuasive computing." In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people."
Some researchers go even further and believe that we are entering what is called "surveillance capitalism". The term was coined by Shoshana Zuboff in her excellent article "Big other: surveillance capitalism and the prospects of an information civilization", published in the Journal of Information Technology (2015, 30, 75–89). Rather than offer conclusions of my own, I prefer to finish this post with a short excerpt from her work:
"These developments became the basis for a fully institutionalized new logic of accumulation that I have called surveillance capitalism. In this new regime, a global architecture of computer mediation turns the electronic text of the bounded organization into an intelligent world-spanning organism that I call Big Other… To the question 'who participates?' the answer is – those with the material, knowledge, and financial resources to access Big Other. To the question 'who decides?' the answer is, access to Big Other is decided by new markets in the commodification of behaviour: markets in behavioural control. These are composed of those who sell opportunities to influence behaviour for profit and those who purchase such opportunities… Surveillance capitalism is immune to the traditional reciprocities in which populations and capitalists needed one another for employment and consumption. In this new model, populations are targets of data extraction. This radical disembedding from the social is another aspect of surveillance capitalism's anti-democratic character. Under surveillance capitalism, democracy no longer functions as a means to prosperity; democracy threatens surveillance revenues."