Following the events of September 11, 2001, efforts to develop internet privacy legislation were put on hold. There are now very few regulations in place to prevent your personal data from being gathered, sold to advertisers, and used to build ever more powerful predictive smart gadgets. This data includes browsing history, phone numbers, email addresses, location history, biometric data, and even a psychological profile derived from your social media accounts. As more powerful “smart” gadgets reach the market, this data grows more detailed and granular, shrinking the few areas of life not yet monitored for behavioral data.
Let’s face it: it would be incredibly difficult to get by in life these days if you did not interact with the digital world. And this is the ideal environment for today’s surveillance capitalists.
Surveillance capitalism is the business of profiting from people’s personal information. Location monitoring, search history, contacts, browser history, biometric data, when you sleep and wake up, how frequently you charge your battery — the list goes on and on. This data is then examined for behavioral tendencies and sold to marketers to help them better target customers.
Surveillance capitalism converts all elements of human experience into data and sells it to a variety of corporations for a number of purposes.
Do you realize how companies like Google, Facebook, Microsoft, and Amazon process and sell your movements, voice, activities, experiences, and behaviors? Few of us do, and the proponents of surveillance capitalism would like to keep it that way.
To begin with, your personal information can assist firms in better targeting their advertising efforts. You’re getting near to a McDonald’s, aren’t you? Here’s an advertisement for a Big Mac.
However, it may also aid in the development of predictive goods, such as virtual assistants such as Amazon’s Alexa, which are subsequently utilized to collect more profitable data.
Google pioneered surveillance capitalism, and it remains the leader. However, it didn’t take long for other firms to grasp the significance of this new market in personal data. After all, once Google began using data to improve the accuracy of targeted advertisements, it went from losing money to seeing a 3,590 percent gain in revenue in only four years.
Facebook was the first to follow in Google’s footsteps, and it is the only company that can compete with Google in terms of total data gathered. In a 2015 study, researchers from the University of Pennsylvania examined the top one million most popular websites. They discovered that 90 percent of them leak personal data to an average of nine other sites, where it is monitored and exploited for commercial purposes. Of the websites that leak data, 78 percent transmit information to outside domains owned by Google, while 34 percent send it to domains owned by Facebook.
Facebook, like Google, sells advertisers targeted data that includes email addresses, contact information, phone numbers, and website visits from across the internet. In 2012, Facebook included a brief mention of this new monitoring policy in a terms-of-service agreement so long that few people were likely to read it all. Illegible contracts of this kind are a standard strategy of surveillance capitalism.
Such tracking is not restricted to internet browsing, however. Other research has found that many apps sold for Google Android smartphones contain trackers that leak personal information even when the apps are not in use. Unsurprisingly, Android smartphones, like other “smart” gadgets on the market today, supply a continual stream of location and behavioral data.
How did we end up here? Why does using the internet or digital products now effectively imply consenting to intrusive surveillance by unknown parties?
Prior developments in capitalism aided in the relaxation of restrictions and the transformation of mindsets for the online age.
Surveillance capitalism is a contemporary story. However, in order to comprehend its growth and current supremacy, we must go back to the 1970s and 1980s. During this period, the laws of capitalism changed dramatically.
Prior to the 1970s, capitalism was accompanied by a set of laws and regulations known as the double movement, which were intended to safeguard society against capitalism gone wild.
According to economic historian Karl Polanyi, the double movement was built into the capitalist system to ensure that the institutions involved did not harm labor, land, or money. Like Adam Smith and countless economists before him, Polanyi recognized capitalism’s potentially harmful impulses. Greed and power-mongering can have disastrous consequences if left unchecked, and the double movement was created specifically to counter these tendencies.
Nonetheless, two important voices rose to the forefront of economic policy in the 1970s, both arguing that the double movement was unnecessary: the Austrian economist Friedrich Hayek and the American economist Milton Friedman. These two men preached the gospel of a self-regulating free-market economy, free of vexing things like rules and regulations that only served to limit the capitalist enterprise’s limitless potential.
Hayek and Friedman both earned Nobel Prizes. This recognition validated their ideas, which is likely why they were soon implemented around the world. In the United States, double-movement regulations were gradually repealed, first under Jimmy Carter’s administration and later under Ronald Reagan’s. In Europe, free-market capitalism was viewed as the ideal antidote to communism and tyranny.
However, it is no accident that social and economic inequality has reached dangerously high levels in the years after the breakdown of the double movement. Unprecedented sums of money have been moved to the top income categories in recent decades. The International Monetary Fund went so far as to label this unequal accumulation of wealth a danger to stability in a 2016 study.
Surveillance capitalism flourishes in an uncontrolled corporate environment. The inventor Thomas Edison understood what others, notably the sociologist Émile Durkheim, had observed: the principles of capitalism become the principles of society as a whole. If Google is successful, then Google must be right and good. And if surveillance capitalism succeeds under the self-regulating principles of free-market capitalism, it, too, must be moral and beneficial.
Early fears about online privacy were put to rest in favor of lax monitoring regulations.
The rise of surveillance capitalism has not gone unnoticed; many well-informed people are concerned. What’s remarkable, looking back, is how rapidly such fears can vanish and be replaced with acceptance.
Let’s look at cookies to better understand the problem. Unlike their delectable baked namesakes, the cookies on our computers are not welcomed with open arms: they follow us around the internet. The Federal Trade Commission (FTC) began taking steps in 1996 to limit the amount of personal information that cookies leaked. Against the interests of marketers, the FTC proposed a simple automated system that would place control of personal information in the hands of the user by default.
When it came to creating and safeguarding internet privacy, the FTC recognized that self-regulation was not ideal. And, in 2000, they were on the verge of enacting legislation that would make the laws of internet business identical to those of offline trade. Unfortunately, those intentions were thwarted by the events of September 11, 2001.
Following the attacks, the US government did not tighten internet privacy rules; instead, it went the other direction, enacting the Patriot Act and the Terrorist Screening Program, which substantially eased monitoring controls. The CIA and the NSA, in particular, stepped up their efforts to monitor internet traffic. And, of course, they went to Google for assistance.
Google collaborated with the NSA and the CIA in 2003 to improve search technology for the organizations. The tools provided by Google enabled them to analyze massive amounts of data, discover behavioral trends, and forecast future behavior.
As it turns out, Google’s rich mine of personal data contains exactly the kind of information that both advertisers and law enforcement agencies are willing to pay top dollar for. After securing contracts with the NSA and the CIA in 2003, Google maintained a mutually advantageous relationship with the intelligence community. In 2010, former NSA director Mike McConnell spoke of the need for a “seamless” partnership with Google to ensure that data flowed freely.
This brings us back to cookies. According to a 2015 study, browsing the top 100 most popular websites would leave your computer with almost 6,000 cookies. The study also found that 83 percent of those cookies came from third parties rather than from the websites actually visited. What makes this possible? Google’s tracking infrastructure was found to be running on 92 of the top 100 websites.
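The first-party versus third-party distinction these studies rely on can be made concrete with a short sketch. Assuming we have the list of cookie domains observed during a single page visit (the site and domain names below are purely illustrative), a cookie counts as third-party when its registrable domain differs from that of the site being visited:

```python
# Minimal sketch: classify cookies observed during one page visit as
# first-party or third-party by comparing registrable domains.

def registrable_domain(host: str) -> str:
    """Approximate the registrable domain as the last two labels.
    (A real tool would consult the Public Suffix List instead.)"""
    parts = host.lower().lstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host.lower()

def classify_cookies(visited_host: str, cookie_domains: list[str]) -> dict:
    """Count first- vs. third-party cookies for a single page visit."""
    site = registrable_domain(visited_host)
    counts = {"first_party": 0, "third_party": 0}
    for dom in cookie_domains:
        key = "first_party" if registrable_domain(dom) == site else "third_party"
        counts[key] += 1
    return counts

# Example: a news site setting one of its own cookies plus two trackers.
result = classify_cookies(
    "www.example-news.com",
    [".example-news.com", ".doubleclick.net", ".facebook.com"],
)
```

Real measurement tools perform this comparison against the Public Suffix List, since the two-label heuristic misclassifies domains such as co.uk; the sketch keeps the simpler rule for clarity.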
Google’s Street View and Glass operations are excellent illustrations of how indignation may be transformed into acceptance.
Initial fears about cookies’ ability to follow users around the internet have obviously faded. And when we examine how surveillance capitalism came to be, we see a recurring theme: when the invasive tactics of surveillance capitalists are revealed, there is immediate anger, which gradually gives way to reluctant acceptance.
Unfortunately, this plays straight into the hands of businesses like Google and Facebook, which expressly want the public to feel that their tactics are unavoidable.
You may have noticed the strange-looking Google vehicle with a 360-degree camera protruding like a periscope. But perhaps you weren’t aware that such vehicles were capable of more than just snapping photographs.
A German federal agency discovered in 2010 that Google Street View cars were quietly scanning Wi-Fi networks and gathering personal information from any unencrypted transmissions they encountered. Naturally, this sparked widespread outrage. And, following investigations in 12 countries, Google was found to have violated laws in at least nine of them.
Prosecuting cases like this, however, is not so simple. The major difficulty is that surveillance capitalism’s activities are unprecedented, so there are typically no regulations that expressly address privacy and boundary concerns in the digital world. As you may be aware, Google’s Street View product has only grown in scope.
There was also public outrage in 2012 when Google Glass, a wearable device that allowed Google to peer into private places, was introduced. The negative response prompted a rebranding and the 2017 release of the “Glass Enterprise Edition,” which positioned the device as being created just for the workplace, where users may already have reduced expectations of privacy.
However, Google has already found a remarkably effective way to infiltrate the nooks and crannies of private life. Niantic, a gaming company founded inside Google and later spun off under its parent, Alphabet Inc., released Pokémon Go in 2016. The game places virtual Pokémon creatures in the real world, where players capture them using the device’s camera and GPS data. These Pokémon can be found in people’s backyards and inside businesses, places Street View cameras may not yet have recorded.
The game became an overnight sensation. It is also a fantastic method of gathering personal information: its requirement for contact information and its ability to “detect accounts on device” have nothing to do with gaming and everything to do with surveillance capitalism.
Surveillance capitalism’s data collecting is becoming increasingly granular.
You could be thinking at this point, “Sure, Google gathers all kinds of data, but I don’t have anything to conceal, so why should I care?”
Even if you’re prepared to live your life as an open book, if you believe in democracy or free choice, you should be concerned. As we’ll see, gathering information about people’s whereabouts and browsing patterns is simply one stage in the process.
Google’s goals are broad. The company wants to know everything about your past and present circumstances so that, instead of your asking Google a question, Google can “know what you want and tell you before you ask the question.” At least, this is how Google’s chief economist, Hal Varian, has characterized the company’s ambitions.
This entails delving into the specifics of your desires and needs, as well as your emotional state. The field of emotional analytics, also known as “affective computing,” has advanced to the point that even your facial microexpressions can be detected and instantly classified as expressing a particular emotional state. A single photograph of your face, of course, may also disclose your age, race, and gender.
Realeyes is one of the most advanced organizations in this sector, with a data set of more than 5.5 million annotated frames of over 7,000 people from around the globe, all in an effort to build the world’s greatest collection of expressions, emotions, and behavioral signals.
All of these variables provide a wealth of information for advertisers. A market research analysis on the subject notes that knowing a customer’s real-time emotional state can help firms sell their products and hence generate income. Or, as the Realeyes website puts it, “the more individuals feel, the more they spend.”
Body posture and gestures may also provide information about what someone is doing and how they are feeling. This is why Google is working on digitally augmented textiles that can be made into clothing and worn by humans. This will add a new level of granular behavioral data to Google’s ever-expanding database.
And if a person is active on social media, their posts and news feed can be analyzed to produce an accurate estimate of how they are feeling. Once advertisers and other surveillance capitalists know what you’re doing and feeling, they know exactly when to nudge you in the desired direction.
But how can surveillance capitalists actually change people’s behavior?
Surveillance capitalists seek to discover critical points of sensitivity in order to improve the likelihood of purchase and behavior change.
Given that a sizable section of Silicon Valley is dedicated to studying behavioral data, it stands to reason that corporations such as Google and Facebook would be interested in the field of behaviorism.
After all, behaviorism argues that free choice is an illusion, and that every behavior can be explained by the events that precede it. When you expose someone to certain stimuli, they will respond in a specific way.
B. F. Skinner, a Harvard University professor and a pioneer of both behavioral analysis and utopian thought, is a towering figure in behaviorism. In Skinner’s worldview, there is no such thing as freedom or free will, and anyone who believes otherwise is simply ignorant.
Every action, according to Skinner’s style of extreme behaviorism, can be mathematically explained by behavioral data. And if someone’s behaviors appear to defy explanation, it’s simply because we haven’t gathered enough relevant data.
Skinner died in 1990, so he never saw the day when so many people carried smartphones, lived with smart speakers, and used virtual assistants. These are exactly the kinds of instruments Skinner wished he could have used to observe and experiment on his subjects.
Make no mistake: Google and Facebook are already conducting experiments in accordance with Skinner’s recommendations. The optimal condition for reliable behavioral analysis, according to the professor, is when the subjects are unaware of who is performing the experiment and collecting the data.
Facebook has admitted to experimenting with the content of people’s news feeds, and one way to see Pokémon Go is as a Google-run experiment to determine whether people can be digitally steered to where they are directed and then induced to spend money.
At the height of the game’s popularity, companies could pay to become hotspots — locations where players could be guaranteed to locate the virtual animals they were looking for. These companies claimed revenue increases of up to 70%.
Surveillance capitalism’s invasive, all-powerful future does not have to be regarded as unavoidable.
Two notable novels appeared at the close of the 1940s. B. F. Skinner’s Walden Two, published in 1948, depicted his vision of a utopian society in which radical behaviorism was understood and embraced, and people stopped worrying about the ridiculous idea of personal freedom. The second was George Orwell’s 1984, published in 1949, which likewise depicted a world devoid of human freedom. But rather than portraying it as a paradise, Orwell plainly regarded it as a dystopia.
One of these novels, Walden Two, was harshly criticized on its release, while the other remains a brutally current warning of what our society may look like if we grant people in positions of authority too much power.
Despite Orwell’s warnings, the proponents of surveillance capitalism want to be in our homes, vehicles, stores, and workplaces, listening in on everything we say and do. From their point of view, this would provide for a plethora of conveniences.
One of the better-known illustrations of this utopian vision is the automated vehicle contract: if you fail to make a car payment, your vehicle is simply rendered inoperable. There’s no need for tedious paperwork or the bother of sending someone to check on you. Everything is automatable.
Never mind the obvious concerns about the driver and how a sudden halt like this might separate a parent from her kid or prevent someone from fleeing a perilous situation. Consider how much red tape we’d be able to avoid!
Surveillance capitalists prefer to depict these types of automated contracts as unavoidable. But, in reality, none of these things are unavoidable.
We just got a clearer look at what Facebook considers regular operating practice. It was discovered in 2018 that they had handed significant quantities of personal data to Cambridge Analytica, a business that utilized the data to microtarget voters with a disinformation campaign.
This has prompted some disturbing questions about the state of democracy today, as well as the risks that arise when the keepers of our information are given free rein to gather whatever they want from us and use it however they see fit.
Surveillance capitalism isn’t “inevitable,” and people aren’t eager to give up their privacy for the sake of convenience.
So, how can we combat surveillance capitalism?
First and foremost, people must understand the actual scale of what is going on behind the scenes and that there are other possibilities.
According to surveys performed in 2009 and 2015, between 73 and 91 percent of consumers reject the concept of tailored advertising when informed about how their personal data is obtained.
There is now a vast asymmetry of information about how businesses acquire personal data, what types of data are collected and processed, and what that information is used for. When this becomes clear, outrage follows.
It’s also critical to fight back right now. A generation of individuals is growing up who have never known a world without cellphones. This generation is not just more prone to adopting surveillance capitalism’s activities; they are also particularly sensitive to the psychological repercussions of these practices.
Sean Parker, Facebook’s founding president, revealed in 2017 that Facebook, like other social media sites, uses behaviorist methods such as variable reinforcement to keep users chasing dopamine hits and, more crucially, hooked to their news feeds.
Surprisingly, this results in the same negative psychological symptoms as addiction and withdrawal. In addition to addiction, the near-constant internet exposure that today’s teens are subjected to has been shown to create feelings of perplexity, discomfort, boredom, and loneliness.
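The variable-reinforcement technique Parker describes can be sketched in a few lines. In a variable-ratio schedule, a reward arrives after an unpredictable number of actions, which behaviorist research found far more habit-forming than any predictable schedule. The simulation below is a hedged illustration, not any platform’s actual code: it rewards each action with a fixed probability, so the gaps between rewards vary randomly.

```python
import random

def variable_ratio_schedule(mean_ratio: int, n_actions: int, seed: int = 0) -> list[bool]:
    """Simulate a variable-ratio reinforcement schedule: each action is
    rewarded independently with probability 1/mean_ratio, so rewards
    arrive after an unpredictable number of actions."""
    rng = random.Random(seed)  # seeded only for reproducibility
    return [rng.random() < 1.0 / mean_ratio for _ in range(n_actions)]

# 100 feed refreshes, with a reward (new likes, a notification) arriving
# roughly every 5th action on average -- but never predictably.
rewards = variable_ratio_schedule(mean_ratio=5, n_actions=100)
```

The unpredictability is the point: on a fixed schedule, users learn when checking is pointless, whereas a variable schedule sustains compulsive checking even through long dry spells.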
According to studies, “Facebook use does not improve well-being,” and the same may be said of surveillance capitalism in general. This, however, does not have to be the case.
Around 2000, researchers at Georgia Tech were working on the Aware Home. This was a vision of “ubiquitous computing” not unlike the “smart home” that surveillance capitalists are now bringing to fruition. The key distinction is that the Aware Home was designed with user privacy in mind.
Users would have control over the data they generate. It upheld the age-old idea of a person’s home serving as a sanctuary and a place where they may be free from monitoring.
Unfortunately, the events of September 11, 2001, uprooted that notion a year later. But that doesn’t mean we should abandon this worthy goal.