Nowadays, when we reflect on the ethical issues raised by large social networks, we mostly think about privacy and data collection. Yet this is not the only problem, and it is unlikely to be the most important one. What we fail to grasp is that tech companies use data to build models that predict our actions and influence our emotional states and behaviours. In fact, the digital infrastructures of Facebook and Google may already have taken hold of our brain processes without us even realising it. How did we get here? Perhaps Edward O. Wilson, Professor at Harvard, was right when, ten years ago, he wrote: “The real problem of humanity is the following: we have Palaeolithic emotions, medieval institutions, and god-like technology”. While the god-like powers of technology have grown considerably since then, the Palaeolithic impulses of our brain have remained the same.
The evolution of social media
Those who watched The Social Network, David Fincher’s 2010 film about the story of Facebook and its founder Mark Zuckerberg, will remember that everything started when a brilliant Harvard student, dumped by his girlfriend, built a website overnight that pulled the photos of female students the university had uploaded online and asked visitors to rate their attractiveness. It was precisely the success of that site, Facemash, that convinced Zuckerberg to pursue the idea of offering Harvard students a tool to socialise. The first milestone of what would become the most visited social network in the world came in January 2004, when Zuckerberg registered the ‘thefacebook.com’ domain.
Initially, though, social media were very different from what they are today. The platforms were born to help users connect with friends. This changed as they tried to improve the user experience with a series of developments that altered the way news spreads, facilitated the circulation of fake news, and fuelled anger and polarisation. 2006 brought Twitter, which created a constant flow of 140-character updates and turned social media into a source of information. Between 2009 and 2012, features such as Facebook’s Like and Share buttons and Twitter’s Retweet created a popularity metric for content and increased the speed at which news travelled. “When we were making the Like button our entire motivation was ‘Can we spread positivity and love in the world?’ The idea that fast forward to today and teens would be getting depressed when they don’t have enough likes or it could be leading to political polarisation was nowhere on our radar,” its inventor, Justin Rosenstein, stated years later.
Are we the ones choosing or do they choose for us?
These control and communication processes are profoundly changing the rules of the world and the codes of conduct of human beings. This is exactly what Shoshana Zuboff means when she talks about surveillance capitalism: something that “takes ownership of the human experience, using it as a raw material and transforming it into behavioural data. This is an unprecedented market, and the fact that Internet companies are among the richest in the history of humanity is confirmation of that. They know if people are sad, if they are looking at pictures of their ex-partner, and with whom they interact or wish to interact. They have more information about us than we could ever have imagined possible”.
The Social Dilemma, a Netflix documentary, tried to reconstruct how social media operate through interviews with former employees of Silicon Valley companies. The undisputed protagonist is Tristan Harris, a former Google design ethicist and the president of the Center for Humane Technology, a non-profit organisation focused on the ethics of consumer technology. He recounts how, as a member of the Gmail team, he realised there was a particularly frustrating aspect to his job: no one was addressing the fact that Gmail’s design was making email addictive. Something as simple as choosing one graphic design over another would affect the lives of billions of people. And graphics are not the only things that keep us glued to the screen. The act of scrolling, the three dots that tell us someone is typing a reply, and the tags inside pictures are all features that exploit the cognitive processes of our brain.
“In the same way drugs act, the use of social media activates the reward system in the brain.” – Stefano Canali
According to a principle of economics, when we don’t have to pay for a product, it means that we are the product. In the case of social networks, the product in question is our attention. Attention is a limited resource, and as such it can be priced and sold, just like any other good. The vast majority of social media (and other types of media too, such as several television channels and newspapers) centre their business model on selling portions of our attention to advertisers. As a result, social networks must ensure a continuous and constant level of attention for their business model to work. In other words, they have to make sure that people spend as much time as possible on their platforms.
However, this goal can be achieved in different ways, some of which are rather problematic. As Stefano Canali, Coordinator of the Neuroethics School at the International School for Advanced Studies (SISSA) in Trieste, explains, “These machines, just like slot machines, are intentionally designed to create addiction”. “In the same way drugs act, the use of social media activates the reward system in the brain, the mechanism that sparks interest in things and stimulates the desire to work towards obtaining them”. When we are rewarded for a certain action, our brain creates an association between the two. In fact, the connection links not just those two factors (the reward and the action) but all the elements present in the circumstances in which we were rewarded: the feelings we experienced, the features of the environment we were in, the people we were with, and much more. The mechanism is the same whether it involves the taste of a steak, the pleasure of an orgasm, the euphoria caused by a drug, or the satisfaction of a Like received on Facebook.
“Through this learning process, the presence of stimuli associated with a behaviour that triggered my brain’s reward system makes me want to repeat that same behaviour,” explains Canali. If, in the past, the notification of a Facebook Like brought gratification (for example, in terms of social approval), then every time I see that type of notification my brain will prompt me to check it, expecting to be rewarded again. The algorithms of social networks, designed to repeatedly feed us content we find rewarding, take care of the rest. A final element of this vicious cycle is the phenomenon of so-called cognitive overload: continuous exposure to new and potentially unlimited stimuli, such as those provided by social platforms, causes a kind of fatigue that affects our cognitive functions, eroding our capacity for self-control. “The result,” concludes Canali, “is the inability to consciously manage our attention, hence our perception and cognitive systems”.
“They tell you that we will adapt, just like with everything else,” explains Tristan Harris. “However, in this case it’s easy not to see that there is something new going on. Nowadays algorithms control us more than we control them.” One consequence is that if you Google “Climate change is”, you see different results depending on where you live and what Google knows about you from your search history. Regardless of the truth, you might see the search box fill in with “Climate change is a hoax” or “Climate change is disrupting the planet”. The same mechanism operates on social media. On Facebook, for example, even two very close friends who share hundreds of contacts on the platform see completely different news feeds. The reason is that the content shown to a person is chosen on the basis of their behaviour on the platform. If on various occasions you liked or shared a certain type of content, the algorithm will tend to show you more of the same. The end result, given that everyone builds their own reality partly from the content they see, is increased polarisation, which also happens to be an extremely effective way to keep people online.
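This feedback loop can be sketched as a toy ranking function. Everything here is invented for illustration (the field names, the topics, the scoring rule) and bears no relation to Facebook’s actual system; the point is only to show how ranking by past engagement narrows what a user sees next.

```python
from collections import Counter

def recommend(history, candidates, k=3):
    """Toy engagement-based feed: rank candidate posts by how often
    the user has already liked posts on the same topic."""
    liked_topics = Counter(p["topic"] for p in history if p["liked"])
    # Highest affinity first; sorted() is stable, so ties keep their order.
    return sorted(candidates,
                  key=lambda p: liked_topics[p["topic"]],
                  reverse=True)[:k]

# A user who liked two posts on one topic now sees mostly that topic,
# which in turn tends to generate more likes on it: the loop closes.
history = [{"topic": "hoax", "liked": True},
           {"topic": "hoax", "liked": True},
           {"topic": "science", "liked": False}]
candidates = [{"id": 1, "topic": "hoax"},
              {"id": 2, "topic": "science"},
              {"id": 3, "topic": "hoax"},
              {"id": 4, "topic": "sport"}]
print([p["id"] for p in recommend(history, candidates, k=2)])  # [1, 3]
```

Even this crude rule already produces a feed dominated by the topic the user engaged with; real systems weigh far more signals, but the self-reinforcing structure is the same.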
Fake news and polarisation
“We created a system that favours fake news because it brings companies more money than real news. We got to the point that we no longer know what is real and what is fake,” states Tristan Harris in The Social Dilemma. A piece of information does not necessarily have to be true to have all the characteristics that satisfy the emotional needs of its final recipients, who, taking it as true, will tend to appreciate and share it, facilitating its circulation. It has been shown, for example, that fake news on Twitter spreads six times faster than real news. A 2017 study led by William J. Brady and other researchers at New York University analysed half a million tweets and found that every “emotional” word used in a tweet increased its likelihood of being shared by 20%. Another 2017 study by the Pew Research Center showed that posts expressing disagreement or indignation received almost twice as much engagement in terms of interactions such as likes, comments, and shares.
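To see how quickly a 20% boost per word adds up, here is a small back-of-the-envelope calculation. It assumes, purely for illustration, that the effect compounds multiplicatively across words; that compounding is an assumption of this sketch, not a claim of the study.

```python
def relative_share_rate(n_emotional_words, boost_per_word=0.20):
    """Sharing likelihood relative to a neutral tweet, assuming each
    emotional word multiplies the baseline by (1 + boost_per_word)."""
    return (1 + boost_per_word) ** n_emotional_words

for n in range(4):
    print(n, round(relative_share_rate(n), 2))
# 0 words -> 1.0x the baseline, 1 -> 1.2x, 2 -> 1.44x, 3 -> 1.73x
```

Under this assumption, a tweet with just three emotional words would be roughly 73% more likely to be shared than a neutral one.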
To understand how we got here, we should take a step back. “The arrival of the Internet and social networks revolutionised the way we communicate with others and how we access information. Firstly, while in the past the editor, the journalist and the expert would filter any news before it reached the general public, with the arrival of social media each of us can not only access a great variety of information but even create our own. Furthermore, we access this information through digital devices in an extremely fast manner, so the mode of interaction with the content changes. Lastly, another big change is the sheer quantity of information. The advent of the Internet was a landmark because it allowed a huge part of the world’s population to access information on different topics, but there is usually little time available to process these data, and as human beings we have a limited attention span,” explains Fabiana Zollo, a researcher at Ca’ Foscari University of Venice.
Social media users tend to seek information that supports the theses they already believe in, ignoring opposing views. This creates so-called echo chambers, spaces in which opinions are confirmed and conform to one another. The inevitable result is progressive polarisation. According to Davide Bennato, a digital media sociologist, polarisation is indeed one of the central problems of social media. “The issue isn’t just with the platforms, but also with the people who use them without being aware, or with those who know them all too well and deliberately misuse them in unethical ways. We cannot just blame algorithms when anti-vaxxers or flat-earthers get enchanted by other conspiracy theories, because that comes down to how these people view the world in the first place. If they received information about new conspiracies some other way, they would still be interested, independently of algorithms.”
According to various studies conducted at Ca’ Foscari University of Venice, our tendency to confine ourselves in echo chambers, interacting with those similar to us and ignoring information that clashes with our world view, occurs mostly on social media whose algorithms select information and content based on users’ past choices and preferences. “Algorithms shouldn’t be demonised, though. They are there to show us the content we like, and that’s something we demand as users,” clarifies Zollo. “We should try to understand whether this is a mechanism we want to gradually reduce, interrupt, or not. It would be important for users to receive more transparency from the platforms about how these algorithms operate and how they filter content, but I don’t think the whole system should necessarily be demonised.”
“We got to the point that we no longer know what is real and what is fake.” – Tristan Harris
Is it too late to go back?
Considering that the main existential threat is not technology itself but its ability to bring out the worst in society, we might still be able to change things. “Think about plastic, for example. Nowadays it is demonised when we discuss pollution, but the issue isn’t plastic itself; it’s the fact that we have been using it to make single-use products. If it were used instead to replace or reduce waste materials with a higher social cost, it would suddenly cease to be an issue,” explains Bennato. “The same goes for social media. People don’t seem to be aware that social media, like all other technological tools, have the potential to damage their lives. People would be more empowered if they were better informed about this. Obviously, this is also a political matter. What I find strange is that there is no public debate going on about this.”
Despite some limitations, The Social Dilemma certainly deserves credit for starting a discussion on the topic. A survey conducted by Davide Bennato, with 550 respondents, showed that many people liked the documentary despite already being aware of the issues (73%), and that it allowed a small group of people to become familiar with them (7%). Overall, more than half of those who watched the documentary (56%) say they are pessimistic about the impact of social media on society. The majority of those who did not watch it (91%) declared that they wish to, largely out of curiosity sparked by the debate on social media platforms (52%). Among those who have not yet watched it, however, a large group looks to the future with optimism (45%), alongside a more pessimistic group (41%).
Schools should be a critical space for debating all this. Stefano Canali agrees: “If social media suddenly disappeared there would be serious consequences, also in terms of the communication we need. However, given that the political prospects are not great, especially because of the enormous power these big companies have acquired, what I try to do is work on personal resources. Schools should teach strategies to improve self-control, help students navigate choices about what really interests them, and train them to manage their mental resources, tapping into them when needed.”
“What I find strange is that there is no public debate going on about this.” – Davide Bennato
What, then, is the real problem with social networks? When asked this seemingly simple question, all of The Social Dilemma’s interviewees reacted with an embarrassed smile, as if the answer were so complex and multifactorial that it couldn’t be captured in a single sentence. Is it the algorithms? The technological and design features built to create addiction? The business model? Polarisation? Misinformation? All of these factors together, or their interaction with each other? The only point most seem to agree on is the lack of regulation and the absence of a much-needed public discussion that highlights the risks of these platforms and seeks possible solutions. Ultimately, what is at stake may be not only our data and privacy but also our freedom and democratic values.