The real products of social media

Recent developments in the world of social media have raised questions about its power and influence. Big corporations like Google and Facebook have been summoned to court over privacy breaches, and there has been much discussion about the circulation of misinformation on Twitter and YouTube. For those wondering what these developments are and whom they affect, we have gathered a small overview below.

The issue with ‘free social media’

The main issue with social media is that it appears objective, but in fact it is not. Most social media platforms, search engines and email services don't cost us money, so we pay for them in a different way: with our personal information. As stated in the Netflix documentary The Social Dilemma: 'If you're not paying for it, you are the product'. We might not always realize it, but what makes social media platforms and search engines profitable is their distribution of personalized content. A business like Google can be seen as operating with a double agenda: on the one hand its online service does what it is made to do, namely show you the search results you are looking for; on the other hand, personal user data is sold to advertisers, and the same profiling makes the search results differ from person to person. To give an example: if one person searches for 'tigers in the wild', they might get results focused on where tigers can be found in the wild, while someone else might only see results related to wildlife endangerment. The same goes for information about politics, climate change, and even wars (and war crimes).

Why should we worry?

The problem lies in the fact that we never explicitly get to see how content is curated for us, and many who use the internet as their main source of information think they see what everyone else sees. This is not the case. You might have heard the term "filter bubbles". These bubbles can be imagined as personalized information ecosystems or 'digital echo chambers', in which users are shown content similar to what they have looked at or clicked on before. They are created to keep people using a web service or social media platform as much as possible. The end goal is to have the user, you, click on ads and thereby become profitable for the platform.
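To make this feedback loop concrete, here is a minimal sketch in Python of how a click-driven recommender could narrow what a user sees. Everything in it is invented for illustration (the topics, the scores, the update rule); real platforms use far more elaborate models, but the loop is the same: clicks strengthen a profile, and the profile filters the next round of content.

    import random

    # A toy catalogue: every item belongs to one topic.
    TOPICS = ["sports", "politics", "science", "music", "conspiracy"]
    catalogue = [{"id": i, "topic": random.choice(TOPICS)} for i in range(500)]

    # The user's profile: how strongly each topic attracts their clicks.
    profile = {topic: 1.0 for topic in TOPICS}

    def recommend(n=5):
        # Rank the catalogue by the user's current topic weights.
        ranked = sorted(catalogue, key=lambda item: profile[item["topic"]], reverse=True)
        return ranked[:n]

    def click(item):
        # A click reinforces the clicked topic: the feedback loop.
        profile[item["topic"]] += 0.5

    # Simulate a user who always clicks the first recommendation.
    for _ in range(20):
        click(recommend()[0])

    # After a few rounds, one topic dominates everything the user is shown.
    print(profile)
    print([item["topic"] for item in recommend()])

After twenty simulated clicks, the recommendations consist of a single topic: the bubble has closed. Note that nothing in the sketch is malicious; the narrowing simply falls out of optimizing for clicks.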

The rise of pervasive, embedded filtering is changing the way we experience the internet and ultimately the world
— Eli Pariser

Something that deserves more emphasis here is that, besides being designed to collect your personal data, social media platforms also borrow from cognitive science. In case you weren't aware: it is not just your likes and shares that get registered. Big platforms can analyze how much time you take to read certain articles, what you look at when you search, how long you look at it, and where on the screen your attention is focused. While this metadata might seem relatively 'non-sensitive', research has shown that when it is combined with user identity (built up from the content a user produces), users within a huge network can be identified with over 95% accuracy. These technologies can identify you by the way you use a website, the amount of time you spend there and the kind of information you are interested in.

The result is that the frequency of notifications, the placement of buttons and the infinite scroll all play on your senses. When new content pops up every time you refresh the screen, it creates the feeling that you are missing out whenever you are not spending time on the site or app. This activates part of your brain's reward system and makes it more likely that you keep engaging with the site, getting you addicted to checking for updates. The same happens with advertisements, which are designed to blur the line between personalized ads and regular content and may persuade you into buying something.
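As a hedged illustration of why such 'non-sensitive' metadata is identifying, consider the following toy sketch. The feature names and numbers are entirely made up, and the research mentioned above works with far richer data and real networks; the principle, though, is the same: each user is reduced to a handful of behavioural measurements, and an 'anonymous' session is simply matched to the nearest known profile.

    import math

    # Invented behavioural profiles: (average seconds per article,
    # scroll actions per minute, average session length in minutes).
    known_users = {
        "user_a": (45.0, 12.0, 30.0),
        "user_b": (120.0, 4.0, 10.0),
        "user_c": (20.0, 25.0, 60.0),
    }

    def distance(p, q):
        # Euclidean distance between two behavioural feature vectors.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def identify(session):
        # Match an anonymous session to the closest known profile.
        return min(known_users, key=lambda name: distance(known_users[name], session))

    # A visitor who reads slowly, scrolls little and leaves quickly...
    print(identify((115.0, 5.0, 11.0)))  # -> "user_b": no name or login needed

With three users and three features this is trivial, but the same matching scales to millions of users once enough behavioural dimensions are recorded, which is why behaviour alone can substitute for a name.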

On a social media platform, money can be made on every action you take. Big media companies therefore try to influence our behaviour to maximize those profits. Crucially, this is done implicitly, so that you as a user do not realize it. If you walk into a physical store, you know you have entered it with a high likelihood of buying something, or because you need something. But when you scroll on a social media platform, your actions don't carry such an obvious price tag. Your (meta)data is worth a lot to these companies, because they can sell it to third parties. But what does all this influencing do to our beliefs?


Rabbit holes, misinformation and public sentiments

Since personalized content is what companies with 'free' services trade in, it is important to link this to contemporary issues. As an example, we could try to find some information on YouTube about, say, the coronavirus, as was done in this Dutch TV show. You click on a video explaining that not all tests are accurate. After you finish watching, you look at the recommended section, or, if you have autoplay enabled, just wait until the next video starts. You are then referred to a video claiming that wearing face masks makes no difference, on to a video about how the state is lying to you and using the virus as a means of control, and finally on to a video that falsely alleges a vast conspiracy of child abuse by politicians.

This phenomenon is referred to as a rabbit hole. It is easy to get sucked into the recommended section: after watching a dozen videos, you get more and more specific content built around the same theme, taking you one step further every time. The biggest issue is that not all information out there is based on facts. It seems as if facts have become debatable, depending on whose team you are on. Research has shown that sentiments can shift from extreme to radical over time.

Another way false information spreads is through fake news posts on social platforms, for instance when a false claim gets shared by many users. News channels can even deploy bots to circulate this type of content: networks of fake accounts whose posts and reactions manipulate the content people get to see and influence their beliefs. As mentioned before, this happens in discussions around the coronavirus, but it also happens at the state level, as it did during the American elections and the referendum that led up to Brexit, both in 2016. At the national and global level, such uses of the internet are seen as causes of increased polarization.


Responsibility and actions

But what can we do to prevent this sort of thing? Earlier this year, a collective of big brands, among which Unilever, Starbucks, Ben & Jerry's, Honda and Coca-Cola, recognized the power of social media platforms to create segregation and polarization, and called for a boycott of these platforms. The brands joined forces in the "Stop Hate for Profit" campaign, which was used to pressure Facebook and other social media outlets into better regulating offensive content. During the US elections of 2020 and the conflicts President Donald Trump caused before Joe Biden's inauguration, big media platforms finally acknowledged their power: Twitter and Facebook placed warning labels on tweets believed to make misleading claims about the electoral process and eventually closed down his accounts. But can we expect these companies to take responsibility in day-to-day situations, outside of the spotlight? And, as German Chancellor Angela Merkel has wondered: is it even desirable that Big Tech takes these decisions instead of lawmakers?

An answer to the question of what we can do better often ends up being linked to education about these subjects. José van Dijck, new media author and distinguished university professor in media and digital society at Utrecht University, argues for "media literacy" as an antidote to polarization. In an interview with The Network Pages she states:

Everyone should know a little about how social media work, how they manipulate users to some degree. You must realize that free is never free: you always pay with your own data, and that has consequences for your privacy. This is what I call "media literacy". In fact, I see education and media literacy education as a vaccine against many forms of hate speech and polarization.
— José van Dijck

An example of an initiative that has tried to combat online hate speech and polarization is the Dutch initiative DROG. Their team consists of researchers, journalists, designers and innovators who are looking for the most effective ways to stop the spread of fake news and misinformation. Their work is aimed at everyone who would like to protect themselves against misinformation. Besides organizing workshops, developing educational material and giving lectures, they have also developed a game in which you put on the mask of a fake news producer to mislead your audience, build a following and exploit societal tensions to achieve a political goal. This teaches people to recognize the effects of misinformation.

What do you think?

Are initiatives like these doing enough to create awareness and solve the issues with social media? How should social platforms contest misleading information, conspiracies and segregation initiated by users? Should they be programmed differently? What should the boundaries be for what you can and cannot post? And who should determine those boundaries?

Help us think this through by commenting below!


Robin Jane Metzelaar, 15/01/21.



Curious about more initiatives like the ones mentioned in the text?

Check out the ones below!

- #thinkbeforesharing - EU campaign on identifying misinformation
- Metadebunk - debunking and discussing conspiracy theories by David
- Query Design (using your search engine for research) - by Richard Rogers
- Fix je Privacy, increasing your online safety (in Dutch) - by Bits of Freedom

Sources and further reading
