Do the media manipulate ordinary people or not?

Against digital manipulation

Algorithms are penetrating every area of life at an ever-increasing pace. I would like to approach this historic upheaval in twelve steps. The first is about how the attention industry gets hacked. In 2008, paranoia about sex offenders on the Internet had reached an all-time high. The same alarm boomed from every media outlet: the Internet is dangerous for children. Whether in the USA or in Europe, there was no escaping the topic. First MySpace was the focus, then Facebook. The message was clear: these newfangled and popular online services pose a serious threat to our children.

In the United States, Congress responded with bills like the Stopping Online Predators Act. Internet safety task forces were established. For me as a scientist, these were extremely frustrating times. I had compiled countless studies on the dangers facing young people into reports. I knew the data and could describe in grim detail the risks young people are exposed to and what can be done to prevent the worst. But the debate that broke out was not about facts or evidence. It was about fear. And for some, it was about stirring up fear of technology at the expense of reason. (Does that sound familiar to German ears?) I will never forget how a youth welfare activist in the USA reacted to my risk report: I should go look for other data, he said.

The mood among teenagers was mixed. Some were genuinely worried about the lack of safety on the Internet. They had heard the rumors. They had seen the TV show "To Catch a Predator" and believed that children were somehow being kidnapped out there. They wanted to be safe online and weren't sure how to protect themselves. But the vast majority did not dwell on this moral panic for long and simply rolled their eyes. After all, adults are always warning them about one danger or another.

Meanwhile, discussions arose in an online community called 4chan about how to "troll" adults with this topic. One has to know 4chan's origins: it was the creation of a fifteen-year-old American boy obsessed with anime and pornography who wanted to build a tool for other boys his age. It was in this community that meme culture emerged, and it took great pleasure in mocking adults' fears about online safety.

In 2008, Oprah Winfrey's daily talk show reached millions, especially middle-aged women, among them the parents of many 4chan kids. If something worried Oprah, it worried their parents too, and vice versa. Sexual violence, youth practices and worrying Internet stories were discussed regularly on the show. Her producers liked to use Oprah's online discussion forum to pick up new program ideas from fans. Somebody in the 4chan world had the idea of planting threatening messages on Oprah's "bulletin board" in order to troll the show's producers. What happened next is not entirely clear. In any case, Oprah eventually read the following live on her national broadcast: "Let me read something posted by someone who claims to be a member of a known network of pedophiles. He says he does not forgive and does not forget. His group has over 9,000 penises and they are all raping children. So I want you to know: they are organized, they have a systematic way of hurting children, and they use the Internet to do it."

Concerned parents may have felt a chill, but teenagers across their online networks collectively burst out laughing. "Over 9000" is a well-known meme reference. And "We don't forgive. We don't forget" is the motto of the network known as "Anonymous". Finally, the idea that online pedophiles were organized, complete with a teddy bear as mascot, as was later claimed, struck the youngsters as funny, even absurd.

Whoever trolled Oprah had managed to hack the attention economy. A TV star had been tricked into making herself ridiculous live on national television. At the same time, the episode demonstrated how easily the mass media can be manipulated.

Who controls Artificial Intelligence?

The second step is about accountability. Many of the technical systems we deal with are based on machine-learning algorithms. Search engines organize content by deciding what might matter most for any given query. Countless ranking and recommendation systems work in the background of our online experiences. And the influence of algorithmic systems is not limited to our information ecosystem: machine-learning systems are making their way into fields as diverse as criminal justice, credit scoring, housing, insurance, education and medicine. In a business world where everyone raves about Artificial Intelligence (AI), too many people see the accumulation of data as the solution to every problem.

AI is all the rage. Almost every start-up presents itself as the AI solution for something. Hardly anyone stops to evaluate what is actually being proposed. Surely some solution must emerge from all this data, right? Let's not kid ourselves: AI is the same as "Big Data", just in a new costume. Because too many Europeans made too much noise about the surveillance practices of "Big Brother", aka "Big Data", people now prefer to speak of AI, a label that conveniently obscures the data hunger of any technology worthy of the name. Then again, as one board member of a technology company explained to me, the whole world calls for AI because the opposite of artificial intelligence is "natural stupidity". Understood that way, AI sounds great, no matter what it is made of!

But if we really want to equip anything and everything with AI, the question is who will control these systems. What should accountability and liability look like? Here in Germany and in the EU, these important questions are at least being asked. My sincere thanks go to the artists and journalists, scientists, politicians and administrators who draw public attention to their urgency. At the same time, I would like to point out a blind spot. You think in terms of institutional power: governments and corporations, laws that can structure development. But you also have to reckon with those who have darker intentions, all the digital cowboys who purport to hold this technology accountable by manipulating these very systems. It is no longer just about the ability to make Oprah look ridiculous on national television. It is about dirty political campaigns, about advertisements disguised as journalism, about the strategic use of every available tool to manipulate any system, if only to prove that one can. Whatever the cost.

What has to happen for us to be able to hold accountable not only states and companies, but also vast networks of people who operate across different jurisdictions and use a variety of techniques to systematically undermine social cohesion?

Strategic silence

In the third step, I want to talk about amplification effects. In 2009, Terry Jones, pastor of a fifty-member congregation in Florida, wanted to spread the message that Islam was the devil. So he wrote it on a sign that he put up in front of his church. As for the global response, well, not too many people pass through Gainesville, Florida. And those who knew the man's reputation groaned and rolled their eyes. He, in turn, wanted more attention. So he threatened to burn the Koran.

Word of his plan leaked out, and civil society organizations issued press releases calling on the pastor to abandon the odious act. Local journalists got involved. Things began to escalate, especially once the local stories made it onto Yahoo News. The spiral of attention finally carried the pastor onto CNN, and Hillary Clinton, then Secretary of State, implored him to desist. All of this earned Jones worldwide attention, at least for a while. When it subsided, he renewed his threat. But this time the news media, having grown wiser, paid him no attention. So in 2011 he made good on his threat and actually staged a public burning of the Koran.

No representative of any major outlet appeared on the scene to watch the spectacle, but a little-known blogger did. And he opened the floodgates in the regional papers. Once the topic was discussed there, the national papers felt compelled to report on the controversy. And since national newspapers are translated into numerous languages, news of the repulsive act of an insignificant pastor with hardly any following soon spread worldwide. A spectacle always seems bigger when the "New York Times" pays attention to it. Eventually riots broke out around the world. One of them, in Afghanistan, left at least twelve people dead, including seven UN workers.

Journalists claim an exceptional professional status as the fourth estate. When describing how they make their decisions, they invoke their ethical standards and values. They claim they must report everything newsworthy for the sake of an informed civil society. But who decides what is newsworthy?

With my colleague Joan Donovan, I have been pursuing the idea of "strategic silence". As soon as I utter these two words, newsrooms fixate on the second, "silence", and immediately go on the defensive: they must never, ever silence important voices, they say. The publisher of the "New York Times" even declared that strategic silence was out of the question for him because the US government was constantly asking him to stay silent. I countered that he was stressing the wrong word. Most news media, including the New York Times, are extremely strategic when handling information that the government does not want disclosed. They attend to the last detail, make sure their facts are nailed down, and carefully weigh the possible harm or benefit to society. And then they make a strategic decision.

Strategic silence isn't about silence per se. It's about acting strategically. And as long as you don't have a strategy, you should stay silent. A great deal is at stake when you are dealing with opponents who are trying to manipulate you into spreading their message. The news media know how to parry attempts at manipulation by business and government, but they take networked movements far less seriously.

Amplifying a message can send it around the world and cause real damage. Joan investigates how extremists operate. When she talks about strategic silence, she likes to point out how white supremacists around the Ku Klux Klan succeeded in manipulating journalists into spreading their message. I look at the subject from a different angle. Countless sociologists and psychologists have studied how sensational reporting on the suicides of prominent people can trigger copycat effects. The World Health Organization even offers a list of recommendations for journalists, advising them, among other things, not to go into the details of the suicide method, which are rarely newsworthy. When newsrooms ignore this advice, it is for the shock value and because audiences relish the lurid details. Reporters and editors act against their better judgment; in a world where everyone aligns with what everyone else does, they seem to forget their ethical principles. After the suicide of the American actor Robin Williams, the suicide rate rose by ten percent, and the rate of suicides using his method by as much as 32 percent.

Our information ecosystem is being exploited and abused. Nobody wants to take the blame, but the very industry that constantly insists that ethical behavior is paramount seems incapable of taking responsibility for its own contribution to this system.

The trolls on Twitter

We come to the fourth step, information chains. At the beginning of his career, the American PR consultant and bestselling author Ryan Holiday manipulated information chains by spotting their weaknesses and deploying his clients' limited advertising and marketing budgets in ways that yielded unexpectedly large returns of attention. He couldn't afford expensive ads, but he knew how to produce spectacles that story-hungry bloggers, driven to favor quantity over quality, would pick up. As soon as a blogger took the bait, the message moved up the information chain, just as in the Terry Jones case. Holiday used Twitter to fake heightened attention. Playing different groups off against each other generates free publicity, so he invented imaginary counter-movements.

Online trolls know how to abuse these very strategies. They have figured out how to push messages through information chains. Inevitably, some are funded by foreign powers or by billionaires, but others are simply ideologically driven actors who want to feed a message into the information channels. Whatever their goals, trolls employ a number of well-known tactics. They set up so-called sock puppets, fake Twitter accounts under false identities, which they use, for example, to approach journalists who are meant to believe that an ordinary person on the street has spoken up. All the troll has to do is ask a question the journalist might find interesting enough to run an Internet search on. If the journalist does, they inevitably land on content produced especially for them. Edit a few Wikipedia entries, and you can lure people who like to do their own research down a rabbit hole à la Alice, deeper and deeper into wonderland.

The power of trolls does not lie in their ability to reach the masses. It lies in their ability to use fake accounts to get established "amplifiers" to spread their messages.

How Google is being manipulated

In the fifth step, I'd like to talk about how algorithms get trolled. On November 5, 2017, a 26-year-old man walked into a church in Sutherland Springs, Texas, and opened fire. Until every cell phone sounded the alarm, most people had never heard of the small town. I know that the gun mania rampant in the United States meets with incomprehension here in Germany, but over there it has become routine. Someone goes on a rampage, the media get wind of it, and alerts shrill on everyone's smartphone.

When I heard the news, it was immediately clear to me that various online networks would exploit the opportunity by every means to manipulate public opinion. As in previous attacks, Internet communities took pleasure in associating the shooter's name and the town with "Antifa", so that anyone who googled for more information might conclude that the violence was the work of left-wing groups. One has to know that right-wing extremist movements have styled Antifa as their archenemy and set up fake, supposedly left-wing accounts that "claim responsibility" for acts of violence. Some journalists, in search of "balance", of a left-wing equivalent to violent right-wing extremism, have quoted these supposed anti-fascist accounts.

Such manipulation attempts did not target journalists as such. Faced with sensational news, people look for more information. They rely on Google. And so those who want to sway public opinion in such a situation try to influence what people find in Google search. Before churchgoers were shot there, online searches for "Sutherland Springs" led to algorithmically generated websites with demographic information, or to property listings from Internet brokers like Zillow. Probably no one had searched for this town in a long time. But now people suddenly wanted search results, and Google needed fresh material. So the company drew on Reddit and Twitter to surface newer sources as quickly as possible and contextualize the news.

Not surprisingly, in the Sutherland Springs case, those who wanted to spread false reports about the shooter's motives also instrumentalized Twitter and Reddit, staging discussions designed to feed into Google. Using fake accounts, they also approached journalists with questions intended to shape the narrative. Asking a journalist whether the shooter belonged to Antifa may prompt them to engage with the question and spend research time on it. Better still if the journalist writes about it, even to expose the story as a lie. Because that triggers a boomerang effect: even the debunking story attracts attention.

Stories from respectable sources are especially desirable because Google places them high in the search results, well ahead of the "content" that circulates for days on end. A Newsweek reporter who saw through the game these networks were playing decided to write about it. His story detailed how far-right communities drew attention to the incident in order to link it to artificially hyped left-wing violence. His editor topped the story with a click-friendly headline: "Antifa responsible for the Sutherland Springs murders, according to far-right media". Either Newsweek doesn't care, or these people are foolish, or they enjoy spreading wild rumors. One should know that Google does not display long headlines in full. Be that as it may, anyone who googled for details on the day after the crime saw a headline that played straight into the hands of right-wing extremist agitators: "Antifa responsible for the Sutherland Springs murders".

And so every time sensational events make headlines, we observe concerted efforts to contaminate the information landscape. After the school massacre in Parkland, the claim was spread that the surviving victims were in truth "crisis actors". This version circulated on all news channels and defamed the students at the very moment they most needed collective support.

Once you understand how information is interlinked, it is not difficult to exploit information flows for fun, profit, ideology and politics. Unfortunately, that is exactly where we are today. What matters most in this fifth part of my presentation is the observation that people who are (supposedly or actually) disenfranchised gain power in numbers by resorting to strategic communication. It gives them the feeling of becoming part of something bigger than themselves.

More than just fake news

This brings us to the sixth step, the instigation of interpretive wars. In 2010, "Russia Today" (RT) ran a series of highly controversial advertisements as part of its "Question More" campaign. Their message is simply brilliant: it sounds exactly like the message of a media literacy campaign. Take the following eye-catcher: "Is climate change more science fiction than science fact?" The small print underneath reads: "How reliable is the evidence that human activity contributes to climate change? The answer is not always clear. Only those who are better informed can form a balanced judgment. By challenging the prevailing view, we reveal a side of the news that you don't normally get to see. We believe that the more you question, the more you know."

The message suggests that there is more to climate research than you know, and, more importantly, that most media try to withhold information from you so that you will adopt their point of view. The ads would not be particularly controversial if they were not designed precisely to spark controversy. Imagine a poster asking: "How reliable is the evidence that exercise and weight loss are related?" With similar campaigns on a number of hotly contested issues across the political spectrum, RT struck at the heart of the Western culture of debate. The posters were banned in London, whereupon RT ran new posters complaining that it had been censored.

The campaign's impact stems from the fact that it deliberately sets out to unsettle established knowledge through false equivalences. It appeals to journalistic instincts. It is a fantastic strategy for anyone waging interpretive wars.

We talk about "fake news" all the time, but the term doesn't quite capture what is actually happening. Sure, there are actors who produce false or inaccurate content in order to make money. But everyone knows what to make of such outlets. I enjoy UFO stories and gossip about celebrities' love lives, but I don't consider them legitimate news, even when they appear in print.

When we talk about "fake news" with concern, what we really mean is that we are appalled by highly politicized campaigns that appear to follow a very specific agenda. But we should also be aware that a large part of American society considers CNN and the New York Times to be "fake news" media. To understand this, one has to engage with epistemology.

Epistemology is the mechanism by which we construct knowledge, individually and collectively. Our view of the world, our understanding of it, emerges through a number of different processes. Some people emphasize science, others prefer experience, still others adhere to religious teachings. There are many ways to make sense of the world, but which point of view counts as legitimate in a given society remains contested. If you want to unleash a culture war, your best bet is to pit epistemologies against one another. That is exactly how religious wars tend to begin.

Against this background, consider the following sentences by the Canadian author Cory Doctorow: "We don't fight about facts, we fight about epistemology. The 'establishment' version of epistemology goes: We use evidence to arrive at the truth, evidence backed by independent verification (but just trust us when we tell you that everything has been verified by independent people who were properly skeptical and were not the bosom buddies of the people they were supposed to be checking). The 'alternative facts' epistemology works like this: the 'independent' experts who are supposed to verify the 'evidence-based' truth are actually in bed with the very people they are supposed to be checking. In the end it is all a matter of faith: either you believe that 'their' experts are telling the truth, or you believe that we are. Ask your gut which version seems more trustworthy."

Look at what is going on in our society. We see people quarreling with their families and neighbors over how they construct their view of the world. We see people who refuse to engage with epistemology. Others seek control over the interpretation of the world. Ultimately, one of the hardest questions facing any society is who gets to control how knowledge is produced and which kinds of knowledge count as legitimate. And those who cannot control the whole can try to drive a wedge into it instead.

Black and white names

Let's move on to step seven, everyday prejudice. Latanya Sweeney is a Harvard professor. One day, while searching the Internet for one of her papers, she googled her own name. A journalist who was present noticed ads flashing messages like "Latanya Sweeney: criminal record?" When he pointed this out to her, she said she had never been in conflict with the law. But both had grown curious. It dawned on Latanya that such ads for criminal-records products might be preferentially associated with African American names. She tested this by searching for a range of first names. Indeed, searches for first names suggesting Black or African American people were statistically more likely to yield ads related to criminal records, while the same company responded to "white" first names with ads for background research.

As a computer scientist who studies systems, Latanya knew that Google doesn't sell ads keyed to the ethnicity of first names. She was also familiar with the design of the ad system, which is built to evolve based on how people react to the ads they are shown. Eventually she worked out what lay behind her experience: Google had allowed a company to associate specific names with six different ads, and if one ad drew more clicks for a given name than the others, that pairing was reinforced. It appeared that people searching for "black" or African American sounding first names clicked more often on ads related to criminal records. The system duly amplified this trend. In other words, Google registered the prejudices of some of its users and projected them back out onto the entire world.
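
To see how such a click-driven feedback loop can amplify a small initial skew, consider the following simulation. It is a minimal sketch under invented assumptions, not Google's actual ad system: the ad names, the 55/45 click rates and the proportional serving rule are all hypothetical.

```python
import random

random.seed(1)

# Two competing ads for the same name; each starts with one "click" of history.
ads = ["criminal-record check", "background research"]
clicks = {ad: 1 for ad in ads}

def serve_ad():
    # Serve an ad with probability proportional to its accumulated clicks,
    # a stand-in for "reinforce whatever got clicked before".
    total = sum(clicks.values())
    r = random.uniform(0, total)
    for ad, c in clicks.items():
        r -= c
        if r <= 0:
            return ad
    return ads[-1]

def user_clicks(ad):
    # Hypothetical user bias: a slight preference (55% vs. 45%) for the
    # "criminal-record check" ad when it appears next to this name.
    rate = 0.55 if ad == ads[0] else 0.45
    return random.random() < rate

for _ in range(10_000):
    shown = serve_ad()
    if user_clicks(shown):
        clicks[shown] += 1

print(clicks)  # the slightly preferred ad ends up with the lion's share
```

The rich-get-richer dynamic is the point: the slightly preferred ad is served more often, so it collects clicks faster, so it is served more often still. A modest skew in user behavior hardens into a near-monopoly on what everyone searching that name gets to see.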

This shows that the data with which algorithmic systems work are often contaminated by the prejudices present in society. Worse still: when algorithmic systems evaluate human interactions, they absorb the prejudices being expressed and amplify them.

In our media-saturated world, we encounter prejudice everywhere. Take "baby" or "CEO" as search terms. Predictably, a search for the former presents you with a strange collection of technically flawless, perfectly staged, perfectly lit white babies. Is that because people don't take photos of their own babies? No. The images simply come from the stocks of photo agencies. The same goes for "CEO": the search turns up healthy-looking, middle-aged white men in suits. Agency photos.

It goes without saying that both sets of results can be read through the lens of social prejudice, but what underlies them? First, consider why anyone would search for "baby" or "CEO" in the first place. Very few people are out to audit the prejudices of search engines. Rather, such keywords turn up in image searches by people preparing a PowerPoint presentation for their company. They are looking for perfect images, and the photo agencies want them to buy the full-resolution versions. That is why the agencies put masses of images online in low resolution or protected by watermarks. Their business model leads them to invest heavily in SEO, search engine optimization, to increase the likelihood that their images will be chosen. They label images based on assumptions about what people might be looking for, evaluate what was requested in previous searches, and assemble corresponding collections. So if the majority of people searching for "baby" images for their PowerPoint presentations buy photos of six-month-old babies, and of white men when the keyword is "CEO", then the agencies will focus their production and search engine optimization on exactly such offerings.

And here is the real problem. Services like Microsoft's Bing are constantly fighting the prejudices, or call them preferences, that shape their databases: they want more diverse search results because they believe their diverse customers will be put off by overly homogeneous ones. When someone searches for images, the engine should offer a range of options. But what are Google, Bing and the rest to do when most people click only on the images that match their inclinations? What if users never stop teaching the search engines their prejudices and preferences?
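
One family of countermeasures is re-ranking for diversity. The sketch below shows a standard textbook technique, maximal marginal relevance (MMR), on made-up data; the file names, scores and similarity function are all invented, and real search engines use far more elaborate pipelines. The idea is simply to score each candidate by its relevance minus its redundancy with results already chosen.

```python
# Candidate results with invented relevance scores. In a pure relevance
# ranking, the three near-identical "suit" images would fill the top 3.
candidates = {
    "ceo_suit_1": 0.95, "ceo_suit_2": 0.94, "ceo_suit_3": 0.93,
    "ceo_woman": 0.80, "ceo_casual": 0.75,
}

def similarity(a, b):
    # Toy similarity: images of the same style count as near-duplicates.
    return 1.0 if a.split("_")[1] == b.split("_")[1] else 0.1

def mmr_rank(relevance, sim, k, lam=0.7):
    """Greedy maximal marginal relevance: at each step pick the item with
    the best trade-off between relevance and redundancy with the results
    already selected."""
    selected = []
    pool = set(relevance)
    while pool and len(selected) < k:
        def score(c):
            redundancy = max((sim(c, s) for s in selected), default=0.0)
            return lam * relevance[c] - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

print(mmr_rank(candidates, similarity, k=3))
# -> ['ceo_suit_1', 'ceo_woman', 'ceo_casual'], a more varied top three
```

Note what this does and does not achieve: the diversity penalty stops near-duplicates from crowding out everything else, but it does not remove the bias already baked into the underlying data and labels.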

Machine learning (ML) has a fundamental flaw. It is based, in principle, on discrimination, understood in the technical sense of making distinctions, not of prejudice-based disadvantage: its job is to cluster and classify data. Large amounts of data are segmented, that is, sorted into groups the system can work with. But when these clusters are laden with cultural biases and preferences, they end up reinforcing the very cultural discrimination we want to eliminate. Unfortunately, figuring out how a technology reinforces prejudice is much easier than plugging such cultural sources of error. Worse still, divisions are far easier to exploit than to overcome.
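
How clustering can encode a social divide without ever seeing it can be shown in a few lines. The sketch below is purely illustrative, with invented numbers: a k-means clusterer is given only a single proxy feature (say, neighborhood income), never the group label, yet because historical bias gave the two groups different distributions, the clusters recover the groups almost perfectly.

```python
import random

random.seed(0)

# Hypothetical data: one proxy feature per person. Group membership is
# never shown to the algorithm, but historical disadvantage has given
# the two groups different distributions of the proxy.
group_a = [random.gauss(30, 5) for _ in range(200)]
group_b = [random.gauss(60, 5) for _ in range(200)]
points = [(x, "A") for x in group_a] + [(x, "B") for x in group_b]

def nearest(x, centroids):
    # Index of the centroid closest to x.
    return min(range(len(centroids)), key=lambda j: abs(x - centroids[j]))

# Plain one-dimensional k-means with k = 2.
centroids = [20.0, 70.0]
for _ in range(20):
    buckets = [[] for _ in centroids]
    for x, _ in points:
        buckets[nearest(x, centroids)].append(x)
    centroids = [sum(b) / len(b) for b in buckets]

# The clusters were built from the proxy alone, yet they sort people
# by group almost perfectly.
for i, c in enumerate(centroids):
    members = [g for x, g in points if nearest(x, centroids) == i]
    share_a = members.count("A") / len(members)
    print(f"cluster {i}: centroid {c:.1f}, share of group A: {share_a:.0%}")
```

Any downstream decision keyed to these clusters, in credit, insurance or housing, would then discriminate by group while appearing to rely only on a neutral feature.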

End of the first part; the second part will follow in the September issue.

This contribution is based on the keynote that Danah Boyd gave at re:publica 2018 in Berlin. The translation is by Karl D. Bredthauer.