
Finland: Court throws out motion to close down anti-immigrant website MV-Lehti

A Helsinki district court has rejected a petition by police to shut down the anti-immigrant website MV-Lehti. However, the court has sealed the reasoning behind its decision.

19/8/2016- On Friday the Helsinki District Court rejected a petition by Helsinki police to shut down MV-Lehti, an alternative news website that police suspect of disseminating false information and encouraging hate speech. The Helsinki police department had called on the court to terminate online communications from an IP address owned by OVH Hosting Ltd, Net9 Ltd and the sole trader NP Networking, which is responsible for publishing MV-Lehti and Uber Uutiset, a sister site with similar content. The court did not disclose the arguments behind its decision.

Inaccuracies, distortions, suspected copyright infringements
Police had previously received dozens of criminal complaints about MV-Lehti. They determined that several of the site’s articles may have been inaccurate, distorted or fulfilled the criteria for copyright infringement. The inflammatory website was founded in 2014 by Spain-based Ilja Janitskin, who also owns a number of other websites. MV stands for "Mitä vittua" (in English, What the f***?) and the website became a talking point after publishing a series of vitriolic articles about migration and other subjects. The site gained a wider following in Finland after large numbers of asylum seekers began arriving in Europe and media began reporting on crimes committed by some of the new arrivals. The website’s articles were published without attribution, so none of the contributors were known. In July, Finnish media reported that both the MV-Lehti and Uber Uutiset websites were no longer available. At the time Janitskin had posted a notification on his Facebook page indicating that the site’s Finnish servers had been taken down and would be reinstated elsewhere in due course.
© YLE News


How Trolls Are Ruining the Internet

They’re turning the web into a cesspool of aggression and violence. What watching them is doing to the rest of us may be even worse
By Joel Stein

18/8/2016- This story is not a good idea. Not for society and certainly not for me. Because what trolls feed on is attention. And this little bit–these several thousand words–is like leaving bears a pan of baklava. It would be smarter to be cautious, because the Internet’s personality has changed. Once it was a geek with lofty ideals about the free flow of information. Now, if you need help improving your upload speeds the web is eager to help with technical details, but if you tell it you’re struggling with depression it will try to goad you into killing yourself. Psychologists call this the online disinhibition effect, in which factors like anonymity, invisibility, a lack of authority and not communicating in real time strip away the mores society spent millennia building. And it’s seeping from our smartphones into every aspect of our lives.

The people who relish this online freedom are called trolls, a term that originally came from a fishing method online thieves use to find victims. It quickly morphed to refer to the monsters who hide in darkness and threaten people. Internet trolls have a manifesto of sorts, which states they are doing it for the “lulz,” or laughs. What trolls do for the lulz ranges from clever pranks to harassment to violent threats. There’s also doxxing–publishing personal data, such as Social Security numbers and bank accounts–and swatting, calling in an emergency to a victim’s house so the SWAT team busts in. When victims do not experience lulz, trolls tell them they have no sense of humor. Trolls are turning social media and comment boards into a giant locker room in a teen movie, with towel-snapping racial epithets and misogyny.

They’ve been steadily upping their game. In 2011, trolls descended on Facebook memorial pages of recently deceased users to mock their deaths. In 2012, after feminist Anita Sarkeesian started a Kickstarter campaign to fund a series of YouTube videos chronicling misogyny in video games, she received bomb threats at speaking engagements, doxxing threats, rape threats and an unwanted starring role in a video game called Beat Up Anita Sarkeesian. In June of this year, Jonathan Weisman, the deputy Washington editor of the New York Times, quit Twitter, on which he had nearly 35,000 followers, after a barrage of anti-Semitic messages. At the end of July, feminist writer Jessica Valenti said she was leaving social media after receiving a rape threat against her daughter, who is 5 years old.

A Pew Research Center survey published two years ago found that 70% of 18-to-24-year-olds who use the Internet had experienced harassment, and 26% of women that age said they’d been stalked online. This is exactly what trolls want. A 2014 study published in the psychology journal Personality and Individual Differences found that the approximately 5% of Internet users who self-identified as trolls scored extremely high in the dark tetrad of personality traits: narcissism, psychopathy, Machiavellianism and, especially, sadism. But maybe that’s just people who call themselves trolls. And maybe they do only a small percentage of the actual trolling. “Trolls are portrayed as aberrational and antithetical to how normal people converse with each other. And that could not be further from the truth,” says Whitney Phillips, a literature professor at Mercer University and the author of This Is Why We Can’t Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture. “These are mostly normal people who do things that seem fun at the time that have huge implications. You want to say this is the bad guys, but it’s a problem of us.”

A lot of people enjoy the kind of trolling that illuminates the gullibility of the powerful and their willingness to respond. One of the best is Congressman Steve Smith, a Tea Party Republican representing Georgia’s 15th District, which doesn’t exist. For nearly three years Smith has spewed over-the-top conservative blather on Twitter, luring Senator Claire McCaskill, Christiane Amanpour and Rosie O’Donnell into arguments. Surprisingly, the guy behind the GOP-mocking prank, Jeffrey Marty, isn’t a liberal but a Donald Trump supporter angry at the Republican elite, furious at Hillary Clinton and unhappy with Black Lives Matter. A 40-year-old dad and lawyer who lives outside Tampa, he says he has become addicted to the attention. “I was totally ruined when I started this. My ex-wife and I had just separated. She decided to start a new, more exciting life without me,” he says. Then his best friend, who he used to do pranks with as a kid, killed himself. Now he’s got an illness that’s keeping him home.

Marty says his trolling has been empowering. “Let’s say I wrote a letter to the New York Times saying I didn’t like your article about Trump. They throw it in the shredder. On Twitter I communicate directly with the writers. It’s a breakdown of all the institutions,” he says. “I really do think this stuff matters in the election. I have 1.5 million views of my tweets every 28 days. It’s a much bigger audience than I would have gotten if I called people up and said, ‘Did you ever consider Trump for President?'” Trolling is, overtly, a political fight. Liberals do indeed troll–sex-advice columnist Dan Savage used his followers to make Googling former Pennsylvania Senator Rick Santorum’s last name a blunt lesson in the hygienic challenges of anal sex; the hunter who killed Cecil the lion got it really bad.

But trolling has become the main tool of the alt-right, an Internet-grown reactionary movement that works for men’s rights and against immigration and may have used the computer from Weird Science to fabricate Donald Trump. Not only does Trump share their attitudes, but he’s got mad trolling skills: he doxxed Republican primary opponent Senator Lindsey Graham by giving out his cell-phone number on TV and indirectly got his Twitter followers to attack GOP political strategist Cheri Jacobus so severely that her lawyers sent him a cease-and-desist order.

The alt-right’s favorite insult is to call men who don’t hate feminism “cucks,” as in “cuckold.” Republicans who don’t like Trump are “cuckservatives.” Men who don’t see how feminists are secretly controlling them haven’t “taken the red pill,” a reference to the truth-revealing drug in The Matrix. They derisively call their adversaries “social-justice warriors” and believe that liberal interest groups purposely exploit their weakness to gain pity, which allows them to control the levers of power. Trolling is the alt-right’s version of political activism, and its ranks view any attempt to take it away as a denial of democracy.

In this new culture war, the battle isn’t just over homosexuality, abortion, rap lyrics, drugs or how to greet people at Christmastime. It’s expanded to anything and everything: video games, clothing ads, even remaking a mediocre comedy from the 1980s. In July, trolls who had long been furious that the 2016 reboot of Ghostbusters starred four women instead of men harassed the film’s black co-star Leslie Jones so badly on Twitter with racist and sexist threats–including a widely copied photo of her at the film’s premiere that someone splattered semen on–that she considered quitting the service. “I was in my apartment by myself, and I felt trapped,” Jones says. “When you’re reading all these gay and racial slurs, it was like, I can’t fight y’all. I didn’t know what to do. Do you call the police? Then they got my email, and they started sending me threats that they were going to cut off my head and stuff they do to ‘N words.’ It’s not done to express an opinion, it’s done to scare you.”

Because of Jones’ harassment, alt-right leader Milo Yiannopoulos was permanently banned from Twitter. (He is also an editor at Breitbart News, the conservative website whose executive chairman, Stephen Bannon, was hired Aug. 17 to run the Trump campaign.) The service said Yiannopoulos, a critic of the new Ghostbusters who called Jones a “black dude” in a tweet, marshaled many of his more than 300,000 followers to harass her. He not only denies this but says being responsible for your fans is a ridiculous standard. He also thinks Jones is faking hurt for political purposes. “She is one of the stars of a Hollywood blockbuster,” he says. “It takes a certain personality to get there. It’s a politically aware, highly intelligent star using this to get ahead. I think it’s very sad that feminism has turned very successful women into professional victims.”

A gay, 31-year-old Brit with frosted hair, Yiannopoulos has been speaking at college campuses on his Dangerous Faggot tour. He says trolling is a direct response to being told by the left what not to say and what kinds of video games not to play. “Human nature has a need for mischief. We want to thumb our nose at authority and be individuals,” he says. “Trump might not win this election. I might not turn into the media figure I want to. But the space we’re making for others to be bolder in their speech is some of the most important work being done today. The trolls are the only people telling the truth.”

The alt-right was galvanized by Gamergate, a 2014 controversy in which trolls tried to drive critics of misogyny in video games away from their virtual man cave. “In the mid-2000s, Internet culture felt very separate from pop culture,” says Katie Notopoulos, who reports on the web as an editor at BuzzFeed and co-host of the Internet Explorer podcast. “This small group of people are trying to stand their ground that the Internet is dark and scary, and they’re trying to scare people off. There’s such a culture of viciously making fun of each other on their message boards that they have this very thick skin. They’re all trained up.”

Andrew Auernheimer, who calls himself Weev online, is probably the biggest troll in history. He served just over a year in prison for identity fraud and conspiracy. When he was released in 2014, he left the U.S., mostly bouncing around Eastern Europe and the Middle East. Since then he has worked to post anti–Planned Parenthood videos and flooded thousands of university printers in America with instructions to print swastikas–a symbol tattooed on his chest. When I asked if I could fly out and interview him, he agreed, though he warned that he “might not be coming ashore for a while, but we can probably pass close enough to land to have you meet us somewhere in the Adriatic or Ionian.” His email signature: “Eternally your servant in the escalation of entropy and eschaton.”

While we planned my trip to “a pretty remote location,” he told me that he no longer does interviews for free and that his rate was two bitcoins (about $1,100) per hour. That’s when one of us started trolling the other, though I’m not sure which:

From: Joel Stein
To: Andrew Auernheimer
I totally understand your position. But TIME, and all the major media outlets, won’t pay people who we interview. There’s a bunch of reasons for that, but I’m sure you know them.

Thanks anyway,

From: Andrew Auernheimer
To: Joel Stein
I find it hilarious that after your people have stolen years of my life at gunpoint and bulldozed my home, you still expect me to work for free in your interests.
You people belong in a f-cking oven.

From: Joel Stein
To: Andrew Auernheimer

For a guy who doesn’t want to be interviewed for free, you’re giving me a lot of good quotes!

In a later blog post about our emails, Weev clarified that TIME is “trying to destroy white civilization” and that we should “open up your Jew wallets and dump out some of the f-cking geld you’ve stolen from us goys, because what other incentive could I possibly have to work with your poisonous publication?” I found it comforting that the rate for a neo-Nazi to compromise his ideology is just two bitcoins. Expressing socially unacceptable views like Weev’s is becoming more socially acceptable. Sure, just like there are tiny, weird bookstores where you can buy neo-Nazi pamphlets, there are also tiny, weird white-supremacist sites on the web. But some of the contributors on those sites now go to places like 8chan or 4chan, which have a more diverse crowd of meme creators, gamers, anime lovers and porn enthusiasts. Once accepted there, they move on to Reddit, the ninth most visited site in the U.S., on which users can post links to online articles and comment on them anonymously. Reddit believes in unalloyed free speech; the site only eliminated the comment boards “jailbait,” “creepshots” and “beatingwomen” for legal reasons.

But last summer, Reddit banned five more discussion groups for being distasteful. The one with the largest user base, more than 150,000 subscribers, was “fatpeoplehate.” It was a particularly active community that reveled in finding photos of overweight people looking happy, almost all women, and adding mean captions. Reddit users would then post these images all over the targets’ Facebook pages along with anywhere else on the Internet they could. “What you see on Reddit that is visible is at least 10 times worse behind the scenes,” says Dan McComas, a former Reddit employee. “Imagine two users posting about incest and taking that conversation to their private messages, and that’s where the really terrible things happen. That’s where we saw child porn and abuse and had to do all of our work with law enforcement.”

Jessica Moreno, McComas’ wife, pushed for getting rid of “fatpeoplehate” when she was the company’s head of community. This was not a popular decision with users who really dislike people with a high body mass index. She and her husband had their home address posted online along with suggestions on how to attack them. Eventually they had a police watch on their house. They’ve since moved. Moreno has blurred their house on Google maps and expunged nearly all photos of herself online.

During her time at Reddit, some users who were part of a group that mails secret Santa gifts to one another complained to Moreno that they didn’t want to participate because the person assigned to them made racist or sexist comments on the site. Since these people posted their real names, addresses, ages, jobs and other details for the gifting program, Moreno learned a good deal about them. “The idea of the basement dweller drinking Mountain Dew and eating Doritos isn’t accurate,” she says. “They would be a doctor, a lawyer, an inspirational speaker, a kindergarten teacher. They’d send lovely gifts and be a normal person.” These are real people you might know, Moreno says. There’s no real-life indicator. “It’s more complex than just being good or bad. It’s not all men either; women do take part in it.” The couple quit their jobs and started Imzy, a cruelty-free Reddit. They believe that saving a community is nearly impossible once mores have been established, and that sites like Reddit are permanently lost to the trolls.

When sites are overrun by trolls, they drown out the voices of women, ethnic and religious minorities, gays–anyone who might feel vulnerable. Young people in these groups assume trolling is a normal part of life online and therefore self-censor. An anonymous poll of the writers at TIME found that 80% had avoided discussing a particular topic because they feared the online response. The same percentage consider online harassment a regular part of their jobs. Nearly half the women on staff have considered quitting journalism because of hatred they’ve faced online, although none of the men had. Their comments included “I’ve been raged at with religious slurs, had people track down my parents and call them at home, had my body parts inquired about.” Another wrote, “I’ve had the usual online trolls call me horrible names and say I am biased and stupid and deserve to be raped. I don’t think men realize how normal that is for women on the Internet.”

The alt-right argues that if you can’t handle opprobrium, you should just turn off your computer. But that’s arguing against self-expression, something antithetical to the original values of the Internet. “The question is: How do you stop people from being a–holes not to their face?” says Sam Altman, a venture capitalist who invested early in Reddit and ran the company for eight days in 2014 after one of its many PR crises. “This is exactly what happened when people talked badly about public figures. Now everyone on the Internet is a public figure. The problem is that not everyone can deal with that.” Altman declared on June 15 that he would quit Twitter and his 171,000 followers, saying, “I feel worse after using Twitter … my brain gets polluted here.”

Twitter’s head of trust and safety, Del Harvey, struggles with how to allow criticism but curb abuse. “Categorically to say that all content you don’t like receiving is harassment would be such a broad brush it wouldn’t leave us much content,” she says. Harvey is not her real name, which she gave up long ago when she became a professional troll, posing as underage girls (and occasionally boys) to entrap pedophiles as an administrator for the website Perverted-Justice and later for NBC’s To Catch a Predator. Citing the role of Twitter during the Arab Spring, she says that anonymity has given voice to the oppressed, but that women and minorities are more vulnerable to attacks by the anonymous.

But even those in the alt-right who claim they are “unf-ckwithable” aren’t really. At some point, everyone, no matter how desensitized by their online experience, is liable to get freaked out by a big enough or cruel enough threat. Still, people have vastly different levels of sensitivity. A white male journalist who covers the Middle East might blow off death threats, but a teenage blogger might not be prepared to be told to kill herself because of her “disgusting acne.”

Which are exactly the kinds of messages Em Ford, 27, was receiving en masse last year on her YouTube tutorials on how to cover pimples with makeup. Men claimed to be furious about her physical “trickery,” forcing her to block hundreds of users each week. This year, Ford made a documentary for the BBC called Troll Hunters in which she interviewed online abusers and victims, including a soccer referee who had rape threats posted next to photos of his young daughter on her way home from school. What Ford learned was that the trolls didn’t really hate their victims. “It’s not about the target. If they get blocked, they say, ‘That’s cool,’ and move on to the next person,” she says. Trolls don’t hate people as much as they love the game of hating people.

Troll culture might be affecting the way nontrolls treat one another. A yet-to-be-published study by University of California, Irvine, professor Zeev Kain showed that when people were exposed to reports of good deeds on Facebook, they were 10% more likely to report doing good deeds that day. But the opposite is likely occurring as well. “One can see discourse norms shifting online, and they’re probably linked to behavior norms,” says Susan Benesch, founder of the Dangerous Speech Project and faculty associate at Harvard’s Internet and Society center. “When people think it’s increasingly O.K. to describe a group of people as subhuman or vermin, those same people are likely to think that it’s O.K. to hurt those people.”

As more trolling occurs, many victims are finding laws insufficient and local police untrained. “Where we run into the problem is the social-media platforms are very hesitant to step on someone’s First Amendment rights,” says Mike Bires, a senior police officer in Southern California who co-founded a tool for cops to fight online crime and use social media to work with their communities. “If they feel like someone’s life is in danger, Twitter and Snapchat are very receptive. But when it comes to someone harassing you online, getting the social-media companies to act can be very frustrating.” Until police are fully caught up, he recommends that victims go to the officer who runs the force’s social-media department.

One counter-trolling strategy now being employed on social media is to flood the victims of abuse with kindness. That’s how many Twitter users have tried to blunt racist and body-shaming attacks on U.S. women’s gymnastics star Gabby Douglas and Mexican gymnast Alexa Moreno during the Summer Olympics in Rio. In 2005, after Emily May co-founded Hollaback!, which posts photos of men who harass women on the street in order to shame them (some might call this trolling), she got a torrent of misogynistic messages. “At first, I thought it was funny. We were making enough impact that these losers were spending their time calling us ‘cunts’ and ‘whores’ and ‘carpet munchers,'” she says. “Long-term exposure to it, though, I found myself not being so active on Twitter and being cautious about what I was saying online. It’s still harassment in public space. It’s just the Internet instead of the street.” This summer May created Heartmob, an app to let people report trolling and receive messages of support from others.

Though everyone knows not to feed the trolls, that can be challenging to the type of people used to expressing their opinions. Writer Lindy West has written about her abortion, hatred of rape jokes and her body image–all of which generated a flood of angry messages. When her father Paul died, a troll quickly started a fake Twitter account called PawWestDonezo (“donezo” is slang for “done”), with a photo of her dad and the bio “embarrassed father of an idiot.” West reacted by writing about it. Then she heard from her troll, who apologized, explaining that he wasn’t happy with his life and was angry at her for being so pleased with hers.

West says that even though she’s been toughened by all the abuse, she is thinking of writing for TV, where she’s more insulated from online feedback. “I feel genuine fear a lot. Someone threw a rock through my car window the other day, and my immediate thought was it’s someone from the Internet,” she says. “Finally we have a platform that’s democratizing and we can make ourselves heard, and then you’re harassed for advocating for yourself, and that shuts you down again.”

I’ve been a columnist long enough that I got calloused to abuse via threats sent over the U.S. mail. I’m a straight white male, so the trolling is pretty tame, my vulnerabilities less obvious. My only repeat troll is Megan Koester, who has been attacking me on Twitter for a little over two years. Mostly, she just tells me how bad my writing is, always calling me “disgraced former journalist Joel Stein.” Last year, while I was at a restaurant opening, she tweeted that she was there too and that she wanted to take “my one-sided feud with him to the next level.” She followed this immediately with a tweet that said, “Meet me outside Clifton’s in 15 minutes. I wanna kick your ass.” Which shook me a tiny bit. A month later, she tweeted that I should meet her outside a supermarket I often go to: “I’m gonna buy some Ahi poke with EBT and then kick your ass.”

I sent a tweet to Koester asking if I could buy her lunch, figuring she’d say no or, far worse, say yes and bring a switchblade or brass knuckles, since I have no knowledge of feuding outside of West Side Story. Her email back agreeing to meet me was warm and funny. Though she also sent me the script of a short movie she had written. I saw Koester standing outside the restaurant. She was tiny–5 ft. 2 in., with dark hair, wearing black jeans and a Spy magazine T-shirt. She ordered a seitan sandwich, and after I asked the waiter about his life, she looked at me in horror. “Are you a people person?” she asked. A 32-year-old freelance writer who has never had a full-time job, she lives on a combination of sporadic paychecks and food stamps. My career success seemed, quite correctly, unjust. And I was constantly bragging about it in my column and on Twitter. “You just extruded smarminess that I found off-putting. It’s clear I’m just projecting. The things I hate about you are the things I hate about myself,” she said.

As a feminist stand-up comic with more than 26,000 Twitter followers, Koester has been trolled more than I have. One guy was so furious that she made fun of a 1970s celebrity at an autograph session that he tweeted he was going to rape her and wanted her to die afterward. “So you’d think I’d have some sympathy,” she said about trolling me. “But I never felt bad. I found that column so vile that I thought you didn’t deserve sympathy.” When I suggested we order wine, she told me she’s a recently recovered alcoholic who was drunk at the restaurant opening when she threatened to beat me up. I asked why she didn’t actually walk up to me that afternoon and, even if she didn’t punch me, at least tell me off. She looked at me like I was an idiot. “Why would I do that?” she said. “The Internet is the realm of the coward. These are people who are all sound and no fury.”

Maybe. But maybe, in the information age, sound is as destructive as fury.
Editor’s Note: An earlier version of this story included a reference to Asperger’s Syndrome in an inappropriate context. It has been removed. Additionally, an incorrect description of Megan Koester has been removed.
© Time


Twitter suspends 360,000 accounts for terrorist/hate ties

The social network has suspended 235,000 accounts in the last six months alone, with the rate of daily suspensions up 80%

18/8/2016- Twitter continues to fight to keep terrorist groups and sympathizers from using its service. The social network announced today that in the last six months it has suspended 235,000 accounts for violating its policies related to the promotion of terrorism. In February, Twitter reported that it had suspended 125,000 accounts since mid-2015 for terrorist-related reasons. That means Twitter has suspended 360,000 accounts since the middle of last year. "Since that [February] announcement, the world has witnessed a further wave of deadly, abhorrent terror attacks across the globe," the company wrote in a blog post. "We strongly condemn these acts and remain committed to eliminating the promotion of violence or terrorism on our platform."

Twitter also reported that daily suspensions are up more than 80% since last year, with spikes in suspensions immediately following terrorist attacks. "Our response time for suspending reported accounts, the amount of time these accounts are on Twitter, and the number of followers they accumulate have all decreased dramatically," the company said. "As noted by numerous third parties, our efforts continue to drive meaningful results, including a significant shift in this type of activity off of Twitter." There has been increasing focus on trying to keep terrorist groups, whether it's ISIS or homegrown white supremacists, from using social networks like Twitter and Facebook to communicate, call for attacks and to recruit new members. Democratic presidential nominee Hillary Clinton even raised the issue during her acceptance speech at the Democratic National Convention last month. "We will disrupt their efforts online to reach and radicalize young people in our country. It won't be easy or quick, but make no mistake - we will prevail," Clinton said.

Social media platforms, including sites like YouTube and the instant messaging service Telegram, have been used by extremist groups for years. Those sites are fighting back, too. Facebook previously reported that it has suspended accounts it found were associated with radicalized groups. Today, Twitter noted that it not only is suspending accounts, but is making it harder for those suspended to return to the platform. "We have expanded the teams that review reports around the clock, along with their tools and language capabilities," Twitter said. "We also collaborate with other social platforms, sharing information and best practices for identifying terrorist content... Finally, we continue to work with law enforcement entities seeking assistance with investigations to prevent or prosecute terror attacks."
© Computer World


Skype and WhatsApp face tougher EU privacy rules

16/8/2016- The EU wants to extend privacy rules to cover calls and messages sent over the internet, subjecting services such as WhatsApp and Skype to much greater regulation. Tech and telecom industries last month called for the EU to scrap the rules, contained in the Directive on Privacy and Electronic Communications, known as the e-privacy directive. Telecom companies have long complained that web-based competitors such as Google, Microsoft and Facebook - which offer the communications services Skype, WhatsApp and Hangouts - enjoy an advantage because they are allowed to make money from traffic and location data, which telecoms operators are not allowed to keep. Scrapping the rules would encourage innovation and drive growth and social opportunities, the telecoms lobby group GSM Association had said. Instead, the European Commission intends to bring everyone under the same rules.

According to UK newspaper the Financial Times, the EU executive’s move is an attempt to rein in American companies that dominate the sector, undercutting EU telecoms providers. Whether the rules will strengthen consumers’ privacy is open to debate. Some internet companies offer end-to-end encryption on their services. Facebook, which uses full-scale encryption on WhatsApp, said in its response to the Commission's public consultation that extending the rules to online messaging services would mean they could in effect "no longer be able to guarantee the security and confidentiality of the communication through encryption". They said the new regime would allow governments the option of restricting the confidentiality right for national security purposes. The commission is due to make an initial announcement in September and present detailed plans for legislative review later this year.
© The EUobserver


UK: 289 Islamophobic tweets were sent every hour in July

In total 215,246 Islamophobic tweets were sent from English-speaking accounts in July

18/8/2016- The number of times anti-Islamic insults are used on Twitter is rising month by month, a new report reveals. Analysis of the social media site found 215,246 Islamophobic tweets were sent in July this year – a staggering 289 every hour. Spikes in offensive language correlated with acts of terrorism, with the largest number of abusive tweets sent the day after the devastating Nice attack, the research says. Researchers at the Centre for the Analysis of Social Media at the think tank Demos said identifying tweets that were hateful, derogatory and anti-Islamic was “a formidable challenge”. They first collected all tweets that contained one of a list of terms that could be used in an anti-Islamic way, including ‘Jihadi’ and ‘Terrorist’. Most are too offensive to be published. Between 29 February and 2 August, 34 million tweets meeting the criteria were collected, but most were not anti-Islamic or hateful.
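The headline rate of 289 tweets an hour follows directly from the monthly total; a quick sanity check of the arithmetic, assuming a full 31-day July:

```python
# Verify the reported hourly rate from the July monthly total.
tweets_in_july = 215_246
hours_in_july = 31 * 24  # 744 hours in a 31-day month

per_hour = tweets_in_july / hours_in_july
print(round(per_hour))  # 289, matching the figure in the report
```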

Algorithms were built and used to identify Islamophobic context within a tweet. For example, classifiers were built to separate tweets referring to Islamist terrorism from other forms of terrorism, and then to distinguish messages attacking Muslim communities in the context of terrorism from those defending the communities. The researchers found many of the tweets identified as derogatory and anti-Islamic included specific references to recent acts of violence and attacked entire Muslim communities in the context of terrorism. The largest of the spikes within July came the day after the Nice terrorist attack, with 21,190 tweets on 15 July. Not far behind was the day after the shooting of police officers in Dallas on 8 July, when 11,320 Islamophobic tweets were sent. 17 July was the next-worst date, with 10,610 Islamophobic tweets sent the day after the attempted military coup in Turkey, followed by the end of Ramadan on 5 July, with 9,220 tweets.
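Demos has not published its code, but the two-stage pipeline described above — a cheap keyword pre-filter followed by purpose-built classifiers — can be sketched roughly as follows. The seed terms, labels and tiny Naive Bayes model here are illustrative assumptions, not the researchers' actual implementation:

```python
import re
from collections import Counter, defaultdict
from math import log

# Stage 1: cheap keyword pre-filter. This term list is a placeholder,
# not Demos's actual list.
SEED_TERMS = {"terrorist", "jihadi"}

def matches_seed_terms(tweet):
    """Keep only tweets containing at least one seed term."""
    words = set(re.findall(r"[a-z']+", tweet.lower()))
    return bool(words & SEED_TERMS)

# Stage 2: a trained classifier separates, e.g., tweets attacking Muslim
# communities from tweets defending them. A tiny multinomial Naive Bayes
# stands in for whatever model the researchers actually used.
class NaiveBayes:
    def __init__(self):
        self.class_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def train(self, labelled_texts):
        for text, label in labelled_texts:
            words = re.findall(r"[a-z']+", text.lower())
            self.class_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def predict(self, text):
        words = re.findall(r"[a-z']+", text.lower())
        total = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label, n in self.class_counts.items():
            score = log(n / total)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:  # add-one smoothed word likelihoods
                score += log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

def classify_stream(tweets, model):
    """Apply the pre-filter, then classify only the survivors."""
    return [(t, model.predict(t)) for t in tweets if matches_seed_terms(t)]
```

In the real study the second stage involved several chained classifiers trained on hand-annotated tweets; the point of the sketch is only the shape of the pipeline: a broad, cheap filter keeps the volume tractable (34 million tweets over five months), and a trained model then makes the fine-grained distinction.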

On the day of an IS attack on a church in Normandy on 26 July, 8,950 upsetting tweets were posted, according to the study. The think tank has been monitoring Islamophobic activity on the social network since March and said July recorded the highest volume of derogatory tweets of any month yet. It found an average of 4,972 Islamophobic tweets were sent a day since March. Demos geo-located many of the tweets collected and found Islamophobic tweets originating in every EU member state. As only tweets in English were recorded, the majority were traced to English-speaking countries. However, outside the UK significant concentrations were identified in the Netherlands, France and Germany. In December 2015, Twitter updated its policies to explicitly ban "hateful conduct" for the first time. The move has been followed by agreements with officials in the EU – as well as with Facebook and YouTube – to remove hate speech from their networks.

"Our rules prohibit inciting or engaging in the targeted abuse or harassment of others," a Twitter spokesperson told the BBC. "We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it's happening and prevent repeat offenders."
© Wired UK


Scotland Yard to use civilian volunteer ‘thought police’ to help combat social media hate crime

Originally posted on the Independent website; no longer to be found there.
By Adam Lusher

14/8/2016- Scotland Yard is to recruit civilian volunteers to help police social media in a new £1.7 million online hate crime unit. The volunteers – already dubbed a “thought police” by critics – will seek out and challenge social media abuse and report it to a new police “online hate crime hub”. Documents outlining how the scheme will work appear to suggest that the use of social media savvy volunteers will help address the problem that: “The police response to online hate crime is inconsistent, primarily because police officers are not equipped to tackle it.” A report by the London Mayor Sadiq Khan’s Office for Policing and Crime (MOPAC), which will help fund the scheme, has said: “A key element is the community hub, which will work with and support community volunteers to identify, report and challenge online hate material. “This requires full time capacity to recruit, train and manage a group of community volunteers, who are skilled in the use of social media and able to both identify and appropriately respond to inappropriate content to build the counter-narrative.”

The report suggests using the anti-racist organisation Stop Hate UK to provide the volunteers, because of its previous experience and ability to “effect speedy mobilisation in London.” The two-year pilot scheme will cost a total of £1,730,000, with the bulk of the funding coming from MOPAC and the Metropolitan Police, supported by £453,756 from the Home Office in the form of a Police Innovation Fund Grant. The initiative comes after a spike in racism following the EU referendum that saw a 57 per cent increase in hate crime reported to the police and included social media users receiving such messages as “go home black b*tch – we voted leave, time to make Britain great again by getting rid of u blacks, Asians and immigrants.”

Prominent figures also received abuse, including the Remain-supporting black London MP David Lammy who called police after reportedly receiving a death threat via social media. In one message he was reportedly told “I hope your kids get cancer and die” and “I wish you the same fate as that b*tch got stab” – a reference to the Labour MP Jo Cox who was killed during the referendum campaign. The online hate crime hub also comes after John Nimmo, 28, from South Shields, Tyne and Wear, was told last month that he faces jail for sending threatening emails to the MP Luciana Berger showing a picture of a large knife and telling her “watch your back Jewish scum”. The scheme is also being piloted after a report by the Tell Mama organisation – which had been due to be unveiled by Ms Cox before she was killed – found that social media was being used as a platform for calls for violence against Muslims.

Tell Mama said it had received reports of 364 “flagrant” incidents of online hate speech, harassment and threats in 2015 and said these amounted to “only a small fraction of the anti-Muslim hate on social media platforms.” But the online hate crime hub, which will be led by a Detective Inspector with the help of four other Scotland Yard detectives, has already been criticised by freedom of speech campaigners as a form of “thought police.” The Liberal Democrat leader Tim Farron told the Mail on Sunday: “We want more police on the street, not thought police. “Online bullying is an increasingly serious problem, but police should not be proactively seeking cases like these and turning themselves into chatroom moderators. “With such measures, even if well intentioned, there is a real danger of undermining our very precious freedom of speech.”

Andrew Allison, from the Freedom Association libertarian group, added: “There’s a risk of online vigilantism, where people who are offended by the least thing will have a licence to report it to the police.” Critics also pointed to cases where the police appear to have been heavy-handed in dealing with online comments. In one of the more spectacular examples, in 2010 Paul Chambers was arrested under the Terrorism Act and convicted of sending a menacing message after joking on Twitter that he would blow an airport “sky high” if it remained closed by heavy snowfall and stopped him travelling to see his girlfriend. It took Mr Chambers two years and an appeal to the High Court before his conviction was quashed.
© The Truth Seeker


UK: The 'yellowface' Snapchat filter is nothing new

Several reputable studies have concluded that the ethnic group that suffers the highest rates of unreported racist hate crime in Britain is East Asians. When the butt of the joke is dehumanised in this way, it’s only a matter of time before that butt gets kicked
By Daniel York

12/8/2016- Snapchat is being defensive about its “anime” filter which is (rightly, in my opinion) being called out as an example of “yellowface”. Yellowface is of course nothing new and neither is the defensiveness around it. People tend to dig their heels in about yellowface a lot. Indeed, I’ve argued previously that yellowface is the last acceptable bastion of racist caricature and racial appropriation. Like blackface and brownface, there are two basic forms of yellowface. There is the type that enables actors (nearly always of Caucasian descent) to portray characters who are supposed to be East Asian. Some of these actors have even been nominated for awards for dressing up in exotic costumes and perfecting stilted hybrid accents. This type of “performance yellowface” completely perpetuates the notion that actors of Caucasian descent are inherently more talented, more intelligent, more nuanced and more skilful practitioners of the thespian arts – an utterly ludicrous premise which has had to be (and continues to be) fought very hard.

After all, let us not forget that once upon a time women were not allowed on the stage either and were portrayed by young men. If anyone seriously wants to posit the argument that men playing women is somehow preferable to watching the likes of Judi Dench, Halle Berry or Juliette Binoche in action then good luck with that one. The other type of yellowface – the Snapchat variety – is obviously meant to be fun but also points up and exaggerates certain perceived ethnic “traits” which enforce stereotypes and are used to ridicule and dehumanise. It encourages people to pull back their eyes into thin slants, pronounce their l’s as r’s and force their teeth to protrude in the guise of the “comedy oriental” a la Mickey Rooney in the film version of Breakfast at Tiffany’s.

It is of course entirely false. Many, many East Asians have very large eyes, there is no greater occurrence of buckteeth in certain racial groups and, as for the r’s and l’s, let’s face it, there are sounds in all “foreign” languages that the majority of English speakers will struggle with hopelessly. But the whole point of yellowface is it reinforces a certain perceived cultural superiority: you can’t speak our language perfectly so you’re obviously a bit strange (even though you probably speak our language with far more command and dexterity than most of us would ever have yours). Both types of yellowface render people of East Asian descent as invisible ciphers with no personality or individual characteristics. Like blackface or brownface, they reinforce White Western Caucasian as the supreme “norm”; the default setting to which every other type of ethnicity is at best a quirky exotic counterpoint and, at worst, some form of hateful deviation, to be scorned, dominated and kept in its place lest it claim some form of parity in the wider “Caucasian” world.

If anyone reading feels this is in any way over-sensitive, it might be worth googling some Nazi caricatures of Jews from the 1930s. I’m sure that was all good fun back in the day but we all know how that ended up. It’s also worth remembering that several reputable studies have concluded that the ethnic group that suffers the highest rates of unreported racist hate crime in Britain is East Asians. Traditionally the most unassertive and disparate racial group, lacking any kind of media voice or presence, this is really no coincidence. When the butt of the joke is dehumanised in this way, it’s only a matter of time before that butt gets kicked. It’s sometimes argued that this kind of ridicule cuts both ways and is a basic component of humour that goes on in all cultures – but a recent Chinese detergent advert featuring a black man being “washed” into a Chinese man rightly attracted mass social media disapproval. Interestingly, the one East Asian country where you can find regular racist caricatures of white people is...North Korea.

Any other ways we want to emulate the Democratic People’s Republic? Then start caring about racial caricatures in Snapchat filters.
© The Independent - Voices


Pakistan: Cyber crime bill passed in National Assembly

The Prevention of Electronic Crime Bill 2015 was passed in the National Assembly with a majority vote on Thursday.

11/8/2016- The senate had already approved the cyber crime bill with 50 amendments on July 29 this year. Minister of State for Information Technology and Telecommunication Anusha Rehman had presented the bill earlier this year. The law envisages 14-year imprisonment and a Rs5 million fine for cyber terrorism, and seven-year imprisonment each for campaigning against innocent people on the internet, spreading hate material on the basis of ethnicity, religion or sect, or taking part in child pornography. The bill awaits the signature of President Mamnoon Hussain, after which it will become law. The bill has been criticised by civil society members and rights groups for putting curbs on freedom of expression.

14-year jail, Rs5m fine for cyber terrorism
The Prevention of Electronic Crimes Bill 2016 envisages 14-year imprisonment and a Rs5 million fine for cyber terrorism, and seven-year imprisonment each for campaigning against innocent people on the internet, spreading hate material on the basis of ethnicity, religion and sect, or taking part in child pornography, which can also entail a Rs500,000 fine. A special court will be formed for investigation into cyber crimes, in consultation with the high court. The law will also apply to expatriates, and electronic gadgets will be accepted as evidence in the special court. The bill will criminalise cyber-terrorism with punishment of up to 14 years in prison and Rs5 million in penalties. Similarly, child pornography will carry sentences of up to seven years in jail and a Rs5 million fine, with the crimes being non-bailable offences. The bill also aims to criminalise terrorism on the internet, or the raising of funds for terrorist acts online, with sentences of up to seven years in prison.

Under the law, terrorism, electronic fraud, electronic forgery, hate speech, child pornography, illegal access to data (hacking) and interference with data and information systems (DoS and DDoS attacks) would all be punishable acts. The law will also apply to people engaged in anti-state activities online from safe havens in other countries. Illegal use of internet data will carry a three-year jail term and a Rs1 million fine; the same penalties are proposed for tampering with mobile phones. Data held by internet providers will not be shared without court orders. The cyber crime law will not apply to the print and electronic media. Foreign countries will be asked to help arrest those engaged in anti-state activities from abroad.
© Geo TV


Brazilian Olympians face organized racist attacks online

'They choose the victims,' says head of Rio's cybercrime unit — and the motive may surprise you

7/8/2016- You can hear the open-air gymnasium long before you see it; the thwacks of bodies in white judo gis hitting the mat. The gym, nestled between downtown and a nearby favela, is Brazil. The competitors of all ages are black, white, brown. Everyone is equal. At least that's what Brazilian world judo champion Rafaela Silva thought until she competed in the London Olympics. Silva was favoured to win a medal, but lost unexpectedly. And if that wasn't bad enough, as soon as she went online, she got another punch in the gut. On Twitter, on Facebook, hundreds of people on social media were hurling racist abuse at her. "I was very sad because I had lost the fight," Silva says. "So I walked to my room, I found all those insults on social media, they were criticizing me, calling me monkey, so I got really, really upset. I thought about leaving judo." Brazilian police say racist cyberattacks — especially against high-profile black women — are becoming more common.

'They want to become famous'
Alessandro Thiers, the head of Rio's cybercrime unit, recently announced his officers had caught those behind a racist cyberattack against a famous black journalist. It's not random racists at work here, he says. The attacks are co-ordinated by groups led by so-called administrators. "They choose the victims and they tell those in the group to act," Thiers said. "So they organize themselves in several states, choose a target ... then people from various states attack the victim." Police say most of the perpetrators are young and middle-class, and their motive often has little to do with white supremacy. "They want to become famous," Thiers says. "In fact, they are just spoiled kids." Saying shocking things about well-known figures is an easy and often risk-free way to get the notoriety they seek, says Jurema Werneck, one of the founders of the Rio-based NGO Criola. And with the Olympics in their backyard, she fears they will now get a bigger platform. "We are not talking about fake profiles," she says. "Their profiles on the Internet are true ... they're not disguising themselves."

Werneck helps organize campaigns to stop the attacks, like a recent one in which Criola activists would find the perpetrators online and shame them by putting up billboards with their pictures near their home or work. She says if her small NGO can find the attackers, why can't police? "We find these racists easily. The police can do it too; they have more tools," Werneck says. "They're not doing a good job yet." For Silva, preparing for the Games now involves more than just practising holds and throws. She went to see a psychologist to help her deal with the hate she's bound to get online. "It has helped make me stronger and want to keep going," she says. This time, she knows what to expect. But being prepared, she says, doesn't make it any easier.
© CBC News


Facebook's walls of hate: Sickening abuse plastered online tells minorities to LEAVE UK

Sickening racist abuse regularly posted by warped trolls on Facebook — worsened since the Brexit vote — depicts ethnic minorities as "scum", "rapists" and "terrorists" and orders them to leave the UK, Daily Star Online has found.

7/8/2016- The UK voted to leave the EU in June. But last year net migration to the UK rose to around 333,000 — the second-highest figure ever. Since the vote, a Daily Star Online investigation has found, ethnic minorities have suffered shocking abuse online. And social media has represented these minorities as threats to national security and as criminals for years, it has emerged. Sick trolls and far-right groups — including the English Defence League and Britain First — disseminate online hate and hostility, particularly after major global terror attacks like Brussels 2016. The investigation found hundreds of instances of anti-Muslim hate alone on Facebook, calling Muslims "terrorists" and "rapists", claiming Muslim women are national security threats, ordering Muslims to be deported, and posts referring to a "war" between Muslims and "us".

The offenders included far-right groups, such as the English Brotherhood, but also twisted fantasists determined to spread hate. Shockingly, even councillors' posts were found to be slurs. Cllr Tim Paul Hicks, who represents UKIP on Shepshed Town Council in Leicestershire, is under investigation after allegedly making a series of anti-Muslim Facebook posts. He is accused of putting up a spate of racist images between July 10 and July 20, before the account was taken down five days later. One chilling picture allegedly showed a grenade with the caption: "Hotline to Allah. Pull pin, hold to ear, then wait for dial code." Accompanying the picture was a message saying: "ISIS HQ want to chat to you about Suicide Bomber Training School. Apparently, you missed a lesson." Another post showed a tiara placed on top of a full burka and read: "Miss Saudi Arabia". There was also an image of a dog wearing a towel as a veil. Cllr Hicks refused to comment on the allegations.

A spokesman from Progressive Leicestershire, a liberal political group, said of the posts: "They don't belong in 21st century Britain. They never have. I find it appalling." Birmingham City University carried out the harrowing research. Dr Imran Awan, associate professor in criminology at Birmingham City University, said: "The types of abuse and hate speech against Muslim communities on Facebook uncovered real problematic associations with Muslims being deemed as terrorists and rapists. "Muslim women wearing the veil are used as an example of a security threat. Muslims are viewed in the lens of security and war. This is particularly relevant for the far-right who are using English history and patriotism as a means to stoke up anti-Islamic hate with the use of a war analogy. "For example, after posting an image about eating British breakfast, a comment posted by one of the users, was: ‘For every sausage eaten or rasher of bacon we should chop of a Muslims head’. "The worry is that such comments could lead to actual offline violence and clearly groups such as this, are using Facebook to promote racial tensions and disharmony."

A spokesman from Facebook said the social media site does not tolerate direct attacks on race, ethnicity or religion. He added the site allows users to report any comment they feel is offensive and that Facebook does remove any content which is inappropriate. A spokesman from The Association of Chief Police Officers said: "We understand that hate material can damage community cohesion and create fear, so the police want to work alongside communities and the internet industry to reduce the harm caused by hate on the internet."
© The Daily Star


Europe's Radical Right Is Flourishing On Social Media

Far-right politics in Germany, France, and the U.K. flourished amid ongoing fears over migrants, terrorism and economic instability

3/8/2016- Anxious citizens across Europe are continuing to flock to their countries’ far-right fringes, posing an unprecedented challenge to established political parties throughout the region. Amid ongoing fears of migrants, terrorism and weakening job markets, support for radical right-wing parties in Europe is growing rapidly, a social media analysis by Vocativ shows. Long banished to the obscure corners of political life, resurgent populist groups in Germany, France, and the United Kingdom now boast more Facebook fans than their mainstream counterparts and have grown at a faster pace. For our analysis, we looked exclusively at the number of Facebook fans who identify as hailing from the home country of each party examined. Vocativ then tracked the growth of these online communities over the course of a year where immigrants, ISIS-inspired massacres, and national referendums dominated the consciousness of the continent.

In Germany, Europe’s leading destination for asylum seekers, fans of the ultra-right Alternative for Germany (AfD) party more than doubled to 240,000 between July 19, 2015 and July 31, 2016. By contrast, the country’s Christian Democratic Union, led by Chancellor Angela Merkel, and Social Democratic Party grew by only 17 percent (to 84,000) and 29 percent (to 87,600), respectively. The Facebook page of France’s National Front, which is led by Marine Le Pen, saw an uptick of 57 percent in the last year, to more than 290,000 fans—four times as many as the 70,000 on the page of President Francois Hollande’s Socialist Party. And in the United Kingdom, Britain First grew its Facebook community by 45 percent, topping the left-leaning Labour Party and the Eurosceptic UK Independence Party.

Events in each of these countries over the last year—coupled with looming concerns over the political stability of the E.U.—have helped to further fuel the populist, anti-immigrant, and anti-Muslim sentiment that underpins Europe’s rightward tilt. German anger over migrants and refugees reached a fever pitch in January when foreigners were accused of carrying out a string of sexual assaults in Cologne on New Year’s Eve. Terror-weary France has been battered by a series of Islamist-inspired attacks, including the deadly truck rampage in Nice last month that left more than 80 people dead. Meanwhile, Britain’s referendum on whether to break from the E.U., which passed narrowly in June, pushed nationalism and economic fears to the forefront of public life. Just how well some of these groups fare politically will soon be tested. Germany holds regional elections next month. France’s presidential election will take place in April and May of 2017.
© Vocativ


USA: Neo-Nazi Hacker Distributes Racist Fliers Calling for the Death of Children

For the second time in a year, neo-Nazi hacker Andrew "weev" Auernheimer appears to have targeted flaws in printer networks to distribute racist fliers. This time, he's calling for the killing of children.

3/8/2016- Andrew Auernheimer, the notorious neo-Nazi black hat computer hacker better known as “weev,” claims to have targeted 50,000 printers across the country to distribute hate-filled fliers that call for the killing of black and Jewish children. “I unequivocally support the killing of children,” Auernheimer wrote in the flier. “I believe that our enemies need such a level of atrocity inflicted upon them and their homes that they are afraid to ever threaten the white race with genocide again.” He continued: “We will not relent until far after their daughters are raped in front of them. We will not relent until far after the eyes of their sons are gouged out before them. We will not relent until the cries of their infants are silenced by boots stomping their brains out onto the pavement.”

It is unclear what prompted the flier, though Dylann Roof, who will soon face trial for allegedly murdering nine black people during a church service in Charleston, S.C., in June 2015, seems to have been a motivation. “I am thankful for his personal sacrifice of his life and future for white children,” Auernheimer wrote. “In honor of Dylann Roof, I will be growing out a bowl cut in solidarity for his trial.”

Auernheimer also praises Anders Breivik, who killed 77 people in 2011 in separate attacks in Oslo, Norway, and at a nearby children’s summer camp as a political statement against immigration. Auernheimer describes Breivik as a Nordic warrior, comparing him to the protagonist of the poem Volundarkvida, in which the main character kills the sons of his captor and rapes his daughter after being imprisoned. Like the protagonist of the poem, Auernheimer served a brief stint in prison, after he was convicted of one count of identity fraud and one count of conspiracy to access a computer without authorization for exposing a flaw in AT&T security which allowed the e-mail addresses of iPad users to be revealed. Breivik seems to be a fascination of Auernheimer’s. Responding to Breivik’s appeal to receive internet access while he’s incarcerated, Auernheimer created the hashtag “#BreivikOnline” to draw attention to Breivik’s inability to go online.

Andrew Anglin, the founder of The Daily Stormer website that refers to non-whites as “hordes" and Jewish people as “Bloodthirsty Jew Pigs,” also published a blog post yesterday mirroring Auernheimer’s demand for Breivik to have internet access. This is not the first time Auernheimer has distributed violent, hate-filled fliers. Earlier this year, he blasted unprotected printers at colleges, universities and office networks across the country with swastika-adorned fliers promoting an anti-Semitic message.
© The Southern Poverty Law Center


Instagram Will Feature a Hate Filter to Stop Harassment

Instagram is said to be working on bringing a hate filter to the social networking platform to stop harassment. What it means is that soon enough, users will be able to filter their comment stream and to turn off comments on their posts. This tool should provide ways to stop cyber bullying.

1/8/2016- Online harassment is a recurring issue in our day and age. Anyone with access to the internet for longer than a week has undoubtedly personally experienced or seen how others get bullied over the web. Instagram should be a fun, friendly environment, but problems like these do arise as much as anyone tries to combat them. Instagram already has general policies created to flag specific offensive words or phrases. However, the new feature will allow users to take matters into their own hands and control their account content as they wish. The hate filter to stop harassment works in a simple way. Instagram account holders will be able to change their settings so that they can filter the comments they receive and, if they’d rather, completely turn off others’ ability to post comments on their account. This way, all users can individually set up their account in such a way that personally offensive content gets ignored.
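Mechanically, the feature described above amounts to a per-account setting plus a keyword check applied to each incoming comment. A minimal sketch under assumed names — the `AccountSettings` fields and filter logic here are illustrative, not Instagram's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class AccountSettings:
    # Both fields are choices made by the account holder.
    comments_enabled: bool = True                       # turn comments off entirely
    blocked_keywords: set = field(default_factory=set)  # owner-chosen filter terms

def visible_comments(settings, comments):
    """Return only the comments that should be shown under a post."""
    if not settings.comments_enabled:
        return []  # owner has switched comments off
    def is_clean(comment):
        words = comment.lower().split()
        return not any(kw in words for kw in settings.blocked_keywords)
    return [c for c in comments if is_clean(c)]
```

For example, with `blocked_keywords={"loser"}` the comment "what a loser" is hidden while "nice photo" remains; with `comments_enabled=False` nothing is shown at all.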

The new feature is set to arrive on high-profile accounts first, but all users will see the changes in the upcoming months. High-volume accounts can bring the social networking service a great deal of valuable feedback in a shorter period of time. The post-by-post comment filter should roll out to all accounts soon enough. According to the Pew Research Center, about 60 percent of internet users have seen someone being called offensive names. Another 53 percent of users have witnessed efforts made by some individuals to embarrass someone else. Around 25 percent of web users have seen someone being physically threatened, and some 24 percent have seen someone being harassed continuously over a prolonged period. Furthermore, approximately 27 percent of internet users have personally been called offensive names, and 8 percent of them have been physically threatened or even stalked.

These worrying statistics call for more policies and efforts to put an end to online harassment. Moves like Instagram’s, and those of other networking websites, raise awareness of a serious issue that must be addressed further.
© The Next Digit


Anti-Semitic hatred is now part of daily life for Jews online

No-one does anything to stop it
By Stephen Pollard

31/7/2016- Not so long ago, the likes of John Nimmo would be living in well-deserved obscurity. Nimmo is a misogynist racist who has a penchant for sending threatening messages to women. Before the internet and the advent of social media, he would doubtless have festered alone in his South Shields bedroom and his hate would have been shared only with whichever other losers he happened to speak to. But in our digital age, Nimmo can make contact with pretty much anyone at the touch of a button. Two years ago he did exactly that to Labour MP Stella Creasy and feminist campaigner Caroline Criado-Perez, sending them abusive tweets and receiving an eight-week prison sentence for doing so. Now he is at it again, this time sending anti-Semitic death threats to the Liverpool Labour MP Luciana Berger. She would, he told her, “get it like Jo Cox”. He warned her: “watch your back Jewish scum, regards your friend the Nazi”, along with a picture of a large knife.

Ms Berger told the court where Nimmo is being tried that his words caused her “great fear and anguish”. She said the tweets left her in a state of “huge distress” and “caused me to feel physically sick being threatened in such a way.” I imagine that you are shocked to read about such behaviour. No decent person could fail to be. But Ms Berger won’t have been. I certainly wasn’t. Nor will any prominent Jew. Not because the behaviour is in any way acceptable. Rather, because it is so run-of-the-mill. Ms Berger receives anti-Semitic abuse every day. In spades. Indeed, you will not find a single prominent Jew with a Twitter or Facebook account who does not regularly receive anti-Semitic abuse. When I wake up and check my Twitter feed it rarely contains fewer than ten anti-Semitic messages. More often than not it’s far more. Another 20 or so come during an average day. And that’s after I have blocked over 300 different tweeters – a number that increases every day.

Some even amuse me, such as the recent claim that I “lead British Zionists with their propaganda to enable them to control UK.” Another tweet informed the world: “Pollard is the chief protagonist of Zionist supremacism in UK. He controls MSM.” MSM is an acronym for mainstream media – which means I apparently control all British media. Which would be really useful, if it were true. Sadly, I can’t even control my own kids. Some are threatening. One notorious anti-Semite that I had previously blocked started informing her followers that I was in the habit of ringing her voicemail and had left abusive messages threatening to rape her. She also posted a tweet suggesting that someone “pop” me off. In my experience, the police have been entirely useless. Last year I had to explain what Twitter was to two PCs from the Met who had been sent to talk to me about a threat I had reported. Though they had heard of it, they had no real idea what it was.

This is an epidemic of hate. And with the odd exception, such as the clear death threat to Ms Berger, nothing is done about it. Certainly not by Twitter. I have given up reporting the culprits, since not once has Twitter taken any action against them. Free speech, innit? But one thing puzzles me. Have the likes of Nimmo always been with us, and has social media simply given them a tool and a voice they didn’t have before? Or has social media itself raised the temperature and caused much of the epidemic? For most of my 51 years, anti-Semitism was something I encountered only fitfully; the odd unthinking throwaway remark or “joke”. Certainly nothing that would give me pause for thought. But the past few years have been different. I have not gone a day without encountering it. As a journalist, I have reported the spate of such comments from Labour members with astonishment that anti-Semitism can have entered the language of a mainstream party, however marginally.

My hunch is that it has always been there, but we simply never heard it. In the years after the Second World War, no one voiced anti-Semitism, even if it lay buried deep within their psyche. Even Jewish jokes were rarely told in polite company. But as memories faded and the Holocaust grew further away, social wariness of Jew-hate dissipated. History then reasserted itself. It’s not called the longest hatred for nothing. And the kind of anti-Semitism that once remained private, behind closed doors, now has the megaphone of social media. And that, we surely know, is not going anywhere.
Stephen Pollard is editor of the Jewish Chronicle
© The Telegraph


UK:Far right targets Muslim women in Facebook hate campaigns

In hundreds of postings Islamophobes spread hate speech to foster violence against UK's Muslims.

28/7/2016- Islamophobes are targeting Muslim women in online hate campaigns, according to a new study. A Birmingham City University study examined hundreds of Facebook pages, posts and comments as part of an extensive survey of the spread of anti-Islam hate speech online, including those associated with far right groups Britain First and the English Defence League. Researchers found 500 instances of Islamophobic abuse, in which Muslims were branded terrorists and rapists, alleged to be waging "war" on non-Muslims, and in which calls were made for Muslims to be deported, as part of a campaign to "incite violence and prejudicial action." Women wearing Islamic dress are branded a "security threat." There is evidence of the hatred spilling into attacks and real life abuse, with a 326% surge in Islamophobic incidents recorded last year; more than half of the victims were women.

Researcher Imran Awan said that the recent murder of MP Jo Cox and the surge of racist attacks in the wake of the Brexit vote showed the urgency of tackling online hate speech. "What it has shown is that the far right and those with links and sympathies with the far right were using Facebook and social media to in effect portray Muslims in a very bad and negative fashion," Awan said. "After Brexit people have felt much more empowered and confident to come and target Muslims and others in racist hate attacks. This was all playing out on social media but no one looked at it. If Facebook had been monitoring this racism, then I'm not saying they could have stopped the racist attacks, but it certainly could have given them an insight into the racist people using their platforms." Online abuse surged after events such as the murder of soldier Lee Rigby by two Islamic extremists in 2013, or the sex abuse cases in Rotherham, according to the study.

It found that 80% of the abuse was carried out by men, who singled out Muslim women for attacks, with 76 posts portraying women wearing the niqab or hijab as a "security threat." The next most frequent form of abuse called for Muslims to be deported, with 62 instances recorded. It identifies five kinds of online Islamophobe, from the 'producers' and 'distributors' seeking to create "a climate of fear, anti-Muslim hate and online hostility," to the 'opportunists' who will spread anti-Muslim hate speech in response to a specific incident, such as atrocities committed by terrorist group Isis. Also responsible are 'deceptives', who will concoct rumours and false stories to whip up Islamophobic hatred, such as the rumour Muslims wanted to ban cartoon character Peppa Pig, and 'fantasists', who fantasise about Islamophobic violence and make direct threats against Muslim communities.
On Tuesday, Home Secretary Amber Rudd announced the launch of a campaign to combat hate crime in the UK, with Her Majesty's Inspectorate of Constabulary to review the way hate crimes are reported and investigated by police in England and Wales. It comes with more than 6,000 hate crimes recorded by police in the wake of the 23 June EU Referendum. The Muslim Council of Britain recorded 100 crimes in the weekend after the referendum. Islamophobia monitoring group Tell MAMA found a 326% increase in Islamophobic incidents last year, with Muslim women "disproportionately targeted by cowardly hatemongers." "We have known that visible Muslim women are the ones targeted at a street level, but what we also have seen in Tell MAMA, is the way that Muslim women who are using social media platforms, are targeted for misogynistic and anti-Muslim speech.

In particular, there is a mix of sexualisation and anti-Muslim abuse that is intertwined which also hints at perceptions and attitudes towards women in our society," said Tell MAMA director Fiyaz Mughal. "We are also aware from our work in Tell MAMA, that the perpetrators' age range has dropped significantly from 15-35 to 13-18, showing that anti-Muslim hate in particular is drawing in and building a younger audience, which is daunting for the future. We need to redouble our efforts if we are to have social cohesion in our society and we also need to ensure that women feel protected and confident enough to report such hate incidents."

Facebook needs to do more to tackle race hatred
Facebook recently signed up to a new European Union code of conduct obliging it to remove hate speech from its European sites within 24 hours. Awan said that UK authorities and Facebook needed to do more to combat the problem. "I think police have a really tough job in the sense that in my understanding it is like finding a needle in a virtual haystack, and they are not clued up enough. I don't think they have enough training to look at social media posts, police need to be trained on what to look at," he said. A College of Policing spokesman said: "We are working with the Crown Prosecution Service, partners and police forces to raise awareness and improve the policing response to hate crime. This will ensure offenders can be brought to justice and evidence of their hostility can be used to support enhanced sentencing. "The College has developed training for police forces to issue to officers and staff and published Authorised Professional Practice, which is national guidance, for those responding to hate crime.

"In addition, more than £500,000 has been awarded to the University of Sussex and the Metropolitan Police through the Police Knowledge Fund to pilot a study that will examine the relationship between discussions of hate crime on social media and data relating to hate crime that has been recorded by police. The fund allows officers to develop their skills, build their knowledge and expertise about what works in policing and crime reduction, and put it into practice." Facebook says that it will not tolerate content that directly attacks others based on race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition, and its policies try to strike the right balance between giving people the freedom to express themselves and maintaining a safe and trusted environment. It said it has rules and tools people can use to report content that they find offensive. IBTimes UK has contacted Facebook for comment.
© The International Business Times - UK


Headlines July 2016

Wikileaks denies anti-Semitism over (((echoes))) tweet

If any one form of discriminatory social media expression has been on the rise in recent months, it’s been anti-Semitism.

24/7/2016- The Donald Trump presidential campaign’s well-documented white nationalist and Neo-Nazi following continues to bring such hatred to the forefront. Trump himself had even retweeted things from members of the “white genocide” movement, and in June, the campaign tweeted out an anti-Semitic meme that originated from the alt-right fever swamps of social media. On Saturday, a completely different organization seemed to dip its toes in those waters, too. Wikileaks started tweeting about (((echoes))), and it’s generated a great amount of controversy. It’s one of the increasingly well-known methods of harassment used by anti-Jewish racists on Twitter, which has exploded into wider visibility in recent months: tweeting at Jews and bracketing their names with two or three parentheses on either side.

It’s intended both as a signal to other anti-Semites and neo-Nazis, to highlight the target’s Jewish heritage (or perceived Jewish heritage, since racists aren’t always the sharpest or most concerned with accuracy), and track them on social media, making it even easier for other anti-Semites to join in on the abuse. After the phenomenon became more widely discussed in the media, many Jews and non-Jews alike began self-applying the parentheses on Twitter names, in a show of anti-racist solidarity. That’s where Wikileaks comes in. On Saturday, amid the group’s high-profile dump of thousands and thousands of emails from the Democratic National Committee, its Twitter account said something very suggestive about its critics. The tweet has since been deleted, going against Wikileaks' perceived notion of radical transparency. Nevertheless, screenshotters never forget.

It’s not exactly the most coherent tweet, but the thrust is nonetheless pretty clear: Wikileaks accused most of its critics of having the (((echoes))) brackets around their names, as well as “black-rimmed glasses,” statements that many interpreted, plainly enough, as “most of our critics are Jews.” The Wikileaks account subsequently tweeted some explanations of what the offending tweet meant, suggesting that “neo-liberal castle creepers” had appropriated the racist-turned-anti-racist solidarity gesture, turning it into “a tribalist designator for establishment climbers.” A clarifying tweet also misspelled “gesture” as “jesture,” which further stoked accusations of witting anti-Semitism. Wikileaks ultimately defended the decision to delete the tweets, saying they’d been intentionally misconstrued by “pro-Clinton hacks and neo-Nazis.” It’s also been maintaining a pretty aggressive public relations posture regarding these latest leaks. It threatened MSNBC host Joy Reid for tweeting that she planned to discuss an “affinity” between the group and the Russian government on her show, saying “our lawyers will monitor your program.”

So, again, not the best tone for a group dedicated to prying open closed organizations, regardless of their desires. It also responded to an article by Talking Points Memo’s Josh Marshall, investigating alleged ties between the Trump campaign and Vladimir Putin, accusing him of “weird priority” for focusing on the method of the correspondences' release rather than the data dump itself. Wikileaks has also accused Twitter as well as Facebook of censoring information about the DNC emails, highlighting DNC email-related posts that were flagged as “unsafe.” Facebook CSO Alex Stamos subsequently stated on Twitter that the problem had been “fixed,” however, and there’s no shortage of Facebook links out there directing people straight to the leaked materials. Twitter similarly denied the allegations in a tweet from its public relations account.

The Wikileaks brouhaha wasn’t the only instance this weekend of a controversial, perceived piece of anti-Semitism on Twitter getting immediately rolled back and explained away. The Trump campaign landed in yet another such situation on Sunday morning, when General Mike Flynn, once considered by Trump for his vice presidential selection, retweeted someone who accused “Jews” of misleading people about the origins of the DNC email leak. Flynn has since apologized, saying he only meant to retweet a link to an embedded CNN article about the leak.
© The Daily Dot


Leslie Jones: 'Hate speech isn't freedom of speech'

Leslie Jones insists she didn't leave Twitter when she was subjected to racist messages, she just signed out "to deal with what was going on".

22/7/2016- Leslie Jones was stunned by the injustice of being targeted by a gang of Twitter trolls. The actress took a break from the micro-blogging site this week after being bombarded with highly-offensive and racist messages online surrounding the release of her new movie, an all-female reboot of 1984 classic Ghostbusters. Appearing on Late Night With Seth Meyers on Thursday night (21Jul16), Leslie admitted that while she has experienced bullying before during her career, it was the amount of comments she was receiving that caused her to take action. "What’s scary is that the insults that hurt me, unfortunately I’m used to the insults, that’s unfortunate, but what scared me was the injustice of a gang of people jumping against you for such a sick cause," she said. "I mean, everybody has an opinion and it all comes at you at one time, they really believe in what they believe in, and it’s so mean. Like, it’s so gross and mean and unnecessary."

Leslie made the decision to retweet some of the most hateful messages to show people the nature of the vitriol. After that, many people accused her of preventing them from having freedom of speech, but the actress powerfully exclaimed: "Hey, hate speech and freedom of speech - two different things." After she revealed the nature of the cyberbullying, Twitter CEO Jack Dorsey got in touch to offer his help to Leslie in dealing with the haters. But Leslie admits it was much easier convincing Facebook to step in than Twitter. However, she is now hopeful that by making a stand, she has raised awareness that this type of bullying does go on - and it needs to be stopped. "So it was just one of those things, I was like, ‘OK, so if I hadn’t said anything, nobody would have ever known about this. All those people still would have an account,’" she mused.

Leslie also used the interview to insist that her absence from Twitter was always meant to be temporary. In fact, the actress laughed that she was stunned by the headlines announcing she had quit the site. "I didn’t leave Twitter. I didn’t leave - I just signed out because I wanted to deal with what was going on. And then I went to bed, woke up the next morning and I was like, 'They said I left Twitter! I didn’t leave!'" she said.
© The Belfast Telegraph


Russia: Moscow Police Report 86% Rise in Online Hate Crime

19/7/2016- The Moscow police have recorded an 86 percent rise in online extremism in the first half of 2016, compared to the same period last year, the Interfax news agency reported. Anatoly Yakunin, the head of Moscow's Interior Ministry, announced that all forms of extremist crimes had risen 25 percent in the first half of the year but that the 86 percent rise in online extremism caused particular concern. Yakunin told journalists in March that combating extremism would be the Moscow police’s highest priority in 2016. Vladimir Markov, deputy head of Russia’s Interior Ministry, explained in March that nationwide rises in extremist crime do not represent a worsening of the situation, the RIA Novosti news agency reported. “In fact the situation has become more stable,” he said. Markov put the rise in extremist crime down to “new crimes concerning online extremism coming into force, as well as increased police surveillance online and increased competency of police officers and investigators in this field.”
© The Moscow Times


UK: Jewish groups join campaign to battle online hatred

The initiative has gained the support of key community groups who fight against anti-Semitism

19/7/2016- Key community bodies have joined a campaign to tackle online hatred, including anti-Semitism. ‘Reclaim the Internet’ is an initiative to fight against abuse on the web, launched by Labour MP Yvette Cooper during a conference on Monday. It has gained the backing of the Community Security Trust, after 159 cases of anti-Semitic incidents on social media were reported to them in the last year. Speaking to Jewish News, CST said they were “very happy to lend our expertise at its opening panel event. The social media organisations are moving in the right direction, but overall the responses remain inconsistent and inadequate, especially regarding mass campaigns against individuals. CST reminds anyone encountering anti-Semitism online, to please report it.” The initiative has also been backed by the Holocaust Educational Trust. Karen Pollock, Chief Executive, said it is “proud to support #reclaimtheinternet.”

“All too often, social media is used to spread all forms of hate, including antisemitism – as we are all too familiar with. The internet can be a huge force for good, bringing people together across the world, and it is our duty to speak out and stamp out the type of vindictive behaviour that fosters intolerance in society.” Campaign Against Antisemitism commented: “It’s vital that Jewish people feel safe from anti-Semitism online. Victimising and bullying Jewish people either offline or on the Internet has a deeply negative effect on both the individual targeted and the wider community. Zero tolerance of anti-Semitism needs to be more than just a slogan, and must apply across the board in the form of action taken against abusers.

“Anti-Semitic hatred is rampant on the Internet and though the police response leaves much to be desired, it is the Director of Public Prosecutions who is ultimately not taking the necessary action to stop it. Our teams are working hard to invert this problematic status quo.” Jewish Labour MP Luciana Berger was among high profile figures supporting the new campaign. The Liverpool Wavertree MP has been regularly targeted with anti-Semitic abuse.
© The Jewish News UK


Netherlands: Facebook temporarily shuts down nationalist page with 252,000 likes

19/7/2016- Facebook closed down the Netherlands’ biggest far-right supporters’ page for a time, the NRC said on Tuesday. The page, Nederland mijn Vaderland (the Netherlands, my fatherland) had 252,000 likes and was taken offline on Saturday, the paper said. It was back up and running on Tuesday afternoon. The page was launched in 2004 on Hyves, a Dutch social media network which has since closed down. Thomas van Elst, one of the page’s four moderators, told the NRC that Facebook had not given a reason for closing down the page. He said the page’s supporters ‘want to keep the Netherlands as it is,’ and that immigration, refugees and the EU were popular topics. Another of the page’s moderators was behind a call earlier this year to ‘wave goodbye’ to Sylvana Simons, the black television presenter who campaigns against racism and the character of Zwarte Piet, the NRC said. Between March and June, three other nationalist Facebook communities were taken offline with combined likes of 200,000. A fourth website, campaigning against racist stereotyping, has also been removed.
© The Dutch News


US government allowed to plead in Facebook data case

19/7/2016- The US government can take part in a case against Facebook on data transfer from Europe to the US, the Irish high court said on Tuesday (19 July). The case was brought by Austrian activist Max Schrems. It was formally opened last October after the European Court of Justice (ECJ) struck down an EU-US data protection agreement known as Safe Harbour. It will determine whether European internet users' data is sufficiently protected from US surveillance. The court's decision will allow the US government to defend its legislation before the ECJ. “The United States has a significant and bona fide interest in the outcome of these proceedings”, said high court judge Brian McGovern. He explained that "the imposition of restrictions on the transfer of such data would have potentially considerable adverse effects on EU-US commerce and could affect US companies significantly”. Schrems said in a statement that the US participation in the lawsuit showed that he had "hit them from a relevant angle". “The US can largely ignore the political critique on US mass surveillance, but it cannot ignore the economic relevance of EU-US data flows," he said. The court's decision comes a week after the European Commission launched a new data protection agreement with the US called Privacy Shield. The commission said the deal provides new guarantees that Europeans' privacy will be better protected. But Schrems said it did not address concerns raised by the ECJ when it struck down Safe Harbour.

The commission "knows it will sink sooner or later," he said about Privacy Shield.
© The EUobserver


USA: Man charged with assault after homophobic Periscope rant leads to arrest

A Detroit man was charged on Sunday after he was accused of hurling gay slurs and pointing a gun at another man, then posting a video of the incident on social media.

17/7/2016- Stephen Drake Edwards, 20, who turned himself in to police Saturday after posting an apology on Periscope, was charged by the Wayne County Prosecutor’s Office with felonious assault, carrying a concealed weapon in a motor vehicle, and felony firearm. Edwards was driving his car around 3 p.m. Tuesday on the 22000 block of Lyndon and yelling threatening and derogatory remarks while pointing a weapon at a 23-year-old man as he walked down the street, prosecutors said. “The remarks made by Edwards were derogatory statements referring to the victim's sexual orientation. The victim was able to get away from Edwards,” the Prosecutor’s Office said in a statement. Edwards posted the incident on Twitter. An investigation by Detroit police led to the identification of Edwards.

In the video, a man believed to be Edwards was in a car and using gay slurs when he called out to the victim as he left a store. When the victim approached the vehicle, Edwards pointed a gun at the victim and told him to take off his pants. He did not fire the weapon in the Twitter video, and no one was injured. The video, which was originally reported by BLAC Detroit magazine, ended shortly afterward. Later, on Periscope as @Binswanson, Edwards posted a 48-minute video in which he says he would have killed the victim if he had taken off his pants. On Saturday, Edwards turned himself in to police. Before that, he posted to his Periscope account what he called “APOLOGIES to LBGT COMMUNITY.” In the four-minute video, he said he is the one in the video hurling the gay slurs and pointing the gun, but said “physically it wasn’t me. I was intoxicated.” “I apologize though. I got my whole family looking at me. I don’t even know why I did it. I’m going to go turn myself in,” he says while recording the video from the passenger side of a moving vehicle.
© The Detroit News


Homemaking with Nazis: The Bizarre Domestic Underbelly of Race Hate Websites

By Alex Blake

14/7/2016- What do you picture when you hear the word ‘racist’? Maybe a jackbooted skinhead marching with Nazi flag in tow? Or perhaps an old granddad who sits you on his knee and tells you about the time he first took a swing at someone of a different race? How about...a wholesome stay-at-home mum about to bake a fresh batch of cakes for you and your friends? Granted, the latter isn’t what most people would imagine. But then most people have probably never delved into the weird world of Nazi homemaking advice. Racists like pie and tidy dining rooms too, you know. To explore the mind of a homemaker with malice on the mind, we’re going to be stopping off at two of the internet’s largest hate sites: Stormfront and Vanguard News Network. With hundreds of thousands of members between them (and many more lurking guests besides), these are the places to be if you want to vent about that slightly Arabic-looking guy who looked at your daughter funny. Like a BNP voter's front room, but online.

Stormfront (and Where to Find the World's Greatest Cheese)
The first stop on our magical mystery tour is Stormfront, the world’s largest hate site with over 300,000 registered users. Its members past and present have included such luminaries of the white nationalist world as former Ku Klux Klan leader (and Stormfront founder) Don Black; Holocaust denier, former Louisiana state representative and KKK Grand Wizard David Duke; Norwegian mass murderer Anders Breivik; as well as your run-of-the-mill neo-Nazis, white supremacists and other race hate types. As you'd imagine, much of the content on Stormfront is made up of posts eulogising dead Nazis, attacking minorities, discussing race 'science' and lamenting (or celebrating) current events and the way of the world. However, tucked away among the vitriol and hate there is a side of the website that you suspect might be there accidentally: homemaking advice.

What brand of beans should one stock in their pantry to prepare for the imminent global race war? Where can you buy energy-efficient lightbulbs from non-Jewish-owned businesses? Do any Stormfront members offer plumbing services so I can have a nice white Nazi come fix my leaky pipes? These are the sorts of threads that populate Stormfront's homemaking section. A common theme is what to do when the world flips over and everyone joins gangs and paints themselves like in Mad Max. Obviously stockpiling white-friendly food is a must (no tacos please). Beans, pasta, powdered milk and tinned vegetables are all in. But the forum’s (even more) weird edges can’t be hidden for long. You know you’ve found a gem when you see a thread titled ‘The World’s Greatest Cheese Resource’ in a Neo-Nazi forum, and it does not disappoint. After the original poster shares the link, user Jeremy miller chimes in: “Mother’s milk is used in Europe”. OK.

That really gets user Kostadina going. “Sounds like a creepy feminist performance art thing to do,” they opine. “I see nothing about “Europe” but a lot about JEW YORK. Which I am not reproducing here. The Village Voice even thinks it's disgusting, which says a lot.” “This should be a product the White pride enterprises could make for our own consumption so that we are not always having to buy this product from the enemy,” says Defend Out Homeland, helpfully. “We make our own dairy products from goat milk. This way we avoid the enemy,” agrees Tenaj. With the Great Cheese Villain duly avoided, our Aryan friends retire to chew on white-approved gorgonzola, or something. But in all seriousness, when even a discussion on the virtues of fromage quickly descends into talk of ‘the enemy’ and ‘JEW YORK’, you know we’re dealing with people in a constant state of paranoia, enemies everywhere they look.

And what better way to thwart these enemies? By raising your kids the ol’ fashioned (racial) way – after all, in the words of convicted murderer and racist hero David Lane, “We must secure the existence of our people and a future for white children” (AKA the ‘Fourteen Words’). To further this aim, user Keelan has a bright idea: “I’ve been kicking around ideas for a European Heritage Coloring Book for Children.” You know, like Dora the Explorer, except with no Hispanic people. Aside from the attempted racist humour (“Make sure you leave out the brown crayons!”, says ADAMANT), most posters in the thread are delighted. “This is a beautiful idea and I for one would like to know when I can order STACKS of these books for my grandchildren”, gushes BeautynBrains1488. “This is a wonderous idea!”, comments RebelGirl91, before getting to the clincher: “I don't have any kids, but I'd save this for when I do have some… we need to start teaching our children at young age!” A fear for the future, as reflected in the Fourteen Words, is an overwhelming driving force for white nationalists the world over. They see a world of changing demographics and they panic. When your mind is so worried for later generations, you do what you can to teach them what you believe, so your ideas don’t die with you.

VNN (and the History of the Sausage)
VNN (tagline: ‘No Jews. Just right’) is the younger, brasher, uglier brother of Stormfront, run by the chronically unpleasant Alex Lindner. While the Stormfront mods put up a weak show of being a family-friendly ‘civil rights for whites’ website, VNN is the proud thug who flies a swastika flag from his bedroom window. Speaking of which, unlike on Stormfront, Nazi flags are permitted on VNN, as are the vilest racial epithets. Repugnant it might be, but VNN does have unintentional comedy moments of its own. Alongside beauties like ‘#1 Coconut Oil Thread’ (as opposed to all those other inferior coconut oil threads) and ‘The Mysterious Origins of a Food That's Always Been Funny: The Sausage’ lies the curiously-confessional title ‘I Eat Ruffle Chips Smothered in BBQ Sauce’. Where Stormfront is somewhat practical, VNN is just odd.

In a thread discussing food waste in America, user Nate Richards comments insightfully: “…most of what I eat comes from the salvation army. You don't have to sign up at this one, it’s not a ‘food box’ handout based on income. Anyone can come load up on this stuff… You can barter these to non-whites and mental defectives.” So thoughtful! User Crowe followed up by ruminating on the five-year-old peanuts he just ate: “They weren't as crunchy as fresh ones, but it didn't make me sick or give me the shits.” He was having a merry old time until user James Dovery popped up with: “One of thee most cancerous molds [sic] grows on peanuts.” Crowe didn’t post again in that thread.

“The most important decision a woman makes in life is who she lets get on top of her.”
But as any self-respecting white nationalist will tell you, schools are merely propaganda centres teaching anti-white lies. Better to homeschool your kids, and what better teacher is there than Alex Lindner himself? Here’s what he has to say about homeschooling girls: “The most important decision a woman makes in life is who she lets get on top of her.” Please, Alex, do go on. And go on he does. He recommends teaching girls a (true) story of a man who tore his scrotum on a piece of machinery and then stapled it together again. Apparently, this aptly demonstrates the “incredible intensity and impersonality of the male sexual drive”. But that’s not all: “draw out the lessons of his self-surgery and eventual reporting to a formally qualified surgeon for better repair - as these highlight the strengths and weakeness [sic] of masculine toughness.” Truly, a lesson for us all (but mostly girls, we assume).

Why do these homemaking forums exist? What can they tell us about the racist community?
Your initial reaction to these threads was probably along the lines of a big fat ‘WTF’. I know mine was. Nazis are thugs, right? So why are they interested in discussing cheese-making and insulating their lofts? To the average Joe raised on the Nazi caricature, this all seems very odd, even a little disorientating. It’s certainly not what we’ve come to expect from such racist malcontents. But if we’re to combat the pervasive spread of far-right ideas, we need to move beyond the concept of the cartoon-cutout racist, the one with the goosestep and stiff Nazi salute. That simply doesn’t reflect the reality. These people are not going to B&Q or Mumsnet for their advice; they’re frequenting a race hate website. That’s because, to them, racism is more than a simple idea that they can compartmentalise in their brain and forget about; it's a way of life. This is their community, as it were.

Reinforcing that, a survey of Stormfront's women-only forum by Tammy Castle PhD and Meagan Chevalier found that most women posters used the sub-forum as a form of social media, to connect with other women who share their views. More broadly, Don Black himself has stated that he uses Stormfront to reach like-minded people whom he otherwise would not be able to contact. Certainly the combination of increased reach and anonymity afforded by the internet is a strong force that binds people whose views are not normally popular within polite society. When you’re so despised by the world around you, you learn to distrust it. You stop consuming any kind of mainstream media (run by Jews, they say), you stop associating with those who find your views objectionable, and you retreat into the only community that welcomes you – other racists.

That’s one reason why, for all our efforts to counter it, racism is still alive and well. The people who go to Stormfront and VNN seeking homemaking advice are happy being outsiders – it’s their identity. They go online to ask for help in what seems like an inappropriate place because, in actuality, it’s the most appropriate place, because it’s the one place that they can associate with likeminded people. This is important to understand, because not every racist wears Klan robes. They’re university lecturers, politicians and historians. They’re the friendly neighbour whose political beliefs are only revealed after he murders an MP. As the Brexit aftermath has shown, racism is alive and well in the UK when most of us thought it moribund. If we’re only on the lookout for the swastika-wearing skinhead, we’ll miss half the people out to divide, not unite, our beautifully diverse communities.

Not every racist is as easy to identify as Curtis Allgier or Bryon Widner. But understanding the ties that bind them together – including discussions of cheese and homeschooling – can help us fight them more effectively.
© Gizmodo UK


UK: Xenophobia on Twitter: tracking abuse in the wake of Brexit

More than 250,000 tweets were sent from the UK referring to migration or migrants between June 22 and 30. But what does the data really show?
By Carl Miller

13/7/2016- June 23, 2016 is a day many of us will remember with either a smile or shiver. But the political battle wasn’t just fought in television studios, tub-thumping speeches and a blizzard of leaflets – the EU referendum also dominated the digital world. Politics is changing. Our political lives are moving online, from the noble, generous and high-minded to the vicious and terrible. Social media is a crucial new political battleground, alive with all the arguments and disputes the referendum provoked, and now still raging in the aftermath of the decision itself. At Demos we’ve spent months looking at the digital side of politics for Channel 4, with particular interest in how it related to migrants and minority groups. As referendum day approached, we were determined to get a clear understanding of what was happening on the world’s most open social network: Twitter. How were people using it to express support and solidarity for migrants and religious and ethnic minorities? And how were they using it to attack them?

Immigration formed a massive part of the Leave campaign, both on social media and on the ground. More than 250,000 tweets referring to migration or migrants were sent from the UK between June 22 and 30. As it became clear that we were actually going to leave the EU, discussion about immigration soared. Much of this was simply people trying to come to terms with what Brexit actually meant. Over roughly the same period, however, about 16,000 tweets used a term or hashtag associated with xenophobia. Most of these, more than 10,000, voiced support for migrants and challenged xenophobic hate; the remaining 5,000 or so were themselves xenophobic, and they formed a lingering, sinister background in the run-up to the referendum and in the days after. Unfortunately, xenophobia doesn’t stay within the boundaries of social media.
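The coarse counting described above can be illustrated with a simple hashtag filter. This is a sketch only: the article does not publish Demos's actual classification method, so the tag lists and the bucketing rule below are hypothetical placeholders (the supportive tags borrow the two hashtags the article mentions; the xenophobic tag is a neutral stand-in).

```python
# Hypothetical hashtag lists -- not Demos's real lexicon.
SUPPORT_TAGS = {"#safetypin", "#postrefracism"}   # hashtags named in the article
XENOPHOBIC_TAGS = {"#sendthemback"}               # invented placeholder term

def bucket_tweet(text: str) -> str:
    """Crudely bucket a tweet by the hashtags it contains."""
    tags = {word.lower() for word in text.split() if word.startswith("#")}
    if tags & SUPPORT_TAGS:
        return "supportive"
    if tags & XENOPHOBIC_TAGS:
        return "xenophobic"
    return "other"

sample = [
    "Proud to wear my #safetypin today",
    "#sendthemback now",
    "Just ordinary chatter about football",
]
counts = {}
for tweet in sample:
    label = bucket_tweet(tweet)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'supportive': 1, 'xenophobic': 1, 'other': 1}
```

A real study would go well beyond keyword matching, since the same hashtag can appear in both abusive posts and posts condemning abuse; that distinction is presumably why Demos judged each tweet rather than relying on tags alone.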

The police reported a 42 per cent increase in allegations of hate crime in the week after the referendum result, with more than 3,000 allegations made – the worst on record. Twitter was also a place where people made public the abuse and hate that they had received, often on the street. Two significant hashtags rose in the days after Brexit: #safetypin and #postrefracism. Of almost 100,000 tweets sent using these hashtags over the seven days after the referendum, we judged 2,400 to be sharing accounts of abuse. Placed on a map, they are spread right across the UK. Big numbers can often hide what they’re actually counting. Underneath the statistics, maps and graphs are thousands of shocking human stories. These little red dots, all too often, represented a tragedy of bigotry and hatred, reportedly happening on Britain’s streets.

This is significant. Twitter is now a place not just for the haters but also the victims. It’s a place used by people who suffer abuse to make sure that they don’t suffer in isolation. On social media, people can make public victimisation that is inherently and horribly intimate. Thanks to Twitter, victims can shout as loudly as their abusers, and that’s a good thing. Politics is often nasty. It’s where ideas, world-views and even basic notions of right and wrong clash with each other. As a liberal, you hope that as this happens the good ideas win out over bad ones; but some kind of conflict is, itself, inevitable – that’s part of the point of politics. But the EU referendum has been an entirely different political animal. Rather than resolving a political dispute and creating a new consensus, it has left society feeling more divided than ever.
Carl Miller is research director at the centre for the analysis of social media, Demos.
© Wired UK


Poland: Auschwitz museum prohibits Pokémon Go play on its grounds

The Auschwitz-Birkenau State Museum is not buying into the Pokémon Go craze.

13/7/2016- On Tuesday, the Holocaust memorial site tweeted that it will not allow visitors to play the new smartphone game because it is “disrespectful on many levels.” New York magazine first reported Tuesday that some users of the game, which allows players to capture its animated creatures on their phones at outdoor sites and buildings with the help of phone GPS systems, were playing at Auschwitz. Others soon took to Twitter to report finding Pokémon at the popular memorial in Oswiecim, Poland, but their screenshots of game activity did not match the normal look of the game. The game has not been officially released in Europe. On Tuesday, ADL CEO Jonathan Greenblatt went on Twitter to call for the museum’s visitors to refrain from playing.

The same day, the U.S. Holocaust Memorial Museum in Washington, D.C., also issued a statement condemning playing the game on its grounds. The Washington Post reported that the museum contains three different “PokéStops” — real-life sites where players can redeem in-game items. “Playing the game is not appropriate in the museum, which is a memorial to the victims of Nazism,” Andrew Hollinger, the museum’s communications director, told the Post. “We are trying to find out if we can get the museum excluded from the game.”

Since its release last week, Pokémon Go has become the most popular mobile game in U.S. history, with over 20 million daily users. Nintendo, which part-owns the Pokémon franchise, saw its stock rise 23 percent on Monday. New York magazine reported that playing the game at other sites — such as Ground Zero in New York City, near a North Carolina statue of a Confederate general and at the site of multiple African-American mural memorials in Brooklyn — has also caused controversy. The game’s developer, Niantic, ran into similar trouble last year when one of its games, Ingress, allowed players to battle for control over real-life locations that happened to include multiple former concentration camps such as Auschwitz, Dachau and Sachsenhausen.
© JTA News.


German cops raid online anti-Semites, Holocaust deniers

Police seize dozens of computers owned by members of neo-Nazi Facebook group who posted xenophobic, racist messages

13/7/2016- German police on Wednesday launched nationwide raids targeting social media users who posted racial hatred, including anti-Semitism, on Facebook and other online networks. Police swooped down on the homes of some 60 suspects across 14 of Germany’s 16 states, the BKA federal crime bureau said, in a crackdown on “verbal radicalism” and related criminal offences. No arrests were made, but computer equipment, cameras and smartphones were seized in the first-ever such mass raids targeting online hate crime. Most of the suspects allegedly belonged to a neo-Nazi Facebook group whose users had posted xenophobic, anti-Semitic or other far-right messages. The posts included messages denying or relativizing the Holocaust, celebrating aspects of National Socialism and using Nazi symbolism, and calling for attacks on refugees and politicians.

BKA chief Holger Muench said police were taking a “clear stance against hate and incitement on the internet,” which had increased amid the refugee crisis and was poisoning public discourse. Interior Minister Thomas de Maiziere said that “violence, including verbal violence, in any form and in any context” was “unacceptable.” He said there are “moral principles offline and online” and stressed that “criminal law applies on the internet.” Justice Minister Heiko Maas said that pressure on internet giants such as Facebook, Google and Twitter to find and block hate speech had grown. “First steps have been taken,” he said, “but they are nowhere near sufficient.”

Facebook pledged in September to fight a surge in racism on its German-language network, as Europe’s biggest economy became the top destination for refugees, triggering a backlash from the far right. The US social media network said it would encourage “counter speech” and step up monitoring of anti-foreigner commentary. Users have accused the company of double standards for cracking down more swiftly and harshly on nudity and sexual content than on hate-mongering. Last week, the families of five US citizens killed in Israel sued Facebook for $1 billion, claiming the social media site had allowed the terror group Hamas to incite violence.


Germany: Underground ‘hacktivists’ in Berlin are connecting refugees to free Wi-Fi

9/7/2016- On a warm afternoon in early June, Mohammed Mossli was sitting in a trendy café in Berlin. The café, with its raw wooden countertops, craft sodas and fashionable young men and women typing away at laptops, was far from the sniper fire and rubble of Aleppo, Syria, Mossli’s hometown, which he describes as “only dust and ashes.” Still, Mossli, who is 21, tall, thin and prone to smile, seemed at ease as he rolled a cigarette and kidded around with one of his new friends: Philipp Borgers.  Borgers is a German software developer and member of the “hacktivist” group Freifunk, a community of hackers, programmers and free network activists across Germany attempting to spread “mesh networking,” an ad hoc wireless network technology that allows computers and devices to connect directly to one another without passing through any centralized authority or organization.

Refugees Offline
A hacktivist and a refugee might seem like an unlikely pair, but in a city with 40,000 refugees, this collision of worlds is increasingly common. And for Mossli, becoming involved with the city’s tech community has helped make Berlin a new home: Back in Syria, he had been in his second semester of studies for a computer science degree at Aleppo University. That is, until the Bashar al-Assad regime started detaining some of his classmates. “Sometimes, they arrested people right in the exam room,” he says. “Just because of your last name or because someone in your class was at a protest, it’s enough reason for them to arrest you.” Afraid he could be next, Mossli fled Syria and, like thousands of others from his war-torn country, made his way to Germany, where he has been living for the past 10 months. Mossli’s parents are still in Aleppo, and his only connection to them is through WhatsApp messages and, when the internet in Aleppo is working, brief Skype calls. That’s why Mossli has come to treasure something many of us take for granted: a Wi-Fi connection.

In Berlin, finding Wi-Fi can be as difficult as divining for water: A law known as Störerhaftung makes the owner of a Wi-Fi network liable for any illegal downloads or illicit activity using that connection, discouraging many businesses from providing free networks. Beyond legal restrictions, a lack of investment from the German government has also created technological limitations, says Borgers. “There is a refugee crisis,” he says. “But there is also an infrastructure crisis. Germany is far behind other countries when it comes to internet connection.” The combination of restrictive legislation and a lack of technological infrastructure has made it difficult for many of Berlin’s 149 refugee shelters to provide Wi-Fi to their residents. In one shelter where Mossli stayed, he says there was no Wi-Fi and just four computers for 400 residents. “I never even tried to use them,” he says.

Although every shelter in Berlin must adhere to strict standards of hygiene, security, fire safety and food preparation, internet access is not mandatory. Yet for many refugees, access to the internet is the only way to communicate with their families and one another, or navigate the complexities of a foreign country. “This was something we could do something about,” says Borgers of Freifunk. “So we decided to help.”

Freifunk expands access
Elektra Wagenrad is one of Freifunk’s oldest members. She says mesh networking and groups like Freifunk are in many ways an evolution of the anarchist spirit that has long permeated Berlin. Mesh networking involves setting up routers or “nodes” in public places (often church steeples or radio towers) to allow one computer or device to connect to every other in the network. Once in the mesh network, a computer can share its internet connection with any others in range. “If one node fails, the connection will find a different route,” says Wagenrad. “The network can heal itself.” Berlin’s Freifunk network has 617 nodes, with somewhere between 3,000 and 5,000 users, the group estimates. Freifunk holds meetings every Wednesday night inside c-base, a huge underground space in the shadow of the Berlin TV Tower filled with ’80s video game memorabilia, 3-D printers, dozens of computers and a space station airlock: Legend has it that c-base is a spaceship that landed in Berlin some 4.5 billion years ago.
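The self-healing behaviour Wagenrad describes can be pictured with a toy routing sketch. This is not Freifunk's actual routing software (real mesh networks use dedicated routing protocols); it is just a minimal breadth-first search over a hypothetical node graph, showing how a new path is found when one node goes offline.

```python
from collections import deque

def find_route(links, src, dst, down=frozenset()):
    """Breadth-first search for a path from src to dst, skipping failed nodes.

    links: dict mapping each node to the nodes it can reach directly.
    down:  set of nodes currently offline.
    """
    if src in down or dst in down:
        return None
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in links.get(node, []):
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route

# Toy mesh: A reaches the uplink D via either B or C.
mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(find_route(mesh, "A", "D"))              # ['A', 'B', 'D']
print(find_route(mesh, "A", "D", down={"B"}))  # reroutes: ['A', 'C', 'D']
```

The redundancy is the point: as long as any chain of working nodes connects a device to one with an internet uplink, traffic can be relayed without any central authority.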

The group’s work with refugees in Berlin began in 2012, when refugees were occupying Oranienplatz, a public square in the Kreuzberg district, to demand better treatment. The occupation had no information technology infrastructure, and so Freifunkers decided to get the refugees internet. In December 2013, Freifunk connected its first refugee shelter, the Gerhart Hauptmann School. As the refugee crisis grew in 2014 and more shelters began opening, Freifunk expanded its network. It has connected more than 30 shelters in Berlin and more than 200 across Germany.

David Achuo, 24, is a Freifunk alumnus. He learned about mesh networking from Wagenrad, who has started giving workshops for refugees. Achuo is a refugee from Cameroon, where he was no stranger to online activism. During the country’s elections in 2011, Achuo created a website in support of the opposition People’s Action Party. When the ruling Cameroon People’s Democratic Movement party discovered the website, Achuo says it made him a target: On election day, he was stabbed 11 times at a polling station. “It’s just God that saved me,” he says, pulling up his shirt to reveal deep scars on his chest. Achuo fled to Germany and has spent the last four years at a shelter in Potsdam, about an hour outside of Berlin, waiting for his asylum application to be processed. Thanks to Freifunk, he was able to provide refugees in the shelter with free Wi-Fi.

Achuo also runs an internet café inside the shelter set up by Refugees Emancipation, a nonprofit organization that runs internet cafés in shelters in Potsdam and Berlin. The organization’s founder and director, Chu Eben, is also a Cameroonian refugee: He arrived in Germany in 1998 and was put in a former military bunker in what used to be East Germany. Eben says he felt completely isolated from the rest of society. Then the internet came along. “My friends in Africa called me and asked me for my email address,” he says. “I didn’t know how to tell them I’d never used a computer.” He decided to do something about it. He connected with some University of Potsdam students, who got him online and eventually helped him raise the funds to launch Refugees Emancipation and its first refugee camp internet café. Now the organization runs eight more across Potsdam and Berlin. “Coming together in the cafés breaks the isolation. It builds a direct connection between refugees and civil society, and can allow us to create a political platform.”

Civil disobedience
Freifunk too has a political dimension: It operates without government authorization. As Theresa Züger, a researcher at the Alexander von Humboldt Institute for Internet and Society, puts it, “It’s a very productive kind of civil disobedience. It’s not just disruptive but also empowering. It’s citizens taking politics in their own hands and doing it in a very positive way.” Recently, the two groups worked together: In October 2015, Freifunk joined with Chaos Computer Club, another hacker collective in Germany, to launch a fundraiser for Refugees Emancipation. They not only beat their ambitious target of 67,000 euros (about $74,000) but also received 2.5 tons of hardware from one of Berlin’s district councils.

In June of this year, Eben celebrated the launch of his organization’s newest internet café inside a shelter on Heinrich-Mann-Allee in Potsdam. At the opening, Eben smiled and proudly showed off the 20 new computers to a group of refugees, social workers and journalists. Achuo was there also, talking politics with Fadir Sujaa, a Syrian refugee who will be running the café. The room was filled with children from Syria, Afghanistan, Iran and other countries, excitedly clicking away on the machines. On the wall behind them, a simple phrase was written in English, Arabic, German and Farsi: “Internet access is not a luxury. It’s a necessity.”
© Newsweek


USA: Twitter Refuses to Ban Obscene Image of Police Execution

8/7/2016- Despite the efforts of Twitter users to report a graphic tweet depicting the execution of a police officer, Twitter has not taken action against the tweet or the user who published it. “[I don’t give a f—k] it’s time for the police n they families to start feelin the pain we feel,” wrote user @Marcel_TNG, alongside an image of a black-clad executioner graphically slitting the throat of a police officer, in an apparent response to the deadly ambush in Dallas that left five officers dead. Offended users have attempted to report the obscene tweet, and the account responsible for it, but as of Friday afternoon the tweet remains with a disclaimer that the image “may contain sensitive material,” an indication that the social media company had reviewed the image and allowed it to remain online. Twitter recently suspended the account of provocative commentator Milo Yiannopoulos for criticizing Islam.
© Heatstreet


UK: Arsenal ban fan indefinitely for discriminatory social media posts

Arsenal have indefinitely banned a fan for “offensive and inappropriate” tweets posted online.

8/7/2016- The Gunners were alerted to the discriminatory tweets by Kick It Out – which praised the club for their actions – and a copy of the letter sent to the unnamed fan has since been posted on social media. Arsenal have confirmed that the letter is genuine and warned other supporters that similar bans await anyone else posting discriminatory messages online.  Kick It Out, football’s equality and inclusion organisation, has been running a social media campaign called ‘Klick It Out’ as it looks to curb the sharp rise in online discrimination. “The tweets were brought to our attention by Kick It Out and our position is that we do not tolerate discriminatory language or behaviour of any description,” an Arsenal spokesman told Press Association Sport. “We work closely with Kick It Out to monitor these things and we identified him as being a previous member of Arsenal which is how we located his details. The message is zero tolerance against discriminatory behaviour. It is good to get that message out there.”

It is believed the Klick It Out campaign has helped lead to 11 bans handed out by clubs to supporters during 2015-16 – and Anna Jonsson, Kick It Out’s reporting officer, commended the decision taken by Arsenal. “We welcome the strong action taken by Arsenal Football Club following reports of social media discrimination by a small number of supporters,” she said in a statement released to Press Association Sport. “As a third-party reporting bureau who brought the discriminatory posts to the attention of the club, we do not have the authority to impose stadium bans or other sanctions on supporters, but instead rely on clubs to implement punishment as they see fit. “Social media discrimination within football has significantly increased and the reason we’re running our ‘Klick It Out’ campaign is to raise awareness of online discrimination and how people can report such incidents. We encourage supporters to report any discrimination they see. “It’s credit to Arsenal in this case for taking action and sending out a message to supporters that discrimination of any kind won’t be tolerated.”
© Breaking News Ireland


UK: Facebook group in Yeovil calls time on 'casual racism'

7/7/2016- A Yeovil-based Facebook group with more than 15,700 members has been forced to declare it is "no place for racism". Administrators of Yeovil Real News, a group with 15,707 members, have become concerned about what has been dubbed "casual racism" within it. The warning has come after a thread about car insurance turned into a clash of racially-charged views. A young driver - whose profile identifies him as being Polish - casually asked on Wednesday (July 5): "Best insurance company for a young driver?" But the following day one member of the group posted: "There must be a good one back in Poland?". Other members of the group were quick to criticise the racially charged comment, with one saying: "Any need for that comment?" The woman behind the remark said: "Yeah I live in town and I have Polish drunks outside my window every day and every night and me and you pay for them. Do you want an up-to-date video?"

Again, users were quick to respond to the apparently abusive comments, with one saying: "Doesn't mean you can react like that, you need not treat every other Polish person the same." Another said: "So just because one Polish drunk is outside your window every night you need to treat every other Polish person the same?" Not restricting her comments to Polish people, the controversial user said: "No no no, not just one! Come and live my life here and then you'll know what you're talking about. In this town there are Romanians, Turkish, Bengalis, Polish. Not racist, your eyes will be opened." But others said the comments were "unneeded", and "disgusting". The user retaliated again with: "Basically you all love the Polish, I'm OK with that, you threw it all on me." One user commented: "He's a young lad who just passed his driving test and you attacked him." Another said, "but what has this lad done that was so wrong to you? Feel free to explain please?" with one more saying, "If you don't agree with a young lad asking for some advice why comment?"

Eventually group administrators stepped in to bring an end to the heated debate and removed the controversial user from the group entirely. The exchange is one of several racially-charged rows to erupt on social media since the result of last month's EU referendum. The discovery of a new mosque on Sherborne Road by a resident who had not seen the planning notices prompted a similarly heated discussion, as did comments on an incident in Yeovil in which a man had been repeatedly punched and kicked in a racially aggravated assault. That discussion was eventually closed with the administrator saying: "The level of racist and frankly disgusting comments on this thread is disgraceful. Perhaps people should start behaving like decent moral human beings before engaging on a social media forum. Comments closed due to total rude and moronic bigots."

In a subsequent post, one of the administrators warned users who wished to express what could be interpreted as racist remarks in the private group that these could be made public. The post by one of the page's administrators read: "After previous threads have been closed due to nasty racist behaviour, I am forewarning you all. If you wish to comment with racist comments and you deem it acceptable on a public forum, then do not be surprised if your name and comment appear in print or elsewhere on social media. There is no place for racism or any other kind of bigot behaviour in society." The discussion comes soon after it was revealed that reports of hate crime in Avon and Somerset have more than doubled since Britain voted to leave the EU. Churches in Yeovil have called for compassion in the wake of the vote, with neighbourly love stressed by Adam Dyer of St John's Church and Yeovil Community Church.
© Somerset Live


UK: Tory MP says social media firms should stop abuse or pay for policing

Former culture secretary Maria Miller says companies should face levy if they fail to do more to tackle online abuse

7/7/2016- A senior Tory MP is calling for a levy on social media companies to pay for the policing of online abuse if they fail to do more to tackle the crimes taking place on their platforms. Maria Miller, who was at the forefront of creating a new law against revenge porn, said: “We need to start the dialogue, to say to them: ‘What more can you be doing to tackle the scale of the problem?’ because there is a desperate need for action. If necessary after that we need to put a levy on those organisations to pay for the policing of this. “The police are telling me they cannot cope with the scale of the crime that is being carried out online, in particular online abuse, whether that is image-based sexual abuse, or whether that is homophobic or transphobic hate crime online. They cannot deal with the scale of it and in other similar situations it has become necessary to talk to the organisations where the crime is being generated to establish how they can start to foot the bill.”

Football clubs and sporting venues are some of the private organisations that pay a fee to the police to provide security at their events – an example Miller believes could be used in discussions with social media companies. “We have to look at the law to strengthen the sanctions that are available and we also have to turn a very sharp spotlight on to the platforms to show up those that are not taking this seriously,” she said. Miller used the Commons debate on Thursday to call for the government to set out specific laws to tackle online abuse. She called for better training for police officers and zero tolerance for hate crime online and offline. MPs in the debate spoke of receiving a torrent of online abuse, from being called Nazis to getting rape threats. They called for more action from social media companies and for the government to recognise the problem and stamp out abusive behaviour on the internet.

Tasmina Ahmed-Sheikh, an SNP frontbencher, said the abuse aimed at her was “sickening filth”. She said: “In the past 14 months I have been called a Nazi, received messages which have called for me to be shot as a traitor … and strangers have attacked my father. Some of the dreadful things I have had said to me are not worthy of the status and statute of this chamber. My husband sees these messages, my children have to read this garbage and my staff are required to read it.” She added: “Social media and publishing platforms must accept this is a serious issue and do more to address it.”

Liz McInnes, the Labour MP for Heywood and Middleton, condemned “the apparent lack of coherent policy” among social media companies to combat online trolls and hate speech. She told of how, during the referendum debate, one user tweeted her: “We will see what you say when an immigrant rapes you or one of your kids”, which Twitter said did not violate its rules. McInnes said she had contacted Facebook about a comment aimed at a fellow MP which read: “She looks like a f-ing mutant and should be burned at the stake,” and she said Facebook had replied saying: “It doesn’t violate our community standards.” Recent research has shown an increasing amount of discriminatory abuse, particularly aimed at minority groups.

Miller intends to bring forward amendments to the government’s digital economy bill, which was published on Wednesday, to help police more easily bring prosecutions. She is calling for the government to create a proper strategy to tackle online abuse. “We have got a very real problem with online abuse in this country,” she said. “What we need is a strategy to deal with it and the government has so far taken a piecemeal approach. The scale of criminal activity that is going on is completely unmanageable. We can’t turn a blind eye to it any longer. And I think it is starting to spill over into the face-to-face world.” Miller cited the rise in hate crime in the last week as potentially being linked to the levels of online abuse being perpetrated unchecked. “It would be interesting to know how much work is being done to understand whether there is a relationship between the increases in hate crime that we are seeing on our streets and the amount of hate crime that is perpetrated online. We cannot separate the two worlds,” she said.

In March a senior police officer told the Guardian the law needed to change to enable police forces to better tackle the scale of online abuse, which was threatening to overwhelm law enforcement.
© The Guardian.


Denmark: Euro 2016 France-Iceland game hijacked by racist right-wing party

7/7/2016- The Football Association of Iceland (KSÍ) has condemned the unauthorised use of a photograph of an Icelandic player on a race-related image published by a Danish far-right nationalist political party. The image shows Iceland captain Aron Einar Gunnarsson with the Iceland team alongside a picture of several members of the French team, with the caption “Share if you think ‘France’ should be playing in the African Nations Cup”. The image is a propaganda tool of the Party of the Danes, a far-right nationalist party which won less than 1% of votes in the 2013 municipal elections in Denmark. KSÍ has posted a statement on its website condemning the image, which openly suggests that the French team is somehow African and does not belong in Europe. Iceland was knocked out of Euro 2016 by France on Sunday.

“Football is a force for unity,” reads the statement. “Fans of all nations and backgrounds unite. The common interest of a large chunk of the human race in the sport of football which we so love brings people together. We use sport to bring people together, not split them apart.” “Among the joy and pride brought by the wonderful achievements of the Icelandic national team at Euro 2016, it is horrible to see abuses of the type perpetrated by the Party of the Danes.” “Forces of division have no place in the football movement, and football authorities in Europe have fought hard against racism in the sport. This battle is far from over and KSÍ is committed to taking part in combating racism with all its might”. “Iceland gained the respect of the world during the Euro 2016 finals thanks to their positive conduct and KSÍ completely dissociates itself from hate propaganda of this kind.”

The Party of the Danes posted the controversial image on their Facebook page on Monday, the day after France knocked Iceland out of Euro 2016, with a message asking people to submit their e-mail address “if you also do not believe that Europe and Denmark should be transformed into an African backyard”. KSÍ has said that they will be requesting that the image be immediately taken down.

As of now, the Danish party has yet to remove the post, which has garnered nearly 700 shares and more than 800 reactions.
© The Iceland Monitor


South Africa Opposed UN Resolution On Internet Access

6/7/2016- South Africa recently opposed a human rights council resolution that calls on governments to ensure access to the Internet and recognizes that the right to freedom of expression extends online. Contrary to many media reports, South Africa did not vote against the resolution itself; the resolution was passed by consensus on 30 June, meaning that no official vote was recorded. 53 countries, including Nigeria, Senegal, and Tunisia, sponsored the resolution. Prior to its passage, South Africa had voted in favour of an amendment submitted by China and Russia that would have deleted text ensuring people's access to the internet. South Africa also supported another Russian amendment to remove any references to freedom of expression. These amendments, considered hostile by the resolution sponsors, were defeated.

In explaining her concerns with the resolution, South African Deputy Permanent Representative to the United Nations, Ncumisa Pamella Notutela claimed that the resolution was calling for an absolute right to freedom of expression online, which runs counter to provisions against hate speech and racism within South African law. She stated that "incitement of hatred is problematic in the context where we are having our domestic debates on racism and the criminalisation thereof. The exercise of the right to freedom of opinion and expression is not absolute and carries with it duties and responsibilities for rights' holders ... The draft resolution omits key provisions on the permissible limitations and prohibition of hate speech under international human rights law." She also said that the resolution made no reference to hate speech and cyber bullying.

But the resolution references the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, which do allow for limitations on the freedom of expression. South Africa is a party to both covenants. Additionally, the resolution calls for "combating advocacy of hatred that constitutes incitement to discrimination or violence on the Internet, including by promoting tolerance and dialogue". Since no country called for an officially recorded vote, the resolution automatically passed by consensus. With its passage, the Human Rights Council now officially recognises that people have a right to Internet access and online freedom of expression. However, the resolution is non-binding, meaning that no country is obligated to follow through with providing these rights.
© All Africa


Facebook Pushes Back Against Israeli Claims of Incitement Against Jews

3/7/2016- Facebook is doing its share to remove abusive content from the social network, it said on Sunday in an apparent rejection of Israeli allegations that it was uncooperative in stemming messages that might spur Palestinian violence. Beset by a 10-month-old surge in Palestinian street attacks, Israel says that Facebook has been used to perpetuate such bloodshed and Prime Minister Benjamin Netanyahu’s rightist government is drafting legislation to enable it to order social media sites to remove postings deemed threatening.  Ramping up the pressure, Public Security Minister Gilad Erdan on Saturday accused Facebook of “sabotaging” Israeli police efforts by not cooperating with inquiries about potential suspects in the occupied West Bank and by “set(ting) a very high bar for removing inciteful content and posts.”

Facebook did not respond directly to Erdan’s criticism, but said in a statement that it conferred closely with Israel. “We work regularly with safety organizations and policymakers around the world, including Israel, to ensure that people know how to make safe use of Facebook. There is no room for content that promotes violence, direct threats, terrorist or hate speeches on our platform,” the statement said. It appeared to place an onus on Israeli authorities, as with any other users, to flag offensive content to Facebook monitors. “We have a set of community standards designed to help people understand what’s allowed on Facebook, and we call on people to use our report if they find content they believe violates these rules, so that we can examine each case and take quick action,” the statement said.

Erdan, who urged Israelis to “flood” Facebook founder Mark Zuckerberg with demands for a policy change, expanded on the Netanyahu government’s complaint in remarks published on Sunday. Of 74 “especially inciting and extremist posts” Israel had brought to Facebook’s attention, 24 were removed, Erdan told the Yedioth Ahronoth daily, adding that jurisdiction was an issue. “The big problem is in Judea and Samaria, because Facebook does not recognize Israeli control there and is not prepared to turn over information,” Erdan said, using a biblical term for the West Bank, which Israel captured in the 1967 war and where the Palestinians, with international support, seek statehood. Justice Minister Ayelet Shaked called on social media companies to pre-emptively curb content deemed by Israel to be a security threat. “We want the companies not to approve and to themselves remove posts by terrorist groups and incitement to terrorism without us having to flag each individual post, in just the same manner, for example, that they today do not allow posts and pages with child pornography,” she told Israel’s Army Radio.

Citing sources familiar with the technology, Reuters reported last month that Facebook and other Internet companies have begun using automation to remove Islamic State videos and other extremist content from their sites.
© Reuters


Facebook allowed to collect data on non-users in Belgium

1/7/2016- An appeals court in Belgium earlier this week ruled in favour of allowing Facebook to amass data on people in the country who are not registered users of the social media giant. The ruling reverses a previous court decision that had imposed a tracking ban on Facebook following revelations it was collecting information on non-Facebook users and others who were not logged into their accounts. “Belgian courts don’t have international jurisdiction over Facebook Ireland, where the data concerning Europe is processed,” said the appeals court. But Belgium's privacy watchdog, the Commission for the Protection of Privacy, said the latest ruling means a Belgian resident will be unable to "obtain the protection of his private life through the courts and tribunals when it concerns foreign actors".

The head of the watchdog, Willem Debeuckelaere, said Belgians "remain exposed to massive violations of their privacy". Facebook places special cookies on people's computers without their permission. The small files track the internet activity of logged-out users as well as those that had opted out of being tracked. University researchers in Belgium had revealed the privacy breaches in a report published last year. They found Facebook uses its social plugins, such as the 'like' button, to monitor the activity. The button has been placed on health and government sites. The US tech giant, for its part, says the cookies are needed for security reasons. "We are pleased with the court’s decision and look forward to bringing all our services back online for people in Belgium," said Facebook in a statement.
© The EUobserver


Hello Racist Exposes Racism, Challenges Cultural Sensitivity

1/7/2016- While many believe that America has become too sensitive, or too politically correct, there are those, like Paris, who believe that sensitivity, be it about race, religion, sexuality, or other concerns, comes from a place that is real. The 12,346 members (and growing daily) of the HelloRacist Facebook community not only agree, but continue to expose racism on a daily basis and to share as much encouragement as possible. Interviewing the anonymous founder of HelloRacist was an opportunity to understand why a platform like HelloRacist continues to be necessary today.

The tagline of Hello Racist is “Expose a Racist.” Why do you feel this is necessary in today’s time?
In today’s society, as a country, in the past 50-100 years, we have obviously come pretty far in terms of giving all races and people equal rights and respect. However, in reality, there are still so many people and segments of society who are still stuck in a place where, for whatever reason, be it hate, ignorance, how they were raised, where they live, etc., racism is still acceptable behavior. It’s still systematically a part of some people’s cultures, lives, and mindset. And we are dealing with a problem where people are still engaging in racist behavior in a very open way that is widely understood and known to be wrong. The solution is that to change this behavior, people need to be confronted with their actions, the reality of what they are doing and the behavior they are engaging in, to make them fully understand and appreciate that what they are doing is wrong, and to make them stop. The confrontation is the linchpin to facilitate change when it can be achieved.

Exposing racists is so important in today’s time, though, not just to try and change people, but because there are still a surprising number of people who are racist who really shouldn’t be, because of who they are and what they do professionally. If they are racist, they have the potential to ruin people’s lives when they let their racial biases and prejudices creep in and affect their professional decisions and judgments in ways that are completely incomprehensible and illegal. And that really shouldn’t happen in today’s society. I’m speaking about people who serve society in roles as police officers, teachers, public officials, doctors, bankers, realtors, judges, CEOs, and others. It’s very critical to understand whether people filling important roles in society are racist and are making decisions and judgments that affect people of all races. The site operates as a public service announcement in that regard. These types of people need to be outed and fired/removed.

Some people say that we live in a time where there seems to be too much sensitivity and anything can be misinterpreted. Do you believe that your site can help to propel this cultural sensitivity?
I don’t know. Maybe. I think there is some truth to people’s complaints that society can be a bit too sensitive about certain issues or too politically correct. Truth be told, some people are overly sensitive to issues that have very little relevance to the majority of society. However, in most cases, I feel like sensitivity, be it about race, religion, sexuality, or other concerns, comes from a place that is real. That is, people are truthfully offended and feel disrespected. It’s just difficult to put yourself in someone else’s shoes to really feel and understand where that is coming from.

I think what makes things worse, though, is that most people, when faced with sensitivity or criticism from others, get defensive and/or “double down” and attempt to minimize or trivialize others’ concerns or their own conduct. When what they really should do is acknowledge the issue, try and understand the other person’s perspective, and if it’s appropriate just apologize or move on. We all have the prerogative to say and do things that can be construed as insensitive and offensive if that is how we really feel. And we also have an equal right to be offended. However, my hope is that if and when we are put in these situations we can be more “real” about them. I would hope the site does that, more so than just propel sensitivity.

Through your website and facebook page, you have exposed a lot of direct racism, has there been a problem with some of these people challenging your platform?
Not really. Most of the legal threats I get are hilarious.

How does a platform like Hello Racist truly make a difference rather than just being a place for people to vent?
I think when some people are confronted and called out about their racism, it actually does change them. They become ashamed and embarrassed. But they learn not to do it again. People learn to be racist. And I like to believe that most people at heart don’t really believe in racism or subscribe to it. They just are because that’s what they learned to do and were never told not to or confronted by anyone to tell them that it’s wrong. I think it also makes a difference because what we’ve also seen is that there are people who are truly racist to the core and will never change. And often they have joined gangs and subscribe to very violent groups who I think are real threats to commit violent acts. So people learn and are warned about truly dangerous and racist individuals and can hopefully avoid them.

Why do you choose to remain anonymous?
Running a site like this, you are inevitably going to receive threats. Legal threats. Threats of violence. Some are laughable. Some are very real and concerning. To the extent possible, I’d like to keep these threats out of other aspects of my life. Being anonymous allows me to do that.
© The Huffington Post


Kenya: State to double penalties for cyber crimes in proposed law

29/6/2016- Persons using the Internet to spread hate, hack into a protected computer system or intercept communication face a penalty of up to Sh20 million, 20 years in jail, or both under a proposed law. This is double the Sh10 million fine or 10-year jail term under section 25 of the Kenya Information and Communications (Amendment) Act 2013 for unauthorised access to a computer system with an intent to commit a crime. The draft Computer and Cybercrimes Bill 2016 seeks to align the law to advanced forensic procedures when investigating rising cases of cybercrime, estimated to cost economies tens of billions of shillings a year. Crimes such as bank fraud, money laundering, hate speech, identity theft and child pornography are increasingly being committed online, underlining the government's prioritisation of the bill. Others are unauthorised access to a protected computer system, phishing, botnets, cyber-stalking and bullying.

“We have extensively consulted and we are still going to consult the public, but we are talking of a maximum of 20 years imprisonment and Sh20 million fine,” ICT Cabinet secretary Joseph Mucheru said yesterday. “Some countries have gone for life imprisonment (for unauthorised access to a protected computer system), like in Uganda. Here, we have taken into account the severity (of the offence). The key thing is we are taking it extremely serious and we record these things.” State actors will, however, still have to obtain a court order to access information during investigations. “In this bill, we are still protecting your data and privacy, but there is a clear process on how the law enforcement agencies access any record or information that they require in their investigations,” Mucheru said.

The draft bill borrows from the Budapest Convention on Cybercrime, an international treaty to harmonise national laws on cybercrime that took effect in July 2004. The USA, Japan, Australia and South Africa are some of the parties to the convention. Further input came from the Council of Europe’s Cybercrime division, as the state eyes international co-operation in obtaining cross-border help during investigations, collecting evidence and ensuring preservation of traffic data. The new bill has been necessitated by alleged attempts by 77 Chinese nationals to build a cyber command centre in the posh Runda estate in December 2014 and the hacking of 103 government websites in January 2012. An inter-agency team from the ICT ministry, the Central Bank, the Office of the Director of Public Prosecutions, the Communications Authority, the ICT Authority, the National Police and the National Intelligence Service drafted the bill in a process that started last October.
© The Kenya Star


Germany: Online hate speech, conspiracy theories boom

Online abuse and far-right propaganda have increased dramatically in the past 18 months, a new study shows. Social media giants have committed to the government’s taskforce, but there is still much to be done.

29/6/2016- Online racist abuse and hate speech have exploded in Germany in the past 18 months, a new report by the anti-racism foundation Antonio Amadeu Stiftung (AAS) has found, with calls for violence against refugees, false stories and rumors about their crimes, and neo-Nazi slogans (often disguised to avoid litigation) all on the rise. The 22-page report, released this week, also found a connection not only with the increase in violence against refugees and refugee homes, but also with an increase in "conspiracy-ideology" attacks on politicians, journalists and volunteers helping refugees. The report found that social media was acting as a powerful amplifier for abuse. "The monitoring report reveals that the agitation is intensifying in the social media," AAS chairwoman Anetta Kahane said in a statement. "The dimensions of hate reach from racist agitation, celebrating the reports of attacks on refugees and arson attacks on asylum homes up to agitation against volunteers who help refugees, journalists, administrators, and politicians."

Skepticism about politics on the rise
There has also been an increase in agitation from across the political spectrum against authorities, the media, and NGOs, according to the foundation - as well as a growing mistrust of the mainstream media and politicians. "On the social web we are observing the building up of a dangerous front from different political spectrums, but which are increasingly finding a common denominator, and that is 'hate against the system'," Kahane said. "What is noticeable: the longer that agitation on the Internet against refugees continues, the more often one finds conspiracy-ideological statements. Politicians become 'traitors,' journalists are defamed as 'lying press' and supporters from civil society are described as 'dirty leftist do-gooders'." The report was produced to coincide with Tuesday's release of the latest federal intelligence agency (BfV) report on politically motivated crime in Germany, which noted a 42-percent rise in acts of far-right violence in 2015.

Aping respectability
The AAS also detected a more insidious trend - websites set up by far-right groups to appeal specifically to the middle classes. AAS found some 300 "no to refugee homes" Facebook profiles, which, they argued, were designed to appeal to "concerned citizens," by using local information and consciously unprofessional design to attract people with fears and concerns about planned refugee homes. This, they said, was generating support for the populist right-wing Alternative for Germany (AfD). "There are a number of signs, but you have to look at these pages more closely," said Johannes Baldauf, one of the authors of the report. "A good sign is always - what kind of language is used there. Are there words like 'system press,' do they claim that you can't believe the press, or that politicians are all corrupt." Another sign used by such profiles, he said, was links to sources that are not credible. Often, Baldauf argued, it's clear that more extremist organizations are behind such sites. "If the NPD [far-right National Democratic Party] says something like, 'we're against refugees,' then it's very clear for people - that's a taboo," he said. "But if someone else comes along and says, 'I'm really concerned if so many refugees come, there will be problems with drugs and they want to attack our women,' then it has a different effect, even though the content is the same. But if there is an NPD logo there, then a lot fewer people listen."

Facebook's transparency problem
Last December, the German Justice Ministry set up a taskforce to combat online hate speech, and enlisted social media giants Facebook and Twitter to help stamp it out. But Philip Scholz, Justice Ministry spokesman, said that while those companies had acknowledged their responsibility, more could be done. "It's not enough. The AAS report confirms that," Scholz told DW. "Even though it joined the taskforce, and made certain commitments, Facebook is still very un-transparent. We don't know how many people are employed at Facebook to check the reported content. We don't even know how many complaints are made to Facebook and what percentage of the content is deleted. So for us it is quite hard to judge what the actual reasons are." "You have to say that the companies that earn a lot of money with the Internet have a responsibility to find an adequate solution," he added. But Baldauf said there is plenty that the state could do, as well, especially when it comes to comments on Facebook posts, rather than statements made by operators of certain pages. "That's a negotiation between the state and the company, and there's still a lot both sides could do," he said. "The companies have to give criminal prosecutors access to certain things. But the structures that the state puts at the disposal of these things are not adequate either."
© The Deutsche Welle.


Netherlands: Teenage girl’s mother takes Instagram to court over fake account with naked photos

28/6/2016- The mother of a 15-year-old girl has taken Instagram to court over a fake account under the teenager’s name containing naked photos and videos, reports The Volkskrant. This mother, from Hoorn in the Netherlands, wants to know who set up the account, but Instagram will not provide user data without a court order. A court in Alkmaar has heard that the photos and sexually explicit film do not actually feature the teenager, but the account included her nickname, and may be related to school bullying. Marianne Zeeman, the mother’s lawyer, reportedly said: “This is not the only incident. The girl has been pushed to attempt suicide several times.” She added that the family saw legal action against Instagram to test their “strong suspicions” about the culprit as the only way to deal with the bullying, as police had been unable to resolve the issue.

Instagram, an American company, argued it is a neutral platform acting as an intermediary, not responsible for content, and bound to protect user privacy. Jens van den Brink, acting for Instagram, reportedly said it found itself “between a rock and a hard place”. The fake account has been taken offline. The court will give a decision on whether to force Instagram to reveal the name of the alleged bully by 11 July. Last year a 21-year-old Dutch woman called Chantal successfully took Facebook to court to find out who had posted a “revenge porn” sex film of her online, and an IP address involved is currently being investigated. Meanwhile, the Dutch government is planning to tighten rules to protect children online this year, making “sex chat” and sexual extortion of minors a crime.
© The Dutch News


How Do FB “Community Standards” Ban Muslim Civil Rights Leader and Support Anti-Muslim Groups?

By Mandie Czech

27/6/2016- Facebook is a social network that has over one billion members. It’s a place for businesses and artists to connect their facilities or art with people, and a place for families and friends to connect and share their feelings, thoughts, and activities. Facebook prides itself on being inclusive of everyone. Recently, there has been an increase of Islamophobia and Islamophobic rhetoric on Facebook and founder Mark Zuckerberg vowed to fight this hate speech. So how did Ahmed Rehab, Executive Director of the Council on American Islamic Relations (CAIR) – Chicago, get kicked off Facebook – twice – after posting criticism of Donald Trump’s own Islamophobic comments?

In a Facebook posting on December 9, 2015, Zuckerberg wrote,
I want to add my voice in support of Muslims in our community and around the world. After the Paris attacks and hate this week, I can only imagine the fear Muslims feel that they will be persecuted for the actions of others. As a Jew, my parents taught me that we must stand up against attacks on all communities. Even if an attack isn’t against you today, in time attacks on freedom for anyone will hurt everyone. If you’re a Muslim in this community, as the leader of Facebook I want you to know that you are always welcome here and that we will fight to protect your rights and create a peaceful and safe environment for you. Having a child has given us so much hope, but the hate of some can make it easy to succumb to cynicism. We must not lose hope. As long as we stand together and see the good in each other, we can build a better world for all people.

Given that Zuckerberg made such a bold statement and openly told Muslims that Facebook is a welcoming environment, it seems to be a contradiction when a Muslim community leader critical of bigoted speech towards Muslims gets banned by Facebook. Most recently, on his personal Facebook page, Ahmed Rehab criticized Republican presidential candidate Donald Trump, calling out Trump’s “false and racist accusation that Muslims refuse to assimilate” and asking why Trump wishes to bestow “Nazi mindsets onto our country.” After multiple pro-Trump trolls likely reported his posting to Facebook, Rehab was banned from posting for two days. Rehab was exercising his right to free speech, which is constitutionally protected under our First Amendment. After he was allowed to post again, Rehab made commentaries regarding his experience being banned.

He posted again about Donald Trump, criticizing Trump’s Hitler-like attitudes. Within a matter of hours, Rehab was again banned from posting on Facebook for three days for speaking his mind and using free speech. During the start of his second ban, Rehab commented that Facebook apologized for his first ban stating,
“Facebook sent me an apology, blaming the error on an employee, and yet still didn’t lift the ban despite several requests. This may not be a conspiracy, just a stupid, incompetent company at work.” Facebook originally told Rehab that his posts violated “community standards”, but a quick check of Facebook’s standards shows Rehab’s commentary not to be in violation.

This incident raises other questions. Why does Facebook ban and ask questions later? What does Facebook do if the employee reviewing the claim of a standards violation is prejudiced? Why can’t you, the accused, fight your case more effectively with Facebook and be allowed to see the claim made against you or your post? It isn’t a fair system if you can’t defend yourself. If Ahmed Rehab is banned consecutively for exercising his free speech, why aren’t so many others challenged in the same way? Consider the Facebook pages dedicated to anti-Islam/anti-Muslim hate groups such as: “Exposing Islam,” “Stop Islam,” “Islamic morality is immoral,” “The Truth about Islam,” “Just Say No To Islam,” “North American Infidels,” “Ban The Burqa,” “Anti-Islam Alliance,” Bureau of American Islamic Relations (BAIR), “Women of the World United Against Islamic/Muslim Sharia Law,” “Pamela Geller’s Official page,” “A Cult Called Islam,” and “Bare Naked Islam.” These hate groups cause distress and fear for Muslims, yet Facebook has no problem allowing these pages to operate even though many of them advocate killing, Islamophobia, and oppression against Muslims.

How do blatant attacks against a group of people qualify as “free speech” when the speech is clearly violent and threatening in nature? Most of those pages talk about killing Muslims, banning Islam and Sharia law, and practicing violence against anyone who is or looks Muslim. If Facebook actually holds community standards, these pages and individuals should be blocked for spreading violence and hate, not to mention the insinuation of killing people. While language is protected under our First Amendment, threatening language that insinuates violence, such as what is found on those pages, cannot logically be deemed acceptable under those community standards. Yet for some reason, those pages rarely find themselves in trouble. When I reported an Islamophobic page, I was greeted with a friendly message from Facebook telling me that they reviewed my request but found the language did not violate community standards.

This wasn’t the first time I was surprised to find that these “community standards” don’t seem to apply to everyone. A few months ago my friend took a photo of herself and me; we were both wearing hijabs. I made that photo my Facebook profile picture. When I commented in rebuttal to someone who was badmouthing the Prophet Muhammad and the Qur’an, I was confronted with hostility and Islamophobia. I was called a “terrorist” and this individual suggested I was going to “don a suicide belt to kill people.” I reported this man and his commentary to Facebook for review on the grounds of “violating community standards” due to his hate speech. Within less than 24 hours I had a notification stating that his commentary was reviewed but staff found nothing wrong with it. It was suggested that I block him.

On another occasion, I ran across what I deemed to be a dangerous and mentally ill individual who was talking about killing Muslims, his guns, and how he would like to lynch President Obama. I reported him, but Facebook reviewed his comments and sent me a message about how he is practicing his right to free speech. Facebook publishes its community standards, but it doesn’t always seem to follow them itself. To further prove that Facebook has a tendency to single out Muslims, I found that when my profile picture features me without a hijab, I could comment on someone I disagree with and not get my comment removed. Yet, when my profile picture depicts me in a hijab and I disagree with someone, I have found that more frequently my commentary is removed because it “violated community standards” or was otherwise deemed inappropriate.

While Ahmed Rehab has been fully restored to Facebook, it doesn’t take away from the fact that he was banned to begin with. It never should have happened. This is the fault of Facebook’s policy of banning first and asking questions later, along with Facebook’s claim of “employee error.” It is a major problem that Internet trolls can have so much power to get someone banned, even for a short time. A large, sophisticated social media company such as Facebook needs to be able to quickly investigate whether “community standards” have really been violated before someone is banned. The standard apparently is to assume the poster is guilty until proven innocent. Social media in general can be hard to navigate, especially if you are part of a targeted minority like Muslims, who now seem to attract attacks both online and on the streets. But there is an even bigger question here. With all of the attacks and hate directed at Muslims, why doesn’t Facebook step up, act on its claim of inclusiveness, and take down pages that are spreading hate and violence against Muslims?
The views expressed in this article are the author’s own and do not necessarily reflect Chicago Monitor’s editorial policy.
© The Chicago Monitor


UK: Ex-Yorkshire mayor in racism storm over anti-Muslim and ‘Romania gypsy’ tweets

A former Yorkshire mayor faces being reported to the police over alleged racism and anti-Muslim comments on social media.

30/6/2016- Councillor Heather Venter, who was mayor of Driffield in 2013 and 2014, supported controversial posts on Twitter, but denies harbouring racist views. One tweet she ‘liked’ said: “Shouldn’t employ Muslims. Nothing but trouble.” Another, posted on April 30, read: “Sadly, looks like Romania’s Gypsy begger/pickpockets will b [sic] soon replaced by African Muslims.” She also tweeted a link to an article by a neo-Nazi website that read: “White South Africans march in London against white genocide.” The controversy comes after a website accused the councillor of racism for her Twitter activity. George McManus of the Beverley and Holderness Labour Party said the tweets ‘liked’ by Coun Venter were “designed to cause offence”.

He added: “There’s no room for remarks like these in a civilised society. I am particularly concerned that this person occupies a position of authority as a councillor and that this impacts badly on the reputation of the good people of Driffield. They are in my opinion designed to cause offence and to cause racial and religious hatred. “I intend to ask Humberside Police to consider whether or not they constitute an offence under section 127(1) of the 2003 Communications Act.” Coun Venter has denied the allegation and said: “I can’t understand it because I’m not racist.” She told The Yorkshire Post she could not remember the tweet about employing Muslims, and said: “I just can’t understand how I would have favourited it. I can’t remember doing that. I like Muslims. I’m pro-Palestinian for God’s sake.” But she defended the tweet about Romanians. She said: “It’s happening. It doesn’t mean I don’t like them. Yes they are coming, it’s a fact. You have to see things as they are. Certainly there’s no malice behind it.”

On the South African tweet, she said: “I lived in South Africa. There was a protest, a march in London, about white genocide, because farmers are getting murdered every week.” Coun Venter has 59,800 likes and says Twitter had been her “lifeline” since her serious illness, especially in the last couple of months when she had been housebound. She added: “It beggars belief I have 59,800 likes - doesn’t it make you think it is a concerted effort to get at me?” She said she was being called a Nazi on Facebook: “It’s all over Facebook apparently. It said I was a Nazi and I should be made to resign. I could ask the police whether it is an offence that someone local has said I am a Nazi. It’s a two-way thing. I could be accusing them of libel, assassination of character. I just find it all pathetic.”

The former mayor moved back to the UK in 1998, having lived in South Africa for a number of years with her South African husband. During her time there, she said she took pity on out-of-work African men, by providing them with food. She added: “When you see a man and all they’ve got is their pride - they’ve got nothing, absolutely nothing - and for a man to come and stand at your gate and beg for food, you feel for them. “My gardener would stand every day waiting to be picked up for work and if they didn’t work, they didn’t eat. “I used to give him chicken and stuff like that.”

And when asked about her ‘liking’ a tweet that attributed knife crime in London to black people, she said: “That’s a sad undeniable fact. That doesn’t mean that I don’t like blacks. I have a lot of black friends.” Claire Binnington, clerk of Driffield Town Council, said: “We’ve asked the person who made us aware of this to contact standards at East Riding Council, which is the procedure for people who have complaints. She does make clear on Twitter that she speaks for herself and not the town council.”
© The Yorkshire Post


UK: Minister for hate crime won't use Twitter because it's so awful

Karen Bradley, the Home Office Minister responsible for fighting hate crime, stays off Twitter because it's too full of hatred

29/6/2016- The Government Minister responsible for fighting hate crime has revealed she stopped using Twitter because she was sick of the abuse. Midland MP Karen Bradley made the admission after she was asked about racism, anti-Semitism and intimidation. She said: “I am not on Twitter now. There is a reason I am not on Twitter. I just decided I didn’t want to listen to this kind of nonsense.” And later, she praised Birmingham MP Jess Phillips (Lab Birmingham Yardley) for staying on Twitter despite the abuse she has received. Mrs Bradley said: “The honourable lady I know has experienced far, far more than her share of abuse, particularly online, and she’s a stalwart for standing up and being there, and still being on Twitter. I’m not quite sure why she is.” Karen Bradley is the MP for Staffordshire Moorlands and a Government Home Office Minister responsible for hate crime, extremism, anti-social behaviour, violence against women and girls and other issues.
© The Birmingham Mail


UK: Man arrested in London over suspected racist social media posts

Detectives investigating extreme right-wing, Islamophobic and anti-Semitic postings on social media have arrested a man.

29/6/2016- The 41-year-old was held in London on Wednesday morning on suspicion of inciting racial hatred. Scotland Yard said the man, who is from London, was arrested at approximately 6.30am as part of a pre-planned operation in north London by officers from the Crime Disruption Unit within the force's Counter Terrorism Command, supported by the Territorial Support Group. A Met Police spokesman said: "Detectives executed search warrants at two addresses, both in north London, as part of this investigation, which relates to social media postings of an extreme right wing, Islamophobic and anti-Semitic nature. "Searches at one of the addresses are ongoing. A number of digital items have been seized at one of the properties." The arrested man has been taken to a north London police station where he remains in custody.
© The Press Association


Brexit: Facebook page highlights racism after vote triggers spike in hate crimes

Critics have attempted to shut down a Facebook group which highlights racist encounters amid a spike in hate crimes after the EU referendum.

27/6/2016- Sarah Childs, 32, set up the Worrying Signs page with two university friends to highlight a surge in xenophobic incidents since Britain voted to quit the EU. Since the group was set up yesterday it has amassed more than 7,500 members as users flood the page with stories of racist confrontations. Stories include one man who said a “Go Home” message aimed at a Romanian pupil was scrawled on a toilet wall at his daughter’s school, and chants of “Make Britain white again” on London’s Portland Street. And while thousands of people have praised attempts to highlight the fears, others have left messages criticising the page and urging Facebook to ban it.

Ms Childs, a community enterprise consultant from Sheffield, said: “The idea of putting all the stories together was that it’s easy to dismiss one story, but when you have a few hundred all together they make a bigger impression. “It’s harder to say, ‘oh, it’s just a minority of people it’s happening to, it’ll all blow over’. “Maybe it is a minority. I don’t know, but I don’t think that’s the most important point here. This is an issue that is affecting a lot of people and we can’t allow it to continue.” “I’ve experienced some harassment from people who don’t like what we are doing, some people trying to get me banned from Facebook to interrupt what we’re doing.

“I have also received some messages from angry Leave voters who feel that I’m trying to paint them personally as a racist, but that isn’t what this campaign is about at all. “This is a problem that has arisen in the aftermath of the referendum but it’s not really about the referendum anymore. “It’s not about Leave or Remain. We don’t think everyone who voted Leave is a racist. We just want to highlight a problem that needs to be addressed going forward.” Ms Childs called on authorities to address the growing fears of hostility towards foreign residents. She added: “While we have politicians, lawyers, and economists working on the legal and economic ramifications of leaving the EU and drawing up a plan for that, we don’t see anyone talking about a social plan.

“If we’re going to make this country a better place for all of us to live in we need to plan to heal our social divides as well, and to ensure that every person in our communities feels welcome and safe." She added: "We currently have over 7,000 members in the group and more requesting to join every minute. This is a group that has only been in existence for 26 hours. "Our initial goal was simply to draw attention to the rise in racist and xenophobic harassment and violence in the wake of the referendum. “It’s clearly something that is resonating with a lot of people. Our hope is that it becomes big enough to be more than just an awareness-raising initiative, becoming an actual spur to action and leadership on this matter.”

London Mayor Sadiq Khan urged Londoners to "stand guard" against hate crime following Britain's decision to withdraw from the European Union. The rallying cry came 24 hours after the Met confirmed it was investigating allegations of criminal damage after racist graffiti was reportedly smeared on a Polish community building in Hammersmith.
© The Standard


Google and Facebook Quietly Escalate Their Cyber-War on IS

The two tech giants have stepped up their fight using the same technology used to remove videos with copyrighted content.

27/6/2016- Silicon Valley has long struggled with how to police inappropriate or even criminal content. Earlier this year, Microsoft, Facebook, YouTube, and Twitter agreed to work with the European Union to identify and combat hate speech online. The problem these companies face is that they often rely on users submitting and flagging material, but the concern is that if companies start taking down users’ posts themselves, they run the risk of being seen as self-censoring. Now, though, at least two tech companies have turned to automation to remove extremist content from their platforms. YouTube and Facebook are among a group of tech giants that have quietly begun to use automation to eradicate videos featuring violent extremism from their Web sites, Reuters reports.

Two sources tell the news outlet that the technology the companies are utilizing is the same used to automatically identify and delete copyright-protected content, though it’s unclear how much of the process is automated. (Google, Facebook, and others are already using automation to eliminate child pornography on their platforms.) The companies’ end goal is not to identify new extremist videos posted to their platforms, but to prevent re-posted material that’s already been deemed inappropriate from spreading, including Islamic State videos. Neither YouTube’s parent company Google nor Facebook would confirm the reports, nor will they discuss the use of such automation publicly, Reuters’ sources say, partially out of concern that terror groups will learn to circumvent the technology.
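The matching step described above — blocking re-posts of material that has already been taken down, rather than judging new content — can be sketched as a hash blocklist. This is an illustrative toy under assumed names, not the companies' actual (undisclosed) system: it uses an exact SHA-256 digest, whereas the systems reported here use "robust" perceptual hashes that survive re-encoding and minor edits.

```python
import hashlib

# Hashes of material that moderators have already deemed violating.
blocked_hashes = set()

def fingerprint(data: bytes) -> str:
    """Return a fingerprint for an uploaded file.

    An exact SHA-256 digest stands in here for the robust perceptual
    hash a production system would use.
    """
    return hashlib.sha256(data).hexdigest()

def flag_removed(data: bytes) -> None:
    """Record a file that has already been taken down."""
    blocked_hashes.add(fingerprint(data))

def is_repost(data: bytes) -> bool:
    """Check whether an upload matches previously removed material."""
    return fingerprint(data) in blocked_hashes

# A clip is removed once; the identical bytes are later re-uploaded.
clip = b"...video bytes..."
flag_removed(clip)
print(is_repost(clip))      # True: known material, blocked automatically
print(is_repost(b"other"))  # False: unseen content, left to human review
```

Note the trade-off the article implies: an exact hash like this is trivially evaded by re-encoding the file, which is why the deployed systems reportedly borrow the fuzzier matching technology built for copyright enforcement.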

The report comes amid growing calls by political leaders for tech companies to fight terrorist propaganda on their own platforms. Shortly after the terrorist attack in Orlando that left 50 dead, presumptive Democratic presidential nominee Hillary Clinton began to urge tech companies to combat extremist propaganda from the likes of IS online. “As president, I will work with our great tech companies from Silicon Valley to Boston to step up our game,” Clinton said in a speech. “We have to [do] a better job intercepting IS’s communications, tracking and analyzing social-media posts, and mapping jihadist networks, as well as promoting credible voices who can provide alternatives to radicalization.” Clinton hasn’t called for blocking content online, though Donald Trump, now the presumptive Republican presidential nominee and Clinton’s primary opponent, has.

Following Apple’s public spat with the F.B.I. over its refusal to unlock an iPhone belonging to the San Bernardino shooter, Trump called for a boycott against Apple and argued in favor of the United States closing off parts of the Internet to thwart ISIS, though it wasn’t entirely clear what he meant.
© Vanity Fair


USA: Halting the hate

A new technique for removing radical propaganda

25/6/2016- American officials referred to Anwar al-Awlaki as a senior recruiter for al-Qaeda. After being connected to numerous terrorist attacks, in 2011 he became one of the first United States citizens to be killed by an American drone. Yet Awlaki’s online lectures continue to inspire Islamic extremists nearly five years after his death. His videos are thought to have helped radicalise those responsible for the attack this month on a gay nightclub in Orlando, for the shootings in 2015 at the Inland Regional Centre in San Bernardino and for the Boston Marathon bombings in 2013. Once such extremist videos appear online they never disappear. YouTube removed hundreds of Awlaki’s videos in 2010. But a search of the platform reveals thousands of copies remain in circulation. Now a new technology promises to help prevent extremist videos from spreading on the internet.

The technique, known as “robust hashing”, was developed by Hany Farid at Dartmouth College in Hanover, New Hampshire, working in partnership with Microsoft. In essence, it boils down a photograph, video or audio file into a unique numeric code. To generate a code for a photo, for example, the image is first converted to black and white, changed to a standard size and then broken up into squares. Dr Farid’s algorithm then calculates the variation in intensity (the brightness of the pixels) across each of the cells in this grid. Finally, the intensity distribution of each cell is combined to create a 144-digit signature (or “hash”) for each photo. The technique can identify photographs even if they have been altered in minor ways (if a photograph’s colour is changed, for example, or if marks are made on it). Dr Farid estimates that his software can check up to 50m images a day. Importantly, there is no way to reconstruct a photograph from its hash.
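The hashing steps described above — grayscale, standard size, grid of cells, per-cell intensity variation, 144-digit signature — can be sketched as follows. This is a toy reconstruction from the article's description only: Dr Farid's algorithm is unpublished, so the 12x12 grid, the 4x4-pixel cells, and the quantisation step are all assumptions chosen to yield a 144-digit hash.

```python
def robust_hash(image, grid=12):
    """Hash a grayscale image (list of rows of 0-255 intensities).

    The image is resized to a standard size, divided into a grid of
    cells, and the intensity variation (standard deviation) of each
    cell is quantised into one digit of the signature.
    """
    size = grid * 4  # standard size: 4x4 pixels per grid cell
    h, w = len(image), len(image[0])
    # Nearest-neighbour resize to the standard size.
    resized = [[image[r * h // size][c * w // size] for c in range(size)]
               for r in range(size)]
    digits = []
    for gr in range(grid):
        for gc in range(grid):
            cell = [resized[gr * 4 + r][gc * 4 + c]
                    for r in range(4) for c in range(4)]
            mean = sum(cell) / len(cell)
            var = sum((p - mean) ** 2 for p in cell) / len(cell)
            # Quantise the per-cell variation to a single digit 0-9.
            digits.append(str(min(9, int(var ** 0.5 // 8))))
    return "".join(digits)  # 144 digits for the default 12x12 grid

def distance(h1, h2):
    """Hamming distance between two signatures: a small distance means
    a likely match even after minor alterations such as recolouring or
    small marks, which is what makes the hash 'robust'."""
    return sum(a != b for a, b in zip(h1, h2))
```

Because the signature summarises coarse intensity statistics rather than raw pixels, small edits change only a few digits (a near-zero Hamming distance), and, as the article notes, the original image cannot be reconstructed from it.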

An earlier version of the technology, called “PhotoDNA”, has already been successfully deployed to remove child pornography from social-media sites but is able to create hashes only for photographs. Working with the Counter Extremism Project (CEP), a non-profit organisation, Dr Farid has been able to extend robust hashing to video and audio files. Dr Farid has not published his work, fearing it would help people circumvent the technology or allow repressive regimes to use it to suppress dissent. Instead, he and the CEP hope to set up the National Office for Reporting Extremism (NORex). This body would help maintain a database of extremist imagery and assign robust hashes to the most brutal or dangerous. Social-media companies have yet to sign up but if past experience is a guide, they soon will.

In 2009 Microsoft donated PhotoDNA to the National Centre for Missing & Exploited Children, an American organisation which has built a registry of hashes from its database of abusive images. The technology, which removes hundreds of thousands of photographs each year, is used by nearly all social-media companies, including Facebook and Twitter.
© The Economist


Headlines June 2016

UN aware of racism on internet

This impulse stems from the enormous increase in hateful comments in response to the shooting of a Syrian refugee at the Slovak-Hungarian border.

24/6/2016- The UN Committee on the Elimination of Racial Discrimination (VRAX) unanimously denounced statements against minorities found in online discussions. This impulse stems from the enormous increase in hateful comments in response to the shooting of a Syrian refugee at the Slovak-Hungarian border in early May. These comments were collected and presented to the committee by the Islamic Foundation in Slovakia, VRAX informed in a press release. Customs officials close to Veľký Meder flagged down four cars full of migrants that entered Slovakia from Hungary in May 2016. One of the vehicles refused to stop, prompting the authorities to open fire at its tyres; the woman was hit in the shooting.

Moreover, the Slovak Catholic Charity and NGO Human Rights League informed the committee of repeated physical assaults on a young refugee from Somalia, including one witnessed by her young son. “Over the last year, we have registered a significant rise in hate speech against refugees, foreigners and other minorities, which has boiled over into physical attacks against the most vulnerable persons,” said Zuzana Števulová, head of the Human Rights League. “Such a situation is not acceptable and requires not only the activities of the police, prosecution and civil society, but also major political indicators and conviction from the highest authorities of the country,” said Števulová in the press release.

VRAX vice-president Irena Biháriová added that criminal sanctions cannot be the only and universal solution to the problem, despite her support for more active use of legal instruments in the fight against this phenomenon. VRAX has asked for perpetrators to be investigated and punished, and for the topic of online hateful statements to be addressed in detail by a special working group. The committee delegated Biháriová to discuss matters with the Interior Minister and the general prosecutor to enhance cooperation in preventing and combating extremism and radicalism, the press release reported.
© The Slovak Spectator


New Zealand: Cyberbullying: The media should practise what they preach (commentary)

As so often happens with the rapid uptake of technology, we’re quickly forced to confront new ethical dilemmas.

23/6/2016- As so often happens with the rapid uptake of technology, we’re quickly forced to confront new ethical dilemmas. Cyber-bullying is proving one of the great unforeseen challenges of our time. It’s admirable our media are now showing leadership with an It’s Not Okay–style campaign to discourage bullying and abuse on social media. But this will only resonate if the media takes responsibility for its own contributions to this pernicious social problem.

The internet has been the biggest democratic boon to human communication since the printing press. Yet the online revolution has also put the media in a spiral of financial vulnerability and many outlets have ramped up their salacious, celebrity and “click-bait” content in a bid for survival. This creates the optimal environment for social bullying. The more lurid the story, the nastier the comments and the wilder the “social media storm” that acts to justify the news judgment. The media need not actively encourage commenters to “hate on” subjects, but too often they fail to provide a handbrake by adequately moderating abuse from comments threads.

Even state-funded Radio New Zealand, which need not stoop to click-bait for survival, has been inadvertently caught out, by not dealing in a timely manner with a slew of ugly and racist comments about the Prime Minister on its site earlier this year. It would be a useful display of anti-bullying leadership for media outlets to provide no comments function at all unless it is adequately moderated. Likewise, they might take note that some of the worst online offenders are journalists and columnists. The Press Council recently upheld a complaint against a New Zealand Herald journalist who went on Twitter to bait a public figure who was a subject of his pending news story.

Social media are public information outlets no less than newspapers or radio stations. What members of the media do there reflects on the standards and ethics of their employers. It’s beyond ironic that the media have run innumerable stories about employees being disciplined and even fired for questionable or distasteful online posts, while media managers seem blind to some of their own writers’ online aggro in forums such as Twitter. This has included obscenity and vilification – not just of politicians or newsmakers but of ordinary New Zealanders, including young women, just doing their jobs, who have dared to displease. If the media and their corporate sponsors aren’t aware of the damage being done to their brands, they should be. Enforcing professional standards of conduct is critical.

Beyond that, most of us, armed with common sense and empathy, can tell the difference between gratuitous bullying – which may meet the definition of harmful digital communication – and fair comment and criticism, which is healthy and necessary. However, this is an area in which we should tread carefully. No matter how well-intentioned the desire to repress abuse and hate speech, we risk crimping freedom of expression. In critical respects, the internet has changed nothing. The most effective remedy for objectionable speech remains, as always, not to silence or gag those with whom we disagree but to provide more opportunities for free speech. Yet our new ability to access a like-minded cyber-community can make us feel entitled to shut down those whose opinions we dislike. This is a variant of bullying, distinguishable from the standard kind only because of its self-righteousness: someone says something others find sexist, racist or unscientific and keyboard warriors propose a boycott and lead a massed online beat-up.

Sustained abuse, silencing and threat of income loss – bullying doesn’t come much worse. Yet, too often, people feel virtuous in such pile-ons because they’re only trying to silence views inimical to their personal community. It’s righteous for us to “un-person” them, but Stalinist or fascist if they try to stifle us. Amid the torrent of comments, the job of the media is clear. It is to facilitate the expression of facts and opinions as a civilised and informed function of a healthy society. It is never to simply stand back and watch bullying work its repressive harm on our freedom.
© The New Zealand Listener


Auschwitz Game Highlights Serious Holes in Google’s Review Process

Controversy raged this week over news that the Google Play store had allowed a free mobile game that promised players could “live like a real Jew” at Auschwitz.

23/6/2016- For the second time in a month, Google’s review process was brought into serious question. But now, the game’s creators have come forward to say that was the point of the game. TRINIT, a vocational school teaching video game design in Zaragoza, Spain, asked their students to design games that would test the strength of Google’s policy on hateful speech and inappropriate imagery during the review process, the institute told The Forward in an email. “Surprisingly, Google denied almost all of the test apps, but [the Auschwitz game] was approved,” the institute said. TRINIT said it pulled the game, which it said was nonfunctional and only included a start page, on Sunday night after realizing it had sparked media controversy. The institute said it received a notice from Google later that night notifying it that the app had been reported several times. Google confirmed to the Forward that the app was pulled from its store on Monday.

In addition to its Auschwitz game, TRINIT said it chose to pull other test apps from Google Play, including apps named “Gay Buttons” and “Kamasutra Dices.” The school said it instructed students to test Google’s app policy by specifically testing themes corresponding to questions on a Google survey used in the app approval process. One question on the survey, shared with the Forward by TRINIT, asks whether the app under review contains symbols or references to Nazis. Although the school said it replied yes to the survey question, Google still approved the submission. A Google spokesperson said, “While we don’t comment on specific apps, we can confirm that our policies are designed to provide a great experience for users and developers.” “This clearly indicates that Google needs to be more vigilant about its review process,” said Jonathan Vick, assistant director of the Anti-Defamation League’s cyberhate response team.

However, Vick also finds blame with the way TRINIT conducted its experiment and remains skeptical of the app’s true purpose. Vick told the Forward it concerns him that the school felt it was sufficient to take down the offensive app without issuing a statement, and he called on the school to explain itself in public. “Review is a human process and any time people are injected into the equation, the margin for error increases,” Vick said. “Since the Google review process isn’t transparent, we don’t know where in the review chain someone approved the app, but it means more training might be needed for Google employees,” he said. “If real, the experiment speaks for itself,” Vick said.

Google launched a new app review process last year with the goal of catching apps that violate its policies on hateful speech before they reach the Google Play store, including both machine and human review elements. However, the company is still in the process of fine-tuning the process and relies heavily upon community reporting to review the millions of game submissions it receives.
© The Forward


Nigeria: Bill to protect social media users against hate speech passes first reading

22/6/2016- The House of Representatives on Wednesday passed for first reading a Bill for an Act to Provide for the Protection of Human Rights online. The bill, which was sponsored by Rep. Chukwuemeka Ujam (PDP-Enugu), is titled “Digital Rights and Freedom Bill”. Presenting the bill, Ujam said that it sought to guard and guide Nigerian internet users on their rights and to protect those rights. According to him, Section 20(3) provides against hate speech online, while Section 12 of the Bill outlines the process to be followed before access is granted to governmental agencies and others to the personal data of citizens. He said that the bill also provided for the protection of citizens' rights to the Internet and its free use without undue monitoring. He added that it was targeted at ensuring openness, Internet access and affordability as well as the freedom of information online.

The lawmaker said that Nigeria lacked a legal framework for the protection of internet users, in spite of being a subscriber to international charters which recognised freedom and access to the Internet as a human right. Some of the charters, he said, were the African Union Convention on Cyber-Security and Personal Data Protection of 2014. Contributing to the debate, Rep. Aminu Shagari (APC-Sokoto) said that the bill was aptly designed for the protection of persons online. On his part, Rep. Sani Zoro (APC-Jigawa) stressed the need to create awareness on the details of the bill to prevent the public from misconstruing it as legislation that will restrict the freedom of internet users in the country. The bill was unanimously passed through a voice vote by the lawmakers. The Speaker of the house, Yakubu Dogara, referred the bill to Committees on Telecommunications and Human Rights for further legislative action.
© The Daily Trust


UK: Far-right groups incite social media hate in wake of Jo Cox’s murder

20/6/2016- Police are being urged to investigate extreme right-wing groups in Britain and their incitement activities after a series of hateful messages were published on social media in the wake of Jo Cox’s murder. Nationalist groups have been accused of glorifying Thomas Mair, Mrs Cox’s accused killer, crowing about the attack and making excuses for it. It comes amid concern about the rise of the far right in pockets of the UK, notably in Yorkshire, with violence at anti-immigration marches and increasing anti-Muslim hate crimes. In the days since Mrs Cox’s death scores of members of far-right organisations have taken to social media to make threats to other MPs and to crow about the fate of the 41-year-old mother, who was a prominent campaigner for remaining in the EU.

The northeast unit of National Action, which has campaigned for Britain to leave the EU, tweeted: “#VoteLeave, don’t let this man’s sacrifice go in vain. #JoCox would have filled Yorkshire with more subhumans.”

The police northeast counter-terrorism unit confirmed it was probing a number of “offensive messages on social media and extreme social media content”. A spokesman said: “We are conducting checks on this material to establish whether or not any criminal offences have been committed.” There have been numerous other disturbing messages from far-right supporters in other areas of the country, resulting in calls for police to monitor and investigate online hatred. A member of the English Defence League, another far-right group, posted on Facebook: “Many of us have been saying for years that sooner or later “SOMEONE” was going to get killed. No one thought it was going to be one of “them” (left-wing) who was going to be the first victim of the coming civil unrest heading towards Europe ... BUT he had reached his breaking point (like many of us) and snapped.”

One Twitter user described Mrs Cox as a “traitor” while another said she was a “threat to the UK” and described Mr Mair as an “Aryan warrior”. Another group, calling itself the Notts Casual Infidels, linked to a news story of Mrs Cox’s murder and posted on Facebook: “We knew it was only a matter of time before we take it to the next level. We have been mugged off for too long.” A man associated with Pegida UK, an anti-Islam group, posted on Facebook: “From today the game changed as a good friend said have a look at today’s date 16/06/2016. Next time the government must listen to its people.”

Matthew Collins, head of research at Hope not Hate, a charity that seeks to defeat the politics of extremism within British communities, said he was concerned that “there are a number of tiny, right-wing organisations that are taking great glory and satisfaction from Jo’s death”. He added: “I think the police should look at the motives behind some of those people that are continuing to speak so much hatred and division.” Mr Collins said that although there were many people who did not agree with or vote for Mrs Cox, “they had the decency to recognise the contribution she made to wider society”. Referring to hateful messages posted on social media, he said: “These people are so on the margins of society that they no longer have any sense of moral decency or moral codes. I think the police should look at the motives behind some of those people that are continuing to speak so much hatred and division and are well aware of what such words have led to. These people are engaged in a whole network of tearing down the moral fabric of society.”

Stephen Kinnock, the MP who shared an office with Mrs Cox, was subjected to “particularly venomous” online abuse last week after an article about his family’s support for the Remain campaign. One email threatened violence and has been reported to the police, he said. Mr Kinnock said the far right were a “shady bunch” who had many of their “views legitimised by the referendum and the choice of the Leave campaign to go hard on immigration”. “I get the sense that a lot of rhetoric around the Leave campaign would have been classified as far right only five years ago but now it’s more mainstream. “There seems to have been a drum beat over the years for venomous rhetoric. A lot of this referendum would have been classified as pretty extreme. “Many MPs have a siege mentality because of the abuse, so I do think something needs to be done about it, but the question is what. You’ve got to get a balance between free speech and protecting people’s security. The last thing we’d want to do is never hold surgeries, then the bad guys have won.”
© The Times


India: To counter hate messages online, Bareilly cops seek 2k ‘digital volunteers’

To keep an eye on "online rumour-mongering", police in the Bareilly division are planning to rope in more than 2,000 'digital volunteers' for the task.

20/6/2016- In poll-bound UP, these volunteers will keep a close eye on "communally-sensitive messages and polarization propaganda" that have the potential to disturb peace in the region. Deputy inspector general of police (DIG), Bareilly Range, Ashutosh Kumar said, "We need at least 2,000 digital volunteers to tackle rumours and wrong information posted on social media sites. The director general of police has instructed every district to engage digital volunteers. As of now, the response from our range has been cold because of a lack of awareness about the initiative, but we are working towards it." Explaining the importance of engaging these volunteers, the DIG said, "If any objectionable content is posted on any social media platform, the first step for us is to lodge an FIR. Police then contact cyber police stations in Agra and Lucknow, from where officials write to the headquarters of these sites in foreign countries and the process of removing the content is initiated. It's a long process and much damage is done by the time this procedure is completed." He added, "We know that there are other ways of getting such content removed instantly, like on Facebook if a post receives a certain number of 'dislikes'. For such situations digital volunteers will have a huge role." These volunteers will also play an important part in informing the public through social media about what actually happened, he said.

According to police, anyone who is a regular social media user can become a digital volunteer. A person who is interested in maintaining peace in their neighbourhood and is well-versed with social media can volunteer. To become a member, a person can follow the official accounts of the police on social media and inform them about it. "A few of the digital volunteers can reach the scene and help make people aware of the truth. A riot-like situation takes place at many locations due to false rumours spread on WhatsApp, Facebook, Twitter, Instagram and other such sites," he said. "As UP is gearing up for state assembly elections, scheduled for next year, there are chances that a few persons will try to mislead people for their communal agenda, creating law and order problems. To thwart their attempts, we need such initiatives," he said. "In fact, we also have software through which we can see how many persons are talking about a certain issue by typing a few keywords. We can also see how many of them are spreading wrong information and trace their IP addresses," said the DIG.
© The Times of India


New Zealand: Cyberbullying: Retiring judge leads new centre to assess laws

Hub at Auckland University to provide research and development into technology’s effects on legislation.

17/6/2016- A new national cyber-law centre is being set up and its first project is putting the Harmful Digital Communications Act under the microscope. The New Zealand Centre for ICT Law, which opens next month at Auckland University, aims to provide an expanded legal education for students and provide research and development into the impact electronic technology has on the law. The centre's new director, retiring district court Judge David Harvey, said he regarded the centre as a vital hub for both the legal fraternity and the public. "More and more IT is becoming pervasive throughout our community and it's providing particular challenges and interesting developments as far as the law is concerned." Research was already underway on the effectiveness of the Harmful Digital Communications Act. Future projects would include digital aspects of the Search and Surveillance Act, Telecommunications Act and Copyright Act.

Mr Harvey, who consulted with the Law Commission on the legislation, said significant trends were already emerging in prosecutions taken under the Harmful Digital Communications Act. In its first year, 38 cases had come before the courts, which he described as surprisingly high for such a recent law. "It's quite a few for a relatively new piece of legislation that's dealing with not a new phenomenon but a new technology, and it seems that the prosecution people with the police have been able to grapple with some of the aspects of this." Researchers had already noticed that a significant number of cases involved revenge porn and a broad swathe of electronic media used to harm others. "[The act] catches any information that's communicated electronically. If you're making a nasty telephone call using voice on your smartphone, that amounts to an electronic communication. So it's the scope of the legislation and who's being picked up that becomes very, very interesting."

He said it remained troubling to see the level of harm inflicted through technology. "It's a matter of concern that people seem to lack the inhibition that you would normally expect in what they say and what they do. "A number of cases have involved posting intimate photographs and intimate videos online with the intention of harming somebody else. The number of occasions on which that has occurred is surprising. "I think the level of anger that is expressed or at least the intensity of the language - hate speech - is also a matter of concern." Mr Harvey expected the second component to the act, the civil agency to be headed by NetSafe, would have an enormous impact. "It will be interesting to observe how many applications are made to the approved agency in the first place and subsequently how many are settled or resolved or go on to the court. I imagine there will be quite a bit of activity coming up once the civil enforcement regime is in place."

While it was still early days he was confident the act was providing help to people being cyberbullied. "It won't solve the problem in the same way that making murder a crime doesn't stop murder but at least it will provide people with a remedy, with a place to go which they haven't had before."

© The New Zealand Herald


Britain First: The far-right group with a massive Facebook following

16/6/2016- The leader of Britain First has distanced the far-right group from the murder of Labour MP Jo Cox, despite several witnesses confirming that the killer shouted "Britain First" three times during the attack in Leeds on Thursday. "At the moment that claim hasn't been confirmed - it's all hearsay," Paul Golding said. "Jo Cox is obviously an MP campaigning to keep Britain in the EU so if it was shouted by the attacker it could have been a slogan rather than a reference to our party - we just don't know. "Obviously an attack on an MP is an attack on British diplomacy - MPs are sacrosanct. We're just as shocked as everyone else. Britain First obviously is NOT involved and would never encourage behaviour of this sort. "As an MP and a mother, we pray that Jo Cox makes a full recovery." In a video on the party’s website he said the media had “an axe to grind”. He added: “We hope that this person is strung up by the neck on the nearest lamppost, that’s the way we view justice.”

What we know about the group
Formed in 2011 by former members of the British National Party, Britain First has grown rapidly to become the most prominent far-right group in the country. While it insists it is not a racist party, it campaigns on a familiar anti-immigration platform, while calling for the return of “traditional British values” and the end of “Islamisation”. The party says on its website: “Britain First is opposed to all mass immigration, regardless of where it comes from – the colour of your skin doesn’t come into it – Britain is full up.” Although it claims to have just 6,000 members, Britain First has managed to build an army of online fans, mainly by using social media to campaign for innocuous causes such as stopping animal cruelty, or wearing a poppy on Remembrance Day, and appealing for users to “like” its messages.

It now has more than 1.4 million “likes” on Facebook, more than any other British political party. In a bid to garner newspaper coverage, the group has carried out mosque invasions and so-called “Christian patrols”. A march in January targeted Dewsbury, near Jo Cox’s Batley and Spen constituency, and featured 120 Britain First members carrying crucifixes and Union Jacks through the town. Mrs Cox wrote on Twitter at the time: “Very proud of the people of Dewsbury and Batley today - who faced down the racism and fascism of the extreme right with calm unity.” Britain First’s current leader, Paul Golding, stood against Sadiq Khan in the London mayoral election earlier this year. After Khan’s victory, the group announced that it would take up “militant direct action” against elected Muslim officials. In a chilling warning on its website, the group said: “Our intelligence led operations will focus on all aspects of their day-to-day lives and official functions, including where they live, work, pray and so on.” The party has a vigilante wing, the Britain First Defence Force, and last weekend carried out its first “activist training camp” in Snowdonia, at which a dozen members were given “self defence training”.
© The Telegraph


Austria: Far-right leader caught up in online racism scandal

The leader of Austria’s far-right Freedom Party (FPÖ) was caught up in yet another scandal this week after his supporters posted racist comments about Austria's football team on his Facebook page.

16/6/2016- Many of Heinz-Christian Strache’s Facebook followers started posting anti-immigrant hate speech after Austria lost their first Euro 2016 game to Hungary 2-0 on Tuesday. The comments were published underneath a post from Strache wishing the Austrian team luck with their debut game. After they lost he suggested that people keep their spirits up and that the referee was partly to blame for Austria’s loss. Some of his followers disagreed, however, arguing that having players whose families have an immigrant background on the Austrian team might be why the Austrian team lost. One poster described the Austrian team as “the amazing national team with two coal sacks”, likely referring to David Alaba and Rubin Okotie, who have a Nigerian-Filipino and Nigerian background respectively. Another user said he “could puke” when he sees “what is sold as Austria”.

Germans writing online had similar complaints about their own team. One commentator said that his team should no longer be called the German team but just “the team”, suggesting that because some of the German players' parents have immigrant backgrounds they are not true Germans. A member of the far-right Alternative for Germany (AfD) party recently also faced criticism for saying that the German team was “no longer German”, The Local Germany reported. It is not the first time that Strache has been caught up in a scandal involving comments left on his Facebook page. Only a few days ago, his followers posted death threats against Chancellor Christian Kern of the Social Democratic Party (SPÖ). The Freedom Party leader has had to ask his followers to be more moderate with their postings. The FPÖ has deemed these comments unacceptable but has also often said that it could not check each one, as there were so many posted every day.
© The Local - Austria


Imagine CYBERSPACE without HATE

By Deborah J. Levine, Award-winning author/Editor, American Diversity Report

14/6/2016- As a former target of Cyber Hate, I sat spellbound with various movers and shakers of Chattanooga’s Jewish community as we listened to Jonathan Vick, Assistant Director of the Cyber Safety Center of the Anti-Defamation League. Founded in 1913 “to stop the defamation of the Jewish people and to secure justice and fair treatment to all,” ADL’s tag line is “Imagine a World Without Hate®.” ADL began reporting on digital hate groups in 1985, exposing and monitoring groups such as StormFront, created by KKK leader Don Black. StormFront was popular with white supremacists, neo-Nazis, bigots, and anti-Semites. In recent years, StormFront has moderated its language somewhat to appear more mainstream. Its membership has grown to almost 300,000 despite reports documenting one hundred homicides committed by StormFront members (Southern Poverty Law Center).

Hate groups like StormFront pick up speed on the internet with new technologies, create global communities, raise funds, and convert the unwary into believers with sophisticated techniques. According to Vick, these groups can also intimidate into silence, disarm by hacking, encourage hate crimes, and punish by hijacking. The good news is that known groups can be better monitored on the internet and exposed, where once they operated under the radar. The not-so-good news is that the mask of anonymity of Cyber Hate can pose a huge challenge. In his 2011 address, Hate on the Internet: A Call for Transparency and Leadership, Abraham Foxman, ADL National Director, described the problem, which has only become worse with time. “Today, we have a paradigm shift, where Internet users can spew hatred while hiding behind a mask of anonymity. The Internet provides a new kind of mask - a virtual mask, if you will - that not only enables bigots to vent their hatred anonymously, but also to create a new identity overnight... Like a game of “whack-a-mole,” it is difficult in the current online environment to expose or shame anonymous haters.”

The major Internet companies wrestle with these issues, as do we. How should they define what is hateful and what violates their terms of service? How do they police the incredible number of posts, blogs, and videos posted online every minute? As companies like Facebook, Twitter, and YouTube grapple with privacy issues, the public needs to voice its concerns. Organizations like the ADL can and do influence what is uploaded and posted.

Vick discussed the Anti-Cyber Hate Working Group in Silicon Valley that ADL convened to explore these issues with tech companies. Given the current political cycle, this discussion is vital as religiously and politically motivated hacking increases. For example, Vick cited a “brag sheet” listing 35,000 websites that were hacked and left bearing anti-Semitic messages. The messages include “memes” that perpetuate stereotypes of Jews. Some are anti-Israel, others depict Jews as money lenders. They can be almost impossible to monitor, such as the (((Hugs))) graphic that identifies Jews on Twitter. There was also a Google Chrome app that identified Jews on any given page; ADL contacted Google, which removed it, but this is an example of how technology is evolving.

Technology adds to the aggressive presence of hate groups, as anyone who has been hacked can confirm. When my online magazine was hacked and hijacked, the FBI traced the perpetrators to a terrorist group in Iran. The American Diversity Report was erased and replaced by a single screen claiming responsibility and threatening my life with unrepeatable epithets topped off by “Death to mother-f***** Zionists!” All the sites on my webmaster’s server were similarly wiped out and replaced, whether shopping pages or golf tournaments. I was invited to leave the group, become my own webmaster, and implement my own security. All of which I’ve done in a highly motivated learning mode. Anything and anyone can be hacked. In the Target stores case, the hackers went through one of the company's service providers, an air-conditioning company. Vick cited a case on Facebook where a user named Roman Kaplan had a weak password; his account was taken over by ISIS, which then had access to all his contacts and apps. The goal is to make you feel targeted, vulnerable, and isolated.

Vick offered advice for protecting yourself against digital terrorism.
Be Aware: Google yourself and know where your name appears. How do you identify yourself and what personal information do you give? Be aware of how your information is shared on the internet by organizations, including your synagogue.

Protect yourself: Passwords are your best protection. Don’t use your name, religion, location, or personal information. Instead, pick a favorite song lyric, use caps, numbers, and symbols with it. Don’t have an online password vault with all your passwords in it. Write down the passwords in a notebook. Old-fashioned pen and paper will keep them safe.
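As a toy illustration of the lyric-based approach described above, the sketch below derives a password from a song lyric by taking each word's first letter, alternating case, and swapping in a few digits and symbols. The exact substitution rules here are invented for illustration only; any real password policy or manager should take precedence over this example.

```python
import re

# Invented substitution table for the illustration: a few letters
# become digits or symbols, as the advice suggests.
SUBS = {"a": "@", "i": "1", "e": "3", "o": "0", "s": "$"}

def lyric_password(lyric: str) -> str:
    """Derive a password sketch from a memorable lyric."""
    words = re.findall(r"[A-Za-z']+", lyric)
    initials = [w[0] for w in words]
    out = []
    for idx, ch in enumerate(initials):
        # Alternate upper/lower case across the initials.
        ch = ch.upper() if idx % 2 == 0 else ch.lower()
        # Substitute every third character where the table allows.
        if idx % 3 == 2:
            ch = SUBS.get(ch.lower(), ch)
        out.append(ch)
    return "".join(out)

print(lyric_password("way up over the hills and far away"))  # → Wu0tH@Fa
```

The result mixes caps, lower case, a digit, and a symbol while staying memorable via the source lyric, which is the article's point; a longer lyric naturally yields a longer password.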

Protect your website: Don’t host your own website. Use a reputable company and make sure that it has a phone number to contact for emergencies. Know when people visit and why. If you start getting friend requests from strangers in strange places, or see unusual traffic spikes, be suspicious.

Protect your e-mails: Have multiple email accounts for different audiences. Do not use the same password on all accounts. Watch for phishing emails and robot calls. They may appear to come from companies that look real, but the actual email is bogus. Advise staff to never open an attachment from an unknown source. Don’t click, don’t open. When in doubt, delete. Err on the side of caution.

Protect your social media presence: Know who is posting and tagging your pictures. Segregate your personal, community and professional life on separate pages. Limit the amount of personal information that you post. Know where your posts and blogs are going. Who is likely to target you? Know where your problem people are and, if you’re enraged, take time to respond. Know the terms of service and what crosses the line of acceptability and how to report an incident.

Protect your devices: Understand the inter-connectedness of devices and apps. Your mobile provider knows what you are doing. Apps know what you’re doing. When you are logged into a service, it knows what you’re doing. If you have various sites open, they can all see what you’re doing. Make no presumption of privacy on mobile devices.
© The Huffington Post


Australia: Facebook dragged into QUT racism case

Facebook has been dragged into a racial discrimination case involving three Queensland university students.

13/6/2016- Federal Circuit Court Judge Michael Jarrett on Monday ordered that the social media giant be subpoenaed for information on the account details of a Queensland University of Technology student accused of making a racist comment online. He ordered the subpoena be sent to Facebook's international headquarters in Dublin, along with 100 euros to cover international postage fees. Calum Thwaites has denied being responsible for a two-word racist post in a 2013 Facebook thread about three students being asked to leave an indigenous-only computer lab. He claims the post was not written by him and came from a fake account. Mr Thwaites is being sued for $250,000 alongside fellow students Alex Wood and Jackson Powell by Cindy Prior, the indigenous administration officer who asked the students to leave.

Mr Wood has not denied posting "Just got kicked out of the unsigned indigenous computer room. QUT (is) stopping segregation with segregation?" on Facebook after being asked to leave the lab and Mr Powell has admitted writing "I wonder where the white supremacist lab is?" However, both deny their posts were racist. Barrister Susan Anderson, representing Ms Prior, told the court on Monday Facebook should be asked to provide details about Mr Thwaites' accounts. Ms Anderson said the information from Facebook, providing it still had it, would probably be able to answer whether Mr Thwaites was behind the post. Tony Morris QC said although his client, Mr Thwaites, would be "delighted" to be proved right, the application to subpoena the documents was futile and would only "muddy the waters" of the case. Judge Jarrett will publish his reasons for allowing the subpoena in the coming days. Lawyers representing the trio have called for the matter to be dismissed, however Judge Jarrett is yet to deliver his judgment on that application.
© 9 News


Twitter Can't Figure Out Its Censorship Policy

13/6/2016- New York Times editor Jon Weisman announced he was leaving Twitter last week, thanks “to the racists, the anti-Semites, the Bernie Bros who attacked women reporters yesterday.” Enough was enough. Here’s what happened: In response to a rash of hatred on the site, Weisman’s colleague Ari Isaacman Bevacqua (also a Times editor) reported accounts that used anti-Semitic slurs and threats to Twitter support. Twitter replied that it “could not determine a clear violation of the Twitter Rules,” Weisman told me. It didn’t make sense to him. Weisman isn’t alone. A Human Rights Watch director, a New York Times reporter, and a journalist who wrote about a video game have all reported a similar phenomenon. Still more confirmed the process independently to Motherboard. They each got what they perceived to be a threat on Twitter, reported the tweet to Twitter support, and received a reply that the conduct does not violate Twitter’s rules.

When Twitter made new rules of conduct in January, the company gave itself an impossible task: let 310 million monthly users post freely and without mediation, while also banning harassment, “violent threats (direct or indirect)” and “hateful conduct.” The fault lines are showing. The hands-off response that Bevacqua received fits with the Twitter that CEO Jack Dorsey touts. Censorship does not exist on Twitter, he says. But there’s another side to Twitter, one with a “trust and safety council” of dozens of web activist groups. This side of Twitter developed a product to hunt down abusive users. It’s the one that signed an agreement with the European Union last month to monitor hate speech. It’s joined by Facebook, YouTube, and Microsoft in the agreement, and while it’s not legally binding, it’s the first major attempt to put concrete rules in place about how online platforms should respond to hate speech.

“There is a clear distinction between freedom of expression and conduct that incites violence and hate,” said Karen White, Twitter’s head of public policy for Europe. What’s not entirely clear is how Twitter is going to enact this EU agreement, though it seems the platform will rely on users reporting offensive content. The internet has always been a breeding ground for vitriol, but it has become much more visible lately. Neo-Nazis have been putting parentheses, or “echoes,” around the names of Jewish writers. Google Chrome recently removed an extension called Coincidence Detector that added these around writers’ names. The symbol represents “Jewish power,” because anti-Semites just can’t give up on their theory that Jews are behind everything bad in history. From a practical standpoint, policing hate speech on a platform with 310 million monthly users is difficult. The “echoes” don’t show up on a Twitter search or on a Google search.
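The search-evasion point above has a simple technical basis: search engines generally strip punctuation before indexing, so a query for "(((name)))" matches plain "name" everywhere. A literal pattern match, by contrast, finds the marker directly. The sketch below is an illustration of that distinction only, not any platform's actual moderation code.

```python
import re

# Match exactly a run of three opening parens, a name with no
# parentheses inside, then three closing parens.
ECHO = re.compile(r"\({3}([^()]+)\){3}")

def find_echoes(text: str) -> list:
    """Return the names wrapped in triple parentheses ("echoes")."""
    return ECHO.findall(text)

print(find_echoes("well Mr. (((Weisman))) hop on in!"))  # → ['Weisman']
print(find_echoes("no echoes in this sentence"))         # → []
```

Because the regex treats the parentheses as literal characters rather than discarding them, the marker is trivially detectable by tooling even while remaining invisible to punctuation-stripping search.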

Twitter wants to be a place of open and free expression. But it also, at least according to a statement to the Washington Post, wants to “empower positive voices, to challenge prejudice and to tackle the deeper root causes of intolerance.” “I would say that much of the anti-Semitism that is being spread on Twitter and other platforms is not new in terms of the messaging and content,” Oren Segal, director of the Anti-Defamation League’s Center on Extremism, told Motherboard. “What’s new is for people to be able to deliver their hatred in such public and efficient ways.” The “echoes” symbol has also come to be used as a marker of racism more broadly. The symbol is common on Twitter even without the extension - some writers have put the marks on their names voluntarily, to reappropriate the symbol, but others use it for hatred. One user sent Weisman a photo of a trail of dollar bills leading to an oven.

Another user tweeted in reply: “well Mr. (((Weisman))) hop on in!” This user has a red, white, and blue flag with stars, stripes, and a swastika as his cover photo. It’s a flag from the Amazon television series The Man in the High Castle, which depicts an America under Nazi control. Weisman reported these tweets to Twitter. The site didn’t remove them. Some others, though, were removed. “Suddenly I get all these reports back saying this account has been suspended,” Weisman told the Washington Post. “I don’t really know what their decision-making is,” he said. “I don’t know what is considered above the line and what isn’t.” “It’s not like this echo or this parentheses meme was in and of itself the most creative and viral anti-Semitic tactic that we’ve seen,” Segal said. “It’s relevant because we’ve come to a time of more anti-Semitism online... It represented one element of a larger trend.”

Twitter has taken action against accounts perceived as offensive in the recent past. Recently it suspended five accounts that parodied the Russian government, although now the most popular of these, @DarthPutinKGB, is back up. Since mid-2015, Twitter has suspended more than 125,000 accounts for promoting terrorism, a practice that picked up in 2013. After the March terrorist attacks in Belgium, the hashtag #StopIslam was trending. Twitter removed it from the trending topics sidebar, although many instances of the hashtag were using it in a critical light. Earlier this year, the platform revoked the “verified” status of Breitbart personality Milo Yiannopoulos, who tweets provocative messages that have been described as misogynistic and as harassment. Yiannopoulos said he reached out to Twitter twice, but he never got an answer about why he was un-verified. The platform was frustratingly unresponsive, as users who reported offensive tweets found as well.

To Twitter’s co-founder and CEO, bigotry is part of life. “It’s disappointing, but it’s reflective of the world,” he said when Matt Lauer asked him about people who use the platform to “express anger and to hurt people and insult people.” He reminded Lauer that users are free to block whomever they’d like, although he’s never blocked anyone on his account.
© Motherboard


Finland: Police ponder probe into Soldiers of Odin secret Facebook group

Police say they are considering a criminal investigation into racist messages exchanged in a secret Facebook group by leaders of the Nazi-linked Soldiers of Odin. Police chief Seppo Kolehmainen confirmed to Yle that police will try to determine whether or not any of the group’s messages are criminal in nature.

11/6/2016- In March this year, Yle obtained screenshots of a secret Facebook group maintained by leaders of the anti-immigrant group Soldiers of Odin, which was founded by Kemi-based neo-Nazi Mika Ranta late last year. One of the regular greetings used by members of the group is the salutation "Morning racists." The posts also feature members making Nazi salutes and include images of Nazi symbols. As reported earlier this week by Yle, leaders also suggested patrolling without insignia so as to be able to engage in attacks more freely, urging members to have "unmarked patrols and zero tolerance for dark skin" and to "hammer anyone who even leans to the left". Police commissioner Seppo Kolehmainen told Yle that officers will be looking into the group’s posts to see if they bear the hallmarks of criminal activity. "We are now evaluating the content of the messages to see whether or not they can be considered criminal. The National Bureau of Investigation is now responsible for the evaluation and on that basis we will determine whether or not to begin an investigation into some message or individual," Kolehmainen said.

Fresh assault conviction
Finnish news agency STT first reported on law enforcement’s intention to investigate the group and its messages. Soldiers of Odin founder Mika Ranta was convicted of aggravated assault in May. He had previously been convicted of racially motivated attacks on two immigrants in 2005. Ranta, who was previously a member of the neo-Nazi Finnish Resistance Movement, said he founded Soldiers of Odin ostensibly to protect nationals following the arrival of asylum seekers in the northern town of Kemi.
© YLE News.


Google didn’t need to delete the anti-Semitic (((Echo))) app (opinion)

The reaction from social media users was the best two-fingered response that Twitter has ever seen, as Jewish people reclaimed their identity from the trolls who hoped to use it against them
By Jacob Furedi

9/6/2016- Anti-Semitism is all the rage these days. From the emergence of far-right parties across Europe to our very own Labour party, we are constantly warned that life as a Jew is becoming rather unpleasant. Most recently, animosity towards the Jewish people has extended into the cyber sphere. An anti-Semitic app available to download on Google Chrome has made its way into the public sphere after Jonathan Weisman, deputy Washington editor for the New York Times, raised questions about why Twitter trolls were referring to him as (((Weisman))). He had just tweeted about an article criticising GOP candidate Donald Trump titled “This is how fascism comes to America”. It became clear that certain users had downloaded a “Coincidence Detector” which automatically surrounded Jewish names written on the internet in parentheses. ‘Israel’ automatically reads as (((Our Greatest Ally))). Users of the app consequently used the symbol to denote a Jewish subject online.

Having been born into a Jewish family, I’m not particularly surprised. To be honest, the most offensive element of the app is its shameful appropriation of that fantastic grammatical tool: the parenthesis. By highlighting the presence of Jewish names, the app intends to make users aware of Jewish involvement in the media. According to its creators, the chosen people have secretly masterminded a plot to take over the world. Given the apparent ignorance of the schmucks who created the “Coincidence Detector”, it wouldn’t be surprising if their deeply held fear was correct. Perhaps I’m being harsh. The algorithm used by the app was pretty clever. Anti-Semites who rely on the detector’s parentheses are almost untraceable, given that search engines tend to exclude punctuation from their search results. Anyhow, Google decided that it no longer wanted to host the extension and promptly removed it from its store, citing “hate speech”. Given that Google is a private company, it had every right to withdraw a component of its search engine that may affect the reputation of its business.

But was it necessary? The Twittersphere’s reaction suggests not. Rather than needing to be shielded from anti-Semitic users, people actively chose to track them down and expose their prejudiced convictions. Jewish users reacted with the best two-fingered response Twitter has ever seen. They promptly edited their usernames to include the symbol that was previously being used against them. Jonathan Weisman became (((Jonathan Weisman))) and Jewish journalists and writers followed suit. Soon our newsfeeds were plastered with comments from (((Jeffrey Goldberg))), (((Yair Rosenberg))), (((Greg Jenner))) and (((Lior Zaltzman))). Instead of appealing to “hate speech”, these people thought it more prudent to reclaim their Jewish identity from a few trolls who hoped to use it against them.

Despite its five-star rating on Google’s store, only 2,473 people downloaded the app. And it showed. Their voices were soon drowned out by swathes of users undermining their anti-Semitic cause. Crucially, the counter-movement demonstrated that Jewish users didn’t need Google to protect them from the ‘Coincidence Detector’. They were perfectly capable of doing that themselves. From their enslavement in Egypt to their genocide in Eastern Europe, the Jewish people have never had it easy. But, importantly, they still survived. We shouldn’t be too surprised, therefore, that they managed to deal with a crudely devised anti-Semitic app. ‘Coincidence’? I think not.
© The Independent - Voices


How Jews Are Re-claiming a Hateful neo-Nazi Symbol on Twitter

To combat the online vitriol, Jews and non-Jews alike are adopting a controversial new method which, some critics say, is equivalent to pinning a yellow 'Jude' star to one’s shirt.

7/6/2016- It is not a particularly pleasant time to be a Jew on the Internet. In recent weeks, Jewish journalists, political candidates and others with Jewish-sounding names have endured a torrent of anti-Semitic vitriol online, much of it coming from self-identified supporters of U.S. Republican presidential candidate Donald Trump. Until it was removed last week, a user-generated Google Chrome extension allowed those who installed it to identify Jews and coordinate online attacks against them. It has gotten so bad that the Anti-Defamation League has announced that it is forming a task force to address racism and anti-Semitism on social media.

Last week, Jeffrey Goldberg, a national correspondent for The Atlantic, decided to fight back. He changed his Twitter username to (((Goldberg))), co-opting a symbol that neo-Nazis and others associated with the so-called “alt-right” use to brand Jews on blogs, message boards, and social media. The “echoes,” as they are called, allude to the alleged sins committed by Jews that reverberate through history, according to Mic, a news site geared toward millennials that first explained the origins of the symbol. Then Yair Rosenberg of Tablet Magazine, another popular troll target, encouraged his followers to put parentheses around their names as a way to “raise awareness about anti-Semitism, show solidarity with harassed Jews and mess with the Twitter Nazis.” Several journalists and other Jewish professionals followed suit, and the “thing,” as Internet “things” are wont to do, took off.

Jonathan Weisman, a New York Times editor who changed his username to (((Jon Weisman))) over the weekend, wrote on Twitter that the campaign was a way to show “strength and fearlessness” in the face of bigotry. Weisman was the victim of a barrage of anti-Semitic abuse last month after he tweeted the link to an article in the Washington Post that was critical of Trump. Weisman retweeted much of the filth — including memes of hook-nosed Jews and depictions of Trump in Nazi regalia — that came his way. “Better to have it in the open,” he wrote. “People need to choose sides.” In Israel, where Twitter is less popular than other social media platforms like Facebook and Instagram, a small number of journalists, including Haaretz’s Barak Ravid, joined the cause.

Many non-Jews also added the parentheses to their usernames out of solidarity. Among them was NAACP President Cornell Brooks, who tweeted on Saturday: “Founded by Jews & Blacks, the haters might as well hate mark our name [too]: (((@NAACP))).”  Neera Tanden, president of the Center for American Progress, a left-leaning think tank, told Haaretz that she joined the campaign after being targeted on Twitter. “I don’t know if they thought I was Jewish or that they are just awful,” said Tanden, who is Indian-American and not Jewish. “Anti-Semitism is as hateful as racism and sexism and as a progressive, I stand against it.” Yet the cheeky campaign struck some Jews as unseemly, the virtual equivalent of willingly pinning a yellow “Jude” star to one’s shirt. On Sunday, the journalist Julia Ioffe tweeted that she was “really uncomfortable with people putting their own names in anti-Semitic parentheses.”

Ioffe, who filed a police report in Washington, D.C. last month after receiving threatening messages following the publication of an article she wrote about Melania Trump, told Haaretz that she understood the purpose of the campaign and was not calling for others to abstain from participating. Nevertheless, she said, it only seemed to provoke more harassment. “The second I started tweeting about it, all those bottom dwellers immediately rose to the surface and said things like, ‘You’re doing our work for us,’” Ioffe said. Goldberg explained that his goal was simply to mock neo-Nazis by reclaiming and neutralizing an element of their online culture, such as it is. He said he was inspired by “the way the LGBT community took the word ‘queer’ and made it their own.” (On Sunday, he reversed the parentheses around his last name. Why? “Just because I can.”)

In a statement to Haaretz, ADL CEO Jonathan A. Greenblatt wrote: “There’s no single antidote to anti-Semitism posted on Twitter. An effective response includes investigating and exposing the sources of hate, enforcing relevant terms of service, and promoting counterspeech initiatives. From our perspective, the effort by Jeffrey Goldberg and others to co-opt the echo symbols is one positive example of clever counterspeech.” On Monday, the ADL added the triple parentheses to its online hate symbols database. The parentheses are beginning to disappear from Jewish Twitter usernames as “our little war on #altright,” in Weisman’s words, seems to have reached a stalemate. But the debate about whether or not it was “good for the Jews” to out themselves in such a way is still roiling.

Mordechai Lightstone, a rabbi in Brooklyn who works in the Jewish social media world, said it was dangerous “if we only subvert these hateful acts and use that as the sole basis to define our identities.” A better solution, he said, would be to “channel this into positive actions expressing Jewish pride.” How best to fight back against the anti-Semitic trolls is both a moral and logistical dilemma, according to Ioffe. She noted that it is impossible to determine how many there are and whether they are real people or bots. (The "Coincidence Detector" Chrome extension that automatically put parentheses around Jewish-sounding names had been downloaded about 2,500 times before it was removed by Google for violating its policy against harassment.) “It’s hard to figure out how to strike that balance between standing up to them and giving them too much attention, between de-fanging them and giving them more fodder,” she said. “I think it’s something that we Jewish journalists are going to have to continue to grapple with.”
© Haaretz


USA: This Guy’s Simple Google Trick Sums Up Racism In A Nutshell

8/6/2016- One needs to look no further than current events to see that racism is sadly alive and well in America in 2016. From the fact that George Zimmerman can attempt to auction off a gun he used to kill a black teen, to #OscarsSoWhite, to the very fact that Donald Trump is the Republican presidential candidate, racism continues to dominate headlines in this modern day and age. To give one example and prove just how systemic a problem racism in our country is, a guy with the Twitter handle @iBeKabir recorded this video of himself performing a very simple Google trick. First he searches for the images that come up when you google “three black teenagers.” The results are predominantly mugshots and inmate photos. “Now let’s just change the color right quick,” he says, replacing the wording with “three white teenagers.” This time he gets generic stock photos of smiling white teens palling around, some holding sporting equipment. The post has unsurprisingly accumulated over 45,000 likes and 50,000 retweets in less than 48 hours at the time of this writing, and those numbers will only continue to climb as the tweet goes viral, obviously indicating that he’s struck a chord with far too many people.
© UpRoxx


Israel: Shaked: Facebook, Twitter removing 70% of ‘harmful’ posts

Social media giants clamping down on incitement to violence in Israel, says justice minister

7/6/2016- Facebook, Twitter and Google are removing some 70 percent of harmful content from social media in Israel, Justice Minister Ayelet Shaked said Monday. Speaking at a press conference in Hungary, Shaked said the social media giants were working to remove materials that incite violence or murder, the Ynet news website reported. Shaked was attending a conference in Hungary on combating incitement and anti-Semitism on the Internet. In a post on her Facebook page, she said: “The Hungarian Justice Minister said correctly that verbal incitement can lead to physical harm and that he is committed to the war on incitement. Anti-Semitic internet sites in Hungary have already attacked him for the existence of the conference. “A joining of forces by justice ministers from all over the world against incitement and our joint work vis-à-vis the internet companies will lead to change. “Already now, the Israeli Justice Ministry is managing to remove pages, posts and sites that incite by working with Facebook and Google.”

Social media first came to the fore as a key tool for avoiding state-operated media organs and for communicating, particularly for the young, during the so-called Arab Spring, the wave of protests that swept the Arab world between 2010 and 2012. More recently, and for similar reasons, it has become the preferred medium through which terror groups try to communicate their messages and recruit new members. Palestinian social media has played a major role in the radicalization of young Palestinians during the current wave of violence against Israelis, which began in October. In one recent example of a crackdown on internet incitement, Twitter closed dozens of accounts held by members of the Izz ad-Din al-Qassam Brigades, the military arm of Hamas.

In response, the Brigades’ spokesman, who goes by the nom de guerre Abu Obeida, vowed: “We are going to send our message in a lot of innovative ways, and we will insist on every available means of social media to get to the hearts and minds of millions.” The terror group uses its social media accounts to publish internal news about the organization, such as when its members die in training accidents, and also to call for and praise attacks against Israeli civilians.
© Times of Israel


Online anti-Semitism: Difficult to Fight, but Even Harder to Quantify

Amid the Jew-hating, anti-Israel and Holocaust-denying conversations, 12 percent of the anti-Semitic discourse one Israeli company monitors is Trump-related.

7/6/2016- Julia Ioffe, a Jewish journalist, becomes the target of anti-Semitic attacks, and even death threats, from Donald Trump supporters on social media after she publishes a profile of his wife Melania.
Jonathan Weisman, a Jewish editor at The New York Times, finds himself inundated with anti-Semitic epithets from self-identified supporters of the presumptive Republican presidential candidate after the editor tweets an essay on fascist trends in the United States.
Erin Schrode, a young Jewish Democrat running for Congress in California, receives a torrent of Jew-hating messages on Facebook (“Fire up the ovens” was just one of the gems) in what appears to be an orchestrated attack launched by American neo-Nazis.
A Google Chrome extension (removed a day after it was discovered) marks members of the Jewish faith online by placing three sets of parentheses around their names.

Mere coincidence, or is this the dawn of a new and dangerous era in online anti-Semitism? The honest answer, say those in the business of tracking attacks on Jews, is that it’s hard to tell. In the old offline world, life was far less complicated. You counted acts of vandalism, physical assaults and whatever else was quantifiable, compared the total with the previous year, and then determined whether things were getting better or worse for the Jews. With the advent of social media, however, those sorts of calculations have become virtually impossible. Not only is it difficult to know what to count (Tweets? Retweets? Likes? Posts? Shares? Follows? Reports of abuse?), but also, with billions of people posting online, how do you begin searching?

“Back in the days when online anti-Semitism was confined to websites like Stormfront and Jew Watch, we were able to keep statistics,” says Rabbi Abraham Cooper, who runs the Digital Terrorism and Hate Project at the Simon Wiesenthal Center in California. “But in the era of social networking, the numbers have become meaningless. If you get one good shot in and it goes viral, how do you count it? Social networking has changed the whole paradigm.” Jonathan Greenblatt, chief executive of the Anti-Defamation League, has been keeping himself busier than usual this election season, calling out anti-Semites, their supporters and apologists. Yet, even he is reluctant to describe the current level of online attacks as unprecedented. “Back in 2000, when Joe Lieberman was on the presidential ticket, there were anti-Semitic attacks against him, too. So there’s certainly a history of these things,” he notes. “But we didn’t have Twitter back then. What social media has done is offer a platform that circulates some of the most noxious ideas in ways that were never previously possible, allowing bigots and racists, once marginalized by mainstream society, to now come out of the woodwork.”

Even if it were possible to make accurate numerical calculations about online anti-Semitism these days, says Greenblatt, there is no way to know if the situation has become worse, “because we don’t have a sample set from previous elections with which to compare.” Probably the closest thing to hard statistics on the phenomenon appears in a recent report compiled by Buzzilla, an Israeli company that monitors and researches discussions in various online arenas: responses to articles, blogs, forums and social media. In preparing the report – commissioned by an Israeli nonprofit that promotes Holocaust remembrance – Buzzilla scoured the Internet for key phrases associated with anti-Semitism (“Hitler was right,” “burn the Jews,” “hate the Jews” etc.). “We define anti-Semitism as content that is against Jews, not against Israel per se,” says Merav Borenstein, Buzzilla's vice president for strategy and products. Regardless, she notes, Israel serves as a lightning rod for online anti-Semitism.

Examining anti-Semitic discourse over the course of a 12-month period ending in March 2016, the report found a spike in the three last months of 2015, coinciding with the spate of Palestinian stabbing attacks against Israelis. “We have found that whenever Israel is in the news – and this was true during the Gaza War in the summer of 2014 as well – it translates into a rise in online anti-Semitism,” says Borenstein.

Cooper, of the Simon Wiesenthal Center, confirms this pattern. “You can almost write the script,” he says. “Within an hour of any terror attack against Jews or Israelis, the images of the perpetrators are up online, and they are touted as heroes who should be emulated.” According to the Buzzilla report, roughly 600 anti-Semitic conversations took place in the arenas it monitors in April 2015. By March 2016, that number had almost tripled. (The peak month was December 2015, with 2,500.) At the request of Haaretz, Buzzilla also examined how much of the recent anti-Semitic discourse on the Internet has been fueled by the Trump campaign. It found that since the beginning of this year, 12 percent of the total volume of anti-Semitic discourse in the arenas it monitors has been related to the presumptive Republican presidential candidate, although not posted by him personally.

Flagging offensive content
They Can’t is the name of a relatively new Israeli nonprofit devoted to fighting online anti-Semitism. Through a network of grass-roots activists, the organization flags anti-Semitic content, mainly on YouTube and Facebook, and demands that it be removed. Its founder, Belgian-born Eliyahou Roth, says its track record is unmatched. “Over the past three years, we’ve managed to remove more than 45,000 accounts, pages, videos, posts and photos with anti-Semitic content from the Internet,” he says. “About 41,000 items were what we call classic anti-Semitic items, another 1,000 dealt with Holocaust denial, and the rest, which were in Arabic, fell into the category of terror incitement.” That was out of a total of 78,500 anti-Semitic items that his organization tracks on an ongoing basis. Over at the Simon Wiesenthal Center, Cooper says that the number of anti-Semitic items his organization has succeeded in removing from the Internet is “probably in multiples of tens of thousands.”

But such success is not the norm, according to a report prepared earlier this year by the Online Hate Prevention Institute. Titled “Measuring the Hate: The State of Anti-Semitism in Social Media,” it found that of the 2,000 anti-Semitic items the Australia-based organization had been tracking over a period of 10 months, only 20 percent had been removed from the Internet. The report did take note, however, of significant variations in the response rates of different social media companies. Facebook was hailed as the company most responsive to demands to remove anti-Semitic content, whereas YouTube was the least responsive. A breakdown provided in the report of anti-Semitic content by category found that 49 percent was “traditional” (defined as containing “conspiracy theories, racial slurs and accusations such as the blood libel”), 12 percent was related to Holocaust denial, 34 percent to Israel, and 5 percent promoted violence against Jews.

Acknowledging the difficulties of quantifying online anti-Semitism, David Matas, a prominent Canadian human rights lawyer, points to a key indicator that social media companies adamantly refuse to divulge, although it could provide a useful benchmark: the number of complaints they receive about anti-Semitic content. Speaking at a recent conference in Jerusalem, Matas, who also serves as senior legal counsel of B’nai Brith Canada, lamented that “unless we have a solution on metrics, we cannot even know the problem.” Danielle Citron, a professor of law at the University of Maryland and an expert on online harassment, is not sure whether online anti-Semitism is spreading or simply drawing more attention. “What I can say is that it’s become more mainstream,” she notes. “It is no longer hidden in the dark corners of the internet like it once was. We are now seeing it on very mainstream platforms like Facebook and Twitter.”

At the same time, Jew-haters are clearly feeling more emboldened – not only by the anonymity provided by social media, says Citron, but also, more recently, by the nod they’ve received from the Republican presidential hopeful. “Trump gives people permission to be hateful, whether that is to women, to the disabled or to Jews,” she explains. How much of what seems like an uptick in online anti-Semitism can be blamed on extreme right-wingers who support Trump and how much on extreme left-wingers who hate Israel? “I see two twin vectors converging here,” says the ADL’s Greenblatt. “One is right-wing anti-Semitism, steeped in white supremacist ideology, and it’s very anti-Jewish. Then there is the left-wing anti-Semitism, steeped in anti-Israel ideology. In my estimation, though, the end result is the same: Jews are being attacked for being Jewish. It’s prejudice plain and simple.”
© Haaretz


Canadian content rules for online media have weaker support, survey suggests

Canadians back regulations, but want a more 'hands off' approach online, pollster says

3/6/2016- Canadian content rules need updating, the majority of respondents in a new online poll said — but people had more divided views on whether online media should be subject to the same regulations as traditional media. The online poll conducted by the Angus Reid Institute comes after Federal Heritage Minister Mélanie Joly announced in April a period of public consultation around current broadcasting and content regulations, with the possibility of changes to laws and agencies as soon as 2017. Roughly 56 per cent of the 1,517 Canadians surveyed said online media should not be subject to the same types of CRTC regulation as traditional media, while 44 per cent said all media should be regulated the same.

When asked by pollsters whether existing policies "do a good job of promoting" Canadian cultural content, 40 per cent said yes, 26 per cent said no and the rest were uncertain. However, 60 per cent of those surveyed replied that the current Cancon regulations need to be reviewed and updated. The survey's release coincides with CTV's announcement it would cancel Canada AM after 43 years, a change that could leave a "big hole" in the Canadian content spectrum depending on what replaces it, said Shachi Kurl, executive director of Angus Reid. Overall, Kurl said that Canadians support media regulations, but want a more "hands off" approach online. This is especially true among Canadians aged 18 to 34, she said, who use newer media such as Spotify and Netflix. Young people often see stars, including Justin Bieber, who were discovered on YouTube and perceive it as "doing it on their own," Kurl said. "The argument has yet to be made for these younger Canadians that protection, supports and government regulation is something that will enable Canadian content to thrive," she said.

Protect and promote culture
A majority of respondents, 61 per cent, said Canadian culture is unique and needs government support to survive, while the remaining 39 per cent said Canadian media "will be fine without specific protection policies and support from government." Respondents across the country supported cultural protection, with Quebecers having the most support at 70 per cent and Albertans showing the lowest support at 54 per cent. Kurl said that even though the majority of Canadians still support regulation, it may not stay that way. "Across Canada, two in five [people] or more think that actually it's time to take the reins off," she said. "It's not the majority view, but it's a growing view."
The polls by the Angus Reid Institute were conducted between May 10 and 13, 2016, interviewing 1,517 Canadians via the internet. A probabilistic sample of this size would yield a margin of error of plus or minus 2.5 per cent, 19 times out of 20.
© CBC News


USA: Logic isn't needed for the Internet (opinion)

By Roger Bluhm, managing editor of the Dodge City Daily Globe.

2/6/2016- People don’t use logic when it comes to the Internet. People consistently create fake online reports of celebrities dying, just to stir things up. Not long ago Gabriel Iglesias, the comedian known as "Fluffy," was the victim of this hoax. As people were offering good will and prayers to his family, he tweeted out that he was still alive. There is no logic in "killing" someone, yet it’s happened so often online that when a real report comes out, we take a while to believe it. Then there are the anonymous posters in chat rooms or on Facebook, people who say mean things for their own benefit or just to start a situation. What’s the purpose? As I’ve said repeatedly in this space, if you have the guts to say something, have the guts to put your name on it. Own it.

How about those who go online looking for love, or lust? Millions of web surfers are looking for the perfect match, the perfect right now or the perfect hook-up for later. It seems I’m in the minority, as my wife and I have been together almost 25 years (my anniversary is in February). Almost all of my cousins on my mother’s side of the family have been divorced at least once. I have two cousins who have each been married — and divorced — five times. Of course, at least three times their marriages fell apart because they found a new love online.

Terror groups like ISIS recruit our youth online. How? They tell our children we don’t care about them. They preach to the side of teenagers that wants to rebel, but also wants to be wanted. It’s amazing how terror groups have exploited our teenagers, but it doesn’t have to be like that. We can be more involved in our children’s lives and make sure they know we love them and that they can tell us anything. I mean anything, because this is also how children get molested and molesters get away with it. Logic would suggest a person doesn’t go looking for hate groups, how to make bombs or child porn, yet it happens. Neo-Nazi groups have websites, bomb-building instructions (and how to make methamphetamine) are available online and child porn has been shared and collected since the Internet was first introduced.

Logic doesn’t apply at all.

It has always amazed me how the best of advances can also be the worst of advances. The Internet brought a way for people across the world to talk to one another. In the early days there were internet chat rooms where a man in England could talk with someone in Idaho. Of course, as is human nature, this created a whole new way for people to connect with others and disconnect from loved ones. We should have guessed then what was coming. As the Internet grew — faster, more reliable — and cell phones turned into smart phones, logic continued to go out the window. Does anyone older than 30 believe people are dying because someone has to read a text? It’s reading while driving, something no one would have believed to be smart or safe 20 years ago, but people do it all the time now.

In fact, people have been hurt in many ways by simply paying attention to their smart phones and not on their surroundings. A huge debate sprang up online recently over the death of a gorilla at a zoo. It seems a toddler decided to get into the moat surrounding the gorilla habitat and the animal grabbed the toddler. Zoo officials killed the gorilla to save the child’s life, yet some believe other measures should have been taken. I noticed someone took video of the entire situation, not once stopping to call for help on the phone, or off. Where were the parents? I never let my children out of my sight when they were toddlers and we were in a public place. Maybe because of my job — and reading many stories of child abductions — but I made sure my children were always safe.

I’m guessing mom or dad or both were buying shoes online or answering an email instead of making sure their son didn’t jump into the moat, creating an overblown online debate. I just hope that not everyone reads this column online. It’ll just prove my point about logic having little place on the Internet.
© The Dodge City Daily Globe


USA: Commissioner: 'White Pride World Wide' post ‘not a neo-Nazi thing’

Wade Eisenbeisz said his posting was ‘not a neo-Nazi thing’

3/6/2016- An Edmunds County commissioner who posted a white supremacy symbol on Facebook says he didn’t realize what the symbol represented. Wade Eisenbeisz recently shared a link that included the symbol and the words “White Pride World Wide.” The privacy setting on his Facebook page was public, meaning anyone could see the post. Eisenbeisz tells the American News that he was unaware what the symbol represented. He says he only meant to show that he is proud to be white. He says there’s nothing wrong with being proud of one’s race. The Anti-Defamation League says the symbol posted by Eisenbeisz is used by groups such as neo-Nazis. Eisenbeisz says his posting was “not a neo-Nazi thing.” He has deleted the post.
© The Associated Press


Google removes anti-Semitic app used to target Jews online

3/6/2016- Google has removed an app that allowed users to surreptitiously identify Jews online after a tech website brought the tool to widespread media attention and spurred a backlash. Coincidence Detector, the innocuous name of the Google Chrome internet browser extension created by a user identified as “altrightmedia,” enclosed names that its algorithm deemed Jewish in triple parentheses. The symbol — called an “(((echo)))” — allows white nationalists and neo-Nazis to more easily aim their anti-Semitic vitriol. The extension was exposed Thursday in an article on the tech website Mic by two reporters who had been targets of anti-Semitic harassment online. Google confirmed that evening that it had removed the extension from the Chrome Web Store, citing violation of its hate speech policy, which forbids “promotions of hate or incitement of violence.”

The Mic reporters traced the triple-parentheses symbol back to 2014 and a right-wing blog called The Right Stuff and its affiliated podcast, The Daily Shoah. The parentheses are a visual depiction of the echo sound effect the podcast hosts used to announce Jewish names. The echo has now emerged as a weapon in the arsenal of the so-called “alt-right,” an amorphous, primarily online conservative movement that has become more visible and vocal in the midst of Donald Trump’s presidential campaign. “Some use the symbol to mock Jews,” the Mic article explains of the echo. “Others seek to expose supposed Jewish collusion in controlling media or politics. All use it to put a target on their heads.” One neo-Nazi Twitter user provided a succinct explanation to The Atlantic magazine national correspondent Jeffrey Goldberg, who added the parentheses to his Twitter handle to mock the trend.

The product description of the now-removed Google extension said it would help users identify “who has been involved in certain political movements and media empires.” The use of the term “coincidence” was meant to be ironic. The Coincidence Detector had nearly 2,500 users and a five-out-of-five-stars rating. There was a suggestions tab to submit Jewish names to be added to the algorithm. Mic was tipped off to the use of the echo after Jonathan Weisman, an editor at The New York Times, retweeted a Washington Post article called “This is How Fascism Comes to America,” a scathing indictment of Trump. Weisman asked one of his harassers, @CyberTrump, to explain the symbol. “It’s a dog whistle, fool,” the user responded. “Belling the cat for my fellow goyim.” In addition to prompting action by Google, the report drew disbelief and protest across Twitter, with several Jewish users also adding parentheses to their names.

Julia Ioffe, a journalist who became the target of a campaign of anti-Semitic harassment after she wrote a profile of Melania Trump in GQ that Donald Trump supporters didn’t approve of, retweeted the Mic article with bewilderment. The alt-right has joined real-world white supremacists in generally embracing Trump’s candidacy, and the presumptive Republican nominee has been criticized for not doing more to distance himself from such supporters. The Daily Beast reported that Jared Kushner, Trump’s Jewish son-in-law, was among those targeted by the extension. While Coincidence Detector was mostly focused on names, with terms like “Jews,” “Jewish” and “Holocaust” unaffected, a notable exception was “Israel,” which Coincidence Detector changed to “(((Our Greatest Ally))).” The extension could be set at various levels of intensity, from zero to 100 sets of parentheses. Writer Joe Veix dug into the extension’s code and compiled a full list of the 8,771 people targeted by Coincidence Detector.
© JTA News.


Australia: Racist memes mocking Adam Goodes taken down after AFL demands removal

2/6/2016- Racist internet memes mocking AFL great Adam Goodes have been voluntarily removed from a popular Facebook page, but those responsible maintain they were "just for fun". The AFL had earlier demanded that Facebook take down the posts, which on Wednesday night appeared on a page followed by about 200,000 people. The page's administrators told Fairfax Media they had deleted the images themselves. "I deleted because our page was getting a lot of reports and the best way was to delete them!" an administrator said. "Those posts was just for fun, to make people laugh, that was not racism." A second post was also deleted.

But a new meme appeared about 4.30pm on Thursday, attracting the ire of page followers. "Racism doesn't magically become funny when you repeat it over and over," one man wrote. "So you take down your other two racist posts just to put up another one?" another said. The first meme had been "liked" on Facebook by 5700 people, although most comments were highly critical of the racism. There were 336 people who gave it a laugh-out-loud emoji. The second post had 6800 likes. AFL spokesman Patrick Keane earlier said the league's legal team was in contact with Facebook over the "utterly unacceptable" posts. "We have told our legal team and we are in contact with Facebook to have it removed," Mr Keane said.

Goodes, who retired from his AFL career last year after a campaign of sustained crowd booing at certain grounds, was infamously jeered by a 13-year-old girl at an AFL game in 2013. He pointed the girl out in the crowd and she was ejected. The incident caused a social media storm about racism in sport and in Australia. The girl later apologised to Goodes. Days later, Collingwood president Eddie McGuire apologised after comparing Goodes with King Kong. Mr Keane had said the AFL would use intellectual property rights to try to force Facebook's hand to remove the posts, and possibly the page. "If you are trying to use AFL [intellectual property] in this way it is utterly unacceptable and we will not tolerate it," he said. "As I understand it, copyright gives us the ability to act, but why we want to act is because it is utterly unacceptable," he said. "We are not going to allow people to be vilified." Fairfax Media has chosen not to republish the memes. The creator of the page had earlier responded to the backlash, saying "it's just a joke".
© The Age


Anti-Defamation League mobilizes on anti-Semitism, racism against journalists

1/6/2016- To combat the anti-Semitic and racist comments and threats facing journalists on social media these days, the Anti-Defamation League is assembling a task force that is expected to deliver recommendations on combating this scourge by summer’s end. “We’re seeing a breadth of hostility whose virulence and velocity is new,” said Jonathan A. Greenblatt, the ADL’s chief executive, in a chat with the Erik Wemple Blog. Among the task force participants are Danielle Citron, a University of Maryland law professor and an oft-quoted voice on online harassment, and Steve Coll, dean of the Columbia University Graduate School of Journalism. The group’s mandate is threefold: to determine the “scope and source” of the attacks on social media against journalists and their ilk; to research their impact; and to come up with countermeasures that “can prevent journalists becoming targets for hate speech and harassment on social media in the future.”

Item No. 1 is a very tricky and frustrating matter, as any sentient social media user can attest. Some journalists who have written skeptically of presumptive Republican nominee Donald Trump have been stung by vile anti-Semitic attacks on Twitter. But who are the people sending the tweets? How many people are doing this? Some of them mention Trump in their Twitter IDs — does that mean they’re Trump supporters? Jeffrey Goldberg, a reporter for the Atlantic and a target of the attacks, posed this question on Twitter. When the Erik Wemple Blog passed that question along to Greenblatt, he responded, “I think, honestly, we just don’t know yet.”

One of the task force members is Julia Ioffe, a freelance reporter who found herself targeted by anti-Semitic tweets following a deeply reported GQ profile of Melania Trump. She later filed a police report over death threats that she’d received. The report states that Ioffe claimed that “an unknown person sent her a caricature of a person being shot in the back of the head by another, among other harassing calls and disturbing emails depicting violent scenarios.” In an interview with the Erik Wemple Blog, Ioffe said, “The Trumps have a record of kind of whistling their followers into action.” The ADL is assisting Ioffe with her case.

Anti-Semitic social media trolls are equal-opportunity harassers. Conservative writers such as Ben Shapiro, John Podhoretz and Noah Rothman have all seen the backlash, as have Ioffe, Goldberg, CNN’s Jake Tapper and Jonathan Weisman of the New York Times. It doesn’t take much to provoke, either. In Weisman’s case, he merely tweeted out a Robert Kagan piece in The Post titled “This is how fascism comes to America” — on Trump’s shoulders, that is. Then came the abuse. In announcing its task force, the ADL made no mention of Trump. Greenblatt explains that the ADL is a 501(c)(3) group that neither supports nor rejects politicians. Furthermore, he said, anti-Semitic attacks have arisen both from the right and the left. In the case of the latter, he cites folks who try to “delegitimize the policies of the Israeli govt and oftentimes the speech used can be rather troubling.”

Should the task force approach Trump on this matter, it should brace for the response he gave to CNN’s Wolf Blitzer, who asked about threats against Ioffe. “Oh, I don’t know about that. I don’t know anything about that.”
© The Washington Post - Blog Erik-Wemple


Why some free-speech advocates 'stand with hate speech'

The European Commission's announcement Tuesday of a new code of conduct against hate speech has raised concerns about political censorship.

2/6/2016- The European Union's new code of conduct aimed at curbing hate speech has some free speech advocates raising concerns about censorship. Microsoft, Twitter, Facebook, and Google promised on Tuesday to police and remove what the European Commission has deemed a concerning rise in hate speech, but critics are raising ideological, political, and technical objections to the plan. "It seems these companies were given 'an offer they couldn't refuse,' and rather than take a principled stand, they've backed down fearing actual legislation," human rights advocate Jacob Mchangama, the director of Copenhagen-based think tank Justitia, told the Christian Science Monitor's Christina Beck earlier this week. "And of course, how will global tech companies now be able to resist the inevitable demands from authoritarian states that they also remove content that these countries determine to be 'hateful' or 'extremist'?"

The Daily Caller's Scott Greer suggests that government insistence on defining and punishing hate speech threatens the delicate principle of free speech by punishing differences of opinion. "Those whom express views in line with the prevailing wind of popular opinion are not the ones who need the comfort of the First Amendment," he wrote in an editorial Thursday. "By instituting hate speech laws, the government declares itself the arbiter of what counts as hate speech, which means they are more likely to go after unwanted opinions." The hashtag #IStandWithHateSpeech became a trending topic on Twitter, as free speech advocates insisted the dangers of censorship exceed those of the hate speech itself. Janice Atkinson, a Member of the European Parliament, told Breitbart London the "Orwellian" policy could be used for political gain as Europe wrestles with difficult immigration problems.

"If an MEP, such as the centre-right Hungarians, the Danish People’s Party, the Finns, the Swedish Democrats, the Austrian FPO, say no to migration quotas because they cannot cope with the cultural and religious requirements of Muslims across the Middle East who are seeking refugee status, is that a hate crime? And what is their punishment?" Ms. Atkinson told Breitbart London. "It's a frightening path to totalitarianism." Others have raised technical concerns, because the companies agree to review and remove hate speech within 24 hours, a process that privatizes the protection of free speech. "The code requires private companies to be the educator of online speech, which shouldn't be their role, and it's also not necessarily the role that they want to play," Estelle Massé, EU policy analyst with the Brussels-based Access Now, told the Monitor.

Daphne Keller, the Stanford Center for Internet and Society's director of intermediary liability, told Buzzfeed that when in doubt of what to remove, these new hate speech police would err on the side of removing controversial – but legal – content for fear of government reprisals. "They take down perfectly legal content out of concern that otherwise they themselves could get in trouble," Ms. Keller told Buzzfeed. "Moving that determination out of the court system, out of the public eye, and into the hands of private companies is pretty much a recipe for legal content getting deleted."
© The Christian Science Monitor


Is Combating Online Hate Speech Censorship Or Protection?

31/5/2016- Facebook, Twitter, Google and Microsoft signed a new EU code of conduct agreement to review content flagged as hate speech and remove it within 24 hours of flagging. While all of these companies have long claimed to have zero tolerance for online hate speech, the new code of conduct gives them a time limit of one day within which they must respond to complaints. The response on Facebook and Twitter did not take long to come. Many posters wrote of their apprehension that the new rules will effectively shut down free speech on the Internet. Others remained more open, asking whether this means merely an increased but ineffectual restriction on freedom of speech or a sincere effort to combat online hate speech. Commentator responses ranged from those who believe this constitutes censorship, to those who ask who defines the term "hate speech", to those who favor the anti-hate speech code of conduct as long as it is sufficiently well defined.

Targets of Online Hate Speech
One online site lists the potential targets of online hate speech. These include women, the LGBTQI community, Jews, Muslims, and individuals targeted for cyberbullying by people they know in real life. The FBI publishes an annual report on hate crime statistics. Their report concerns actual real-life attacks, but the statistics regarding offline targets may mirror online hate speech targets. The 2014 data are the most recent available.

FBI hate crime stats may reflect online hate speech: there may be a similarity between actual hate crimes and online hate speech.

Perpetrators of Online Hate Speech
It is difficult to characterize the perpetrators of hate speech on the Internet because of the apparent anonymity easily achieved through fake profiles. The UNESCO brochure entitled Countering Online Hate Speech claims that anonymity is not necessarily easy to achieve, since successfully hiding a user’s identity requires a high level of technological knowledge. Even so, the identities behind anonymous hate speech perpetrators can often only be discovered by legal authorities. This apparent anonymity encourages many people to post hate messages toward the objects of their hate. Middlesex University psychology professors William Jacks and Joanna Adler discuss the effects of anonymity on online hate speech.

"In an online environment, where individuals often perceive themselves as anonymous and insulated from harm, confrontation between those subscribing to differing ideologies was common, especially on open-access sites. Hate postings were often followed by other hate postings expressing a polar opposite extremist view, which only served to increase the ferocity of both arguments and further reduce the validity of either point of view".

They also characterize the perpetrators of online hate speech, dividing them into browsers, commentators, activists and leaders. Browsers are commonly referred to as “lurkers” on social media, those who read but do not interact openly. Commentators actively respond to the posts of others. Jacks and Adler found that 87 percent of online hate speech was perpetrated by this group. Activists engage in real-life hate activities as well as online hate speech. Leaders go even further.
"A Leader will use the Internet to support, organize, and promote his extremist ideology…. They will be at the forefront of developing Websites, storing large amounts of extremist material relating to their ideology, and organizing hate related activities on and offline".
According to Jacks and Adler, perpetrators of online hate speech seek to purposefully insult a given group, to scorn beliefs of others, to rationalize their own beliefs and to support those thinking as they do. The activists and leaders promote offline events, some of which could be classified as hate crimes.

Can the EU Code of Conduct Combat Online Hate Speech?
In the discussion of their study, Jacks and Adler suggest that the EU code of conduct is a step in the right direction. They refer to Holocaust expert Deborah Lipstadt to support this idea, as she studiously ignored the Holocaust deniers who tried to spread hate in response to her publications.
"Some would suggest that simply ignoring hate content and pressuring Internet service providers to remove content as soon as possible could be the most effective option… By engaging with those who are purporting hate, no matter how vociferous the debate and ridiculous their views, the fact that the debate is happening at all would cause others to perceive the views as legitimate and allow them to enter mainstream consciousness".

On the other hand, the phenomenon of tailored search results, whereby individuals are presented with materials based upon their online behaviors, may mean that simply ignoring hate speech online would have no effect at all. Instead, Jacks and Adler conclude that careful interaction with the haters may eventually bear fruit.
"As search engines ‘learn’ about individuals’ extremist views, they will provide searches that preference hate material, increasing the likelihood of further entrenchment. In order to combat this narrowing of search results and affirmation of beliefs, it may be necessary to safely but actively engage and challenge hatred online… For early intervention, the best hope may be through engaging with users on hate sites, posts, walls, and blogs — although the question remains as to whether an alternative point of view will be able to break into a hate user’s cocooned online experience".

The UNESCO report supports this approach of engaging in hate speech incidents as a means of education toward tolerance. They add that the social media giants have a major role to play in combating online hate speech.
"Internet intermediaries, on their part, have an interest in maintaining a relative independence and a “clean” image. They have sought to reach this goal by demonstrating their responsiveness to pressures from civil society groups, individuals and governments. The way in which these negotiations have occurred, however, have been so far been ad hoc, and they have not led to the development of collective over-arching principles".

And changing that ad hoc approach into a more systematic and formal method for combating online hate speech is the purpose of the EU code of conduct. Future studies will demonstrate whether or not it is effective. In the meantime, Jewish groups have had negative experiences reporting hate speech to Facebook. It is questionable whether the new EU code of conduct agreement will help in such instances, because the problem relates more to the definition of hate speech than to the willingness to remove it. An agreed-upon definition of hate speech, then, seems to be the first order of the day.
© The Inquisitr


EU tells Facebook and others to stop hate speech -- because terrorism???

Facebook, Twitter, Google, Microsoft agree to pointless EU edict. But Commissioner Věra Jourová is ever so proud.
By Richi Jennings

31/5/2016- The European Union has told social platforms such as Facebook to do something about hate speech. And, yes, this is indeed something -- something they're already doing. And does it surprise you to learn that this "code of conduct" is being justified in the name of combating terrorism? In IT Blogwatch, bloggers are ever so glad they won't be subjected to hate speech any longer. Your humble blogwatcher curated these bloggy bits for your entertainment.

What’s the craic? Julia Fioretti and Foo "bar" Yun Chee report—Facebook, Twitter, YouTube, Microsoft back EU hate speech rules:
Facebook, Twitter, Google's YouTube and Microsoft...agreed to an EU code of conduct to tackle online hate speech. [They] will review the majority of valid requests for removal of illegal hate speech in less than 24 hours and remove or disable access to [illegal] content.

They will also...promote "counter-narratives" to hate speech. ... The United States has undertaken similar efforts...focusing on promoting "counter-narratives" to extremist content.

Are you impressed? Alexander J. Martin isn't— EU bureaucrats claim credit for making 'illegal online hate speech' even more illegal:
The European Commission has claimed the credit...despite the companies already following practices demanded by EU bureaucrats. ... Under the code, IT companies will have an obligation to [do what] national laws in the EU already require them to do.

[It's] a particularly difficult area for legislation. ... The European Court of Human Rights has stated that...freedom of expression “is [also] applicable to [words] that offend, shock or disturb”.

Who is responsible for this bureaucratic bungling? Věra Jourová is the EU Commissioner for Justice, Consumers and Gender Equality:
The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred. This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected. I welcome the commitment of worldwide IT companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.

Ahhh, I see. Because terrorism! Romain Dillet crunches the background—Facebook, Twitter, YouTube and Microsoft agree to remove hate speech across the EU:
ISIS has been successfully using social media to recruit fighters. [And] the European economic recession has fostered far-right parties, leading to more online antisemitism and xenophobia. [So] now, four tech companies are making a formal pledge at the European level.

[They] will have to find the right balance between freedom of expression and hateful content. ... They’ll have dedicated teams [of] poor employees who will have to review awful things every day.

It’s encouraging to see tech companies working together on a sensitive issue like this

Good grief. Liam Deacon and Raheem Kassam wax multi-cultural—Pledge To Suppress Loosely Defined ‘Hate Speech’:
[It's] been branded “Orwellian” by Members of the European Parliament, and digital freedom groups have already pulled out of any further discussions...calling the new policy “lamentable”. ... The platforms have also promised to engage in...the re-education of supposedly hateful users.

[Independent] Janice Atkinson MEP [said] “Anyone who has read 1984 sees its very re-enactment live. ... The Commission has been itching to shut down free speech. ... It’s a frightening path to totalitarianism.”

European Digital Rights (EDRi) announced its decision to pull out of future discussions...stating it does not have confidence in the “ill-considered code”.

You have been reading IT Blogwatch by Richi Jennings, who curates the best bloggy bits, finest forums, and weirdest websites… so you don’t have to. Catch the key commentary from around the Web every morning. Hatemail may be directed to @RiCHi. Opinions expressed may not represent those of Computerworld. Ask your doctor before reading. Your mileage may vary. E&OE.

Your humble blogwatcher is an independent analyst/consultant, specializing in blogging, email, spam, and other security topics. He was voted 'Most likely to get up first to sing at karaoke' for 14 years in succession.
© Computer World


WJC welcomes EU guidelines against hate speech, is skeptical regarding implementation

The World Jewish Congress (WJC) on Tuesday welcomed the signing by leading internet service providers Google/YouTube, Facebook, Twitter and Microsoft of a European Union code of conduct aimed at fighting the proliferation of hate speech on the internet, but voiced skepticism about the commitment of these firms to effectively police their platforms.

31/5/2016- WJC CEO Robert Singer said: “YouTube, Twitter, Facebook and others already have clear guidelines in place aimed at preventing the spread of offensive content, yet they have so far utterly failed to properly implement their own rules.” Singer recently wrote to Google Inc., which owns the world’s largest online video service YouTube, to complain about the persistent failure of YouTube to delete neo-Nazi songs that glorify the Holocaust or incite to murder from its platform. “Tens of thousands of despicable video clips continue to be made available although their existence has been reported to YouTube and despite the fact that they are in clear violation of the platform’s own guidelines prohibiting racist hate speech. Nonetheless, YouTube gives the impression that it has been cracking down on such content. Alas, the reality is that so far it hasn't. We expect that real steps are taken by YouTube, as well as other social media platforms, that go beyond well-meaning announcements,” said Singer. The WJC CEO nonetheless praised the European Commission’s code of conduct to combat online racism, terrorism and cyber hate. "This is a timely initiative, and we hope all internet service providers will adhere to the code," said Singer. The guidelines require companies to review the majority of flagged hate speech within 24 hours and remove it, if necessary.
© World Jewish Congress


EU Hate Speech Deal Shows Mounting Pressures Over Internet Content Blocking

1/6/2016- An agreement on Tuesday by four major US Internet companies to block illegal hate speech from their services in Europe within 24 hours shows the tight corner the companies find themselves in as they face mounting pressure to monitor and control content. The new European Union "code of conduct on illegal online hate speech" states that Facebook Inc, Google's YouTube, Twitter Inc and Microsoft will review reports of hate speech in less than 24 hours and remove or disable access to the content if necessary. European governments were acting in response to a surge in antisemitic, anti-immigrant and pro-Islamic State commentary on social media. The companies downplayed the significance of the deal, saying it was a simple extension of what they already do. Unlike in the United States, many forms of hate speech, such as pro-Nazi propaganda, are illegal in some or all European countries, and the major Internet companies have the technical ability to block content on a country-by-country basis.

But people familiar with the complicated world of Internet content filtering say the EU agreement is part of a broad and worrisome trend toward more government restrictions. "Other countries will look at this and say, 'This looks like a good idea, let's see what leverage I have to get similar agreements,'" said Daphne Keller, former associate general counsel at Google and director of intermediary liability at the Stanford Center for Internet and Society. "Anybody with an interest in getting certain types of content removed is going to find this interesting."

Policing content
The EU deal effectively requires the Internet companies to be the arbiters of what type of speech is legal in each country. It also threatens to complicate the distinction between what is actually illegal, and what is simply not allowed by the companies' terms of service - a far broader category. "The commission's solution is to ask the companies to do the jobs of the authorities," said Estelle Masse, policy lead in Europe for Access Now, a digital rights advocacy group that did not endorse the final EU agreement. Masse said that once companies agree to take quick action on any content that is reported to them, they will inevitably review it not only for legal violations but also terms of service violations. "The code of conduct puts terms of service above national law," she said.

The agreement also expands the role of civil society organizations such as SOS Racisme in France and the Community Security Trust in the UK in reporting hate speech. While governments can make formal legal requests to the companies for removal of illegal content, a more common mechanism is to use the reporting tools that the services provide for anyone to "flag" content for review. None of the companies would provide any detail on how many such organizations they work with or who they are. Facebook and Google both said in statements to Reuters that they already review the vast majority of reported content within 24 hours. "This is a commitment to improve enforcement on our policies," said a Facebook representative. Facebook reviews millions of pieces of reported content each week, according to Monika Bickert, the company's head of global policy, and has multilingual teams of reviewers around the world.

'Dangerous precedent'
Yet free speech advocates expressed concern that the EU code of conduct would pressure companies to overcomply and remove lawful content out of an abundance of caution. "This is a dangerous precedent, as any wider discussion between the EU and international human rights groups would have revealed," said Danny O'Brien, international director of the Electronic Frontier Foundation. "It does not address that different speech is deemed illegal in different jurisdictions," he said. The hashtag #istandwithhatespeech was trending on Twitter Monday afternoon as rights advocates objected to the EU deal.

The hate speech agreement raises some of the same issues as a European court ruling that gives EU residents the right to demand that links about them be removed from Google and other search engines, Internet activists say. The so-called right to be forgotten requires Google to review removal requests and determine which ones qualify because they contain "excessive" or "irrelevant" information. According to Google's transparency report, the company has reviewed 1,522,636 Internet addresses, or URLs, since the law went into effect in 2014. It removed the links in 43 percent of the cases.
© Reuters


European Commission and IT Companies announce Code of Conduct

The Commission together with Facebook, Twitter, YouTube and Microsoft (“the IT companies”) today unveil a code of conduct that includes a series of commitments to combat the spread of illegal hate speech online in Europe.

31/5/2016- The IT Companies support the European Commission and EU Member States in the effort to respond to the challenge of ensuring that online platforms do not offer opportunities for illegal online hate speech to spread virally. They share, together with other platforms and social media companies, a collective responsibility and pride in promoting and facilitating freedom of expression throughout the online world. However, the Commission and the IT Companies recognise that the spread of illegal hate speech online not only negatively affects the groups or individuals that it targets, it also negatively impacts those who speak out for freedom, tolerance and non-discrimination in our open societies and has a chilling effect on the democratic discourse on online platforms.

In order to prevent the spread of illegal hate speech, it is essential to ensure that relevant national laws transposing the Council Framework Decision on combating racism and xenophobia are fully enforced by Member States in the online as well as the offline environment. While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously reviewed by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame. To be considered valid in this respect, a notification should not be insufficiently precise or inadequately substantiated.

Věra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, said, "The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred. This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected. I welcome the commitment of worldwide IT companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary."

Twitter’s Head of Public Policy for Europe, Karen White, commented: “Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society. We remain committed to letting the Tweets flow. However, there is a clear distinction between freedom of expression and conduct that incites violence and hate. In tandem with actioning hateful conduct that breaches Twitter’s Rules, we also leverage the platform’s incredible capabilities to empower positive voices, to challenge prejudice and to tackle the deeper root causes of intolerance. We look forward to further constructive dialogue between the European Commission, member states, our partners in civil society and our peers in the technology sector on this issue.”

Google’s Public Policy and Government Relations Director, Lie Junius, said: “We’re committed to giving people access to information through our services, but we have always prohibited illegal hate speech on our platforms. We have efficient systems to review valid notifications in less than 24 hours and to remove illegal content. We are pleased to work with the Commission to develop co- and self-regulatory approaches to fighting hate speech online."

Monika Bickert, Head of Global Policy Management at Facebook said: "We welcome today’s announcement and the chance to continue our work with the Commission and wider tech industry to fight hate speech. With a global community of 1.6 billion people we work hard to balance giving people the power to express themselves whilst ensuring we provide a respectful environment. As we make clear in our Community Standards, there’s no place for hate speech on Facebook. We urge people to use our reporting tools if they find content that they believe violates our standards so we can investigate. Our teams around the world review these reports around the clock and take swift action.”

John Frank, Vice President EU Government Affairs at Microsoft, added: “We value civility and free expression, and so our terms of use prohibit advocating violence and hate speech on Microsoft-hosted consumer services. We recently announced additional steps to specifically prohibit the posting of terrorist content. We will continue to offer our users a way to notify us when they think that our policy is being breached. Joining the Code of Conduct reconfirms our commitment to this important issue."

By signing this code of conduct, the IT companies commit to continuing their efforts to tackle illegal hate speech online. This will include the continued development of internal procedures and staff training to guarantee that they review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary. The IT companies will also endeavour to strengthen their ongoing partnerships with civil society organisations who will help flag content that promotes incitement to violence and hateful conduct. The IT companies and the European Commission also aim to continue their work in identifying and promoting independent counter-narratives, new ideas and initiatives, and supporting educational programs that encourage critical thinking.

The IT Companies also underline that the present code of conduct is aimed at guiding their own activities as well as sharing best practices with other internet companies, platforms and social media operators.

The code of conduct includes the following public commitments:

The IT Companies, taking the lead on countering the spread of illegal hate speech online, have agreed with the European Commission on a code of conduct setting the following public commitments:

The IT Companies to have in place clear and effective processes to review notifications regarding illegal hate speech on their services so they can remove or disable access to such content. The IT companies to have in place Rules or Community Guidelines clarifying that they prohibit the promotion of incitement to violence and hateful conduct.

Upon receipt of a valid removal notification, the IT Companies to review such requests against their rules and community guidelines and where necessary national laws transposing the Framework Decision 2008/913/JHA, with dedicated teams reviewing requests.

The IT Companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.

In addition to the above, the IT Companies to educate and raise awareness with their users about the types of content not permitted under their rules and community guidelines. The notification system could be used as a tool to do this.

The IT companies to provide information on the procedures for submitting notices, with a view to improving the speed and effectiveness of communication between the Member State authorities and the IT Companies, in particular on notifications and on disabling access to or removal of illegal hate speech online. The information is to be channelled through the national contact points designated by the IT companies and the Member States respectively. This would also enable Member States, and in particular their law enforcement agencies, to further familiarise themselves with the methods to recognise and notify the companies of illegal hate speech online.

The IT Companies to encourage the provision of notices and flagging of content that promotes incitement to violence and hateful conduct at scale by experts, particularly via partnerships with CSOs, by providing clear information on individual company Rules and Community Guidelines and rules on the reporting and notification processes. The IT Companies to endeavour to strengthen partnerships with CSOs by widening the geographical spread of such partnerships and, where appropriate, to provide support and training to enable CSO partners to fulfil the role of a "trusted reporter" or equivalent, with due respect to the need of maintaining their independence and credibility.

The IT Companies rely on support from Member States and the European Commission to ensure access to a representative network of CSO partners and "trusted reporters" in all Member States to help provide high quality notices. IT Companies to make information about "trusted reporters" available on their websites.

The IT Companies to provide regular training to their staff on current societal developments and to exchange views on the potential for further improvement.

The IT Companies to intensify cooperation between themselves and other platforms and social media companies to enhance best practice sharing.

The IT Companies and the European Commission, recognising the value of independent counter speech against hateful rhetoric and prejudice, aim to continue their work in identifying and promoting independent counter-narratives, new ideas and initiatives and supporting educational programs that encourage critical thinking.

The IT Companies to intensify their work with CSOs to deliver best practice training on countering hateful rhetoric and prejudice and increase the scale of their proactive outreach to CSOs to help them deliver effective counter speech campaigns. The European Commission, in cooperation with Member States, to contribute to this endeavour by taking steps to map CSOs' specific needs and demands in this respect.

The European Commission in coordination with Member States to promote the adherence to the commitments set out in this code of conduct also to other relevant platforms and social media companies.

The IT Companies and the European Commission agree to assess the public commitments in this code of conduct on a regular basis, including their impact. They also agree to further discuss how to promote transparency and encourage counter and alternative narratives. To this end, regular meetings will take place and a preliminary assessment will be reported to the High Level Group on Combating Racism, Xenophobia and all forms of intolerance by the end of 2016.

The Commission has been working with social media companies to ensure that hate speech is tackled online similarly to other media channels. The e-Commerce Directive (article 14) has led to the development of take-down procedures, but does not regulate them in detail. A “notice-and-action” procedure begins when someone notifies a hosting service provider – for instance a social network, an e-commerce platform or a company that hosts websites – about illegal content on the internet (for example, racist content, child abuse content or spam) and is concluded when a hosting service provider acts against the illegal content. Following the EU Colloquium on Fundamental Rights in October 2015 on ‘Tolerance and respect: preventing and combating Antisemitic and anti-Muslim hatred in Europe’, the Commission initiated a dialogue with IT companies, in cooperation with Member States and civil society, to see how best to tackle illegal online hate speech which spreads violence and hate.

The recent terror attacks and the use of social media by terrorist groups to radicalise young people have given more urgency to tackling this issue. The Commission already launched in December 2015 the EU Internet Forum to protect the public from the spread of terrorist material and terrorist exploitation of communication channels to facilitate and direct their activities. The Joint Statement of the extraordinary Justice and Home Affairs Council following the Brussels terrorist attacks underlined the need to step up work in this field and also to agree on a Code of Conduct on hate speech online.

The Framework Decision on Combatting Racism and Xenophobia criminalises the public incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin. This is the legal basis for defining illegal online content. Freedom of expression is a core European value which must be preserved. The European Court of Human Rights set out the important distinction between content that "offends, shocks or disturbs the State or any sector of the population" and content that contains genuine and serious incitement to violence and hatred. The Court has made clear that States may sanction or prevent the latter.
© The European Commission


Tech giants agree to EU rules on online hate speech

31/5/2016- Tech companies Facebook, Twitter, Microsoft and Google, owner of video service YouTube, agreed Tuesday to new rules from the European Union on how they manage hate speech infiltrating their networks. The rules push companies to review requests to remove illegal online hate speech within 24 hours and respond accordingly, as well as raise awareness among users on what content is appropriate for their services. In a joint statement from the European Commission and the companies involved, both sides say they recognize the "collective responsibility" to keep online spaces open for users to freely share their opinions.

"This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected," said Věra Jourová, the EU Commissioner for Justice, Consumers and Gender Equality, in a statement. In March, 32 people were killed in bombings at an airport and subway station in Brussels. The attacks and recent efforts by terrorist groups to recruit new members through social media including Facebook and YouTube prompted the new rule changes. "The recent terror attacks have reminded us of the urgent need to address illegal online hate speech," said Jourová. "Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racists use to spread violence and hatred."

Some U.S. privacy rights groups expressed concern that the agreement sets a dangerous precedent because the removals will be based on flagging by third parties.
“It does not address that different speech is deemed illegal in different jurisdictions, nor how such 'voluntary agreements' between the private sector and state might be imitated or misused outside Europe,” said Danny O’Brien, international director of the San Francisco-based Electronic Frontier Foundation, an online civic rights group.

However, some U.S. groups concerned about cyber hate hailed the agreement. Rabbi Abraham Cooper, head of the Simon Wiesenthal Center’s Digital Terrorism and Hate Project, called it a significant step in efforts to stop terrorists and extremists from leveraging the power of social media platforms.
He called upon U.S. companies to help in the effort. For example, he said a posting about an ancient slur on Jews that they use the blood of Christian children for ritual purposes would be removed in Germany, but remain untouched if posted through a U.S. server. “Hate is hate and if a social media company would remove such postings from its online pages in Germany, it should do the same globally,” he said.
© USA Today


Are tech firms neutral platforms or combatants in a propaganda war?

31/5/2016- Facebook founder and Chief Executive Mark Zuckerberg pledged two weeks ago to keep his company neutral when it comes to political discussions at home. On Tuesday, he promised the European Union he’d promote propaganda at the behest of Western governments. So how does neutrality allow for activism?

Facebook and other tech companies say they don’t want to house content that incites the sort of violence and hate that leads to terrorism. The social network along with Twitter, YouTube and Microsoft reached an agreement with the European Union to take down offensive speech within 24 hours. In addition, the companies said their platforms would “encourage counter and alternative narratives” to the inflammatory content promoted by extremist groups. But with that promise, analysts say tech firms risk blurring the lines for free speech and bolstering government influence on services that have billed themselves as neutral. How exactly Facebook and the other tech companies plan to promote content that undermines terrorist groups is unclear. Also unclear is what such content looks like. The companies did not respond to a request for interviews.

The agreement poses a potential conflict for the tech firms -- especially Facebook, which is facing questions about whether it’s a neutral platform or an ideologically driven media company. Conservatives last month accused Facebook’s team of news writers of suppressing their viewpoints. Of course, taking a nonpartisan stance on U.S. politics isn’t the same as ignoring the threat of hate and terror on social media. But it’s unlikely the new counterterrorism initiative will do much to quell neutrality concerns as it “smacks of promoting one kind of thought over another,” said Jan Dawson, an analyst at Jackdaw Research. “It's quite another thing to actively promote counter-programming,” he said. “That could run the risk of stoking fears that Facebook and Twitter in particular have particular policy agendas which they will use their platforms to promote. Both companies will have to be very careful to avoid being seen as partisan or favoring one set of acceptable speech over another.”

Silicon Valley has been under growing pressure from authorities worldwide to police its platforms, especially given how terrorist organizations such as Islamic State rely on social networks to recruit. The U.S. government has insisted at several meetings over the last year, including with Apple Chief Executive Tim Cook and other industry luminaries, that it needs the tech industry’s help to digitally spar with terrorist organizations that have grown their ranks through social media. With groups such as Islamic State, governments and social networks face a formidable foe for attention online. With the promise of martyrdom and glory, more than 30,000 foreign fighters have been lured to fight for the militant group. Though wary of associating too closely with governments, the tech industry has budged. Facebook is sharing data with activists and nonprofit groups about what shape counter-speech should take to give it the best chance of going viral.

But little is known about whether counter-speech or counter-narratives work effectively online -- largely because questions persist about who to target and how. “A lot of people in the U.S. think a good solution to bad speech is more good speech…. We don’t have much evidence or data to support that idea,” said Susan Benesch of the Berkman Center for Internet & Society at Harvard, who founded the Dangerous Speech Project, which aims to combat inflammatory speech while preserving freedom of expression. The challenge for tech companies, Benesch said, is determining where the line is for offensive material. Could a news report on U.S. drone policy, for example, be used as a terrorist recruiting tool? If so, should it be downplayed on social networks? “There’s content, like an academic article, that isn’t produced with hateful intent, but may have the same negative impact as hate speech,” Benesch said. The EU generally does not protect free speech the same way the U.S. does, but advocates of Internet freedom say the deal could lead to abuse in other countries.

Danny O'Brien, international director of the Electronic Frontier Foundation, said he was “deeply disappointed” with the agreement, which mends some of the troubles U.S. tech firms have faced for years in Europe over privacy concerns and protectionism. The EU has “rubber stamped the widespread removal of allegedly illegal content, based only on flagging by third parties,” O’Brien said. “It does not address that different speech is deemed illegal in different jurisdictions, nor how such 'voluntary agreements' between the private sector and state might be imitated or misused outside Europe.” EU officials said security threats necessitated Tuesday’s agreement.

"The recent terror attacks have reminded us of the urgent need to address illegal online hate speech,” Vera Jourova, the EU commissioner responsible for justice, consumers and gender equality, said in a prepared statement. “Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racists use to spread violence and hatred. This agreement is an important step forward to ensure that the Internet remains a place of free and democratic expression, where European values and laws are respected.”

The tech companies say they can balance the policing of hate speech with freedom of speech. “We remain committed to letting the tweets flow,” Karen White, Twitter's European head of public policy, said in a prepared statement. “However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.” Facebook remains encouraged by the possibility that speech countering extremist groups can weaken the pull of terrorists. Removing content “is not the way that we fix this problem,” Facebook's head of global policy management, Monika Bickert, said during an address at the Washington Institute for Near East Policy that was shared on YouTube. Getting people to challenge terrorists’ messages is “actually accomplished through more speech -- speech that encourages people to actually take a hard look at these groups and what they stand for and question it.”

Earlier this year, Facebook Chief Operating Officer Sheryl Sandberg said sharing stories about people who defected from Islamic State after being lured to the Middle East would be the “best thing to speak against recruitment by ISIS.” The company also has backed an Obama administration program called Peer 2 Peer, encouraging millennials to come up with anti-Islamic State messages. One of the organizers of the program, an education consulting firm called Edventure Partners, said Peer 2 Peer has helped troubled youths across the world while accumulating tens of thousands of likes and followers online. “We believe youth is able to better reach and impact their peers than traditional approaches,” said Tony Sgro, chief executive of Edventure Partners. “Government and adults have proven they can’t do it and are ineffective in creating alternative and counter narratives. Who better to push back on extremism than the very same audience extremists want to recruit?”
© The Los Angeles Times


Netherlands: TV presenter takes action over ‘surge of racist comments’

31/5/2016- Television presenter turned political hopeful Sylvana Simons said on Tuesday she would make a formal police complaint about the ‘surge of racist, sexist and discriminatory reactions’ her decision to go into politics had generated. Making a complaint would make clear that ‘demonstrable injustice should never go unpunished’, political party Denk, which Simons joined earlier this month, said in a statement. ‘It is time that the social discussion about racism took place in the political arena,’ Simons told reporters. ‘Combating injustice begins with registering it.’ The public prosecution department said last week it was looking into the comments directed at Simons to see if they were punishable by law. After going public with her decision to join Denk, Simons was dismissed on social media as a ‘Netherlander hater’, a ‘whinging negro’ and an ‘Erdogan helpmate’. One PVV supporter also launched a Facebook campaign to have her ‘waved out’ of the Netherlands on December 6, alongside Sinterklaas.
© The Dutch News


Russian Manhandled Over Social Media Comment

Police brutally arrest man for alleged extremist online activity.

20/5/2016- Federal police in St. Petersburg have accused a local man of inciting hatred and enmity following a comment he made on social media. Officers raided his house and pinned him to the ground with his hands tied behind his back, Meduza reports. The arrested individual, 36-year-old Artem Chebotarev, is co-founder of a community on Russia’s popular social network VKontakte called “Free Ingria”, named after a historical region that partly covers the north-west of Russia. The social media group connects users who believe that the St. Petersburg region should declare its independence from Russia, Radio Free Europe says. The investigators didn’t specify the contents of the social media comment but said Chebotarev had been posting statements against Moscow and its inhabitants, Russia’s business daily Kommersant reported. Federal police officials added that their tough approach was justified by information they had received that the man had weapons in his house. Chebotarev was later released, Russia’s Novaya Gazeta says.

# Russia toughened punishment for separatist ideas in 2014, after its annexation of the Crimean peninsula from Ukraine. The new legislation put stricter regulations on online media, as well as increased the prison terms for "public calls for actions violating the territorial integrity of the Russian Federation," The Guardian says.
# In late April, a man known as one of the “founding fathers” of the Russian Internet, Anton Nossik, was charged with extremism for a 2015 blog post about Syria. A prominent blogger, Nossik has been accused of hate speech under the criminal code and may be sentenced to up to four years in prison.
# Russia’s top investigator Alexander Bastrykin recently proposed changes in legislation that regulates the Internet, which would be based on the Chinese model. The suggested measures included restrictions on foreign ownership of Internet sites and a stricter definition of what constitutes extremism in relation to Crimea, among other things.
Compiled by Evgeny Deulin
© Transitions Online.


UK: Church Minister investigated over far-right and Islamophobic posts

Father David Lloyd, of the Newcastle parish in Bridgend, has since deleted his social media accounts.

30/5/2016- A minister is being investigated by the Church in Wales after posting on social media in support of far-right and Islamophobic groups. The Reverend Father David Lloyd, from Bridgend, posted on his Facebook page about “idiots” who dismissed the anti-Islam movement Pegida or Britain First video posts. The post read: “Those idiots who dismiss Pegida (Patriotic Europeans against the Islamisation of the West) or Britain First video posts, out of hand, should grow up, overcome their prejudices and WATCH the content before judging. You might discover these groups are working hard for YOUR freedom and YOUR children’s future while you stand idly by.” Father Lloyd, who represents the Newcastle Parish (central Bridgend) in the Diocese of Llandaff, shared Facebook posts from groups such as Islam Exposed, and told his followers controversial figure Tommy Robinson of Pegida should be “applauded and supported”.

He also posted about comedian Lenny Henry, who he claimed was “never happy”. The Reverend added: “BBC is ‘too white’ for him now. He wanted to get rid of the Minstrels Show. Knight ‘em and they go political.” The posts drew criticism online, with anti-hate group IRBF calling for his resignation. Father Lloyd has since deleted his social media accounts, but the IRBF screen-grabbed them and posted the messages from their own Twitter accounts. Before deleting his accounts, Father Lloyd posted on Facebook: “Due to abusive phone calls to my wife, me and now my superiors at The Church in Wales, I will no longer be posting as an individual. My parish page will still run. Thanks for all the fun you’ve shared with me and for helping me through the dark, painful and sleep-deprived times. From an alleged ‘racist and Islamophobe’ and your friend David.”

A spokeswoman from the Church in Wales said Father Lloyd had “apologised” for the messages. She added: “The Revd David Lloyd’s views expressed in his tweets were his personal ones and not those of the Church in Wales. He has apologised for any offence they caused and has closed down his social media accounts.” Father Lloyd was approached for comment but has not responded.
© Wales Online


Headlines May 2016

Are EU having a laugh? Europe passes hopeless cyber-commerce rules

When compromise becomes why bother at all

27/5/2016- The European Commission (EC) has approved a series of ecommerce rules designed to make Europe more competitive online. In true European fashion, however, the proposals contain a lengthy series of inconsistent compromises and avoid altogether the most complex policy issues, making them largely worthless. Vice-President for the Digital Single Market, Andrus Ansip, said of the measures: "All too often people are blocked from accessing the best offers when shopping online, or decide not to buy cross-border because the delivery prices are too high, or they are worried about how to claim their rights if something goes wrong. "We want to solve the problems that are preventing consumers and businesses from fully enjoying the opportunities of buying and selling products and services online." Except the rules don't do that. While companies in Europe will be obliged to sell to anyone else in the European Union, they won't have to ship goods there.

So a consumer in, say, Poland can now buy goods from, say, Spain. But if that Spanish company doesn't want to ship them, it can inform its Polish customer that they need to travel to Spain to pick them up. In another sign of the hopeless EC bureaucracy mindset, there won't be rules around shipping rates across Europe (which are notoriously inconsistent), but the EC will spend a lot of money creating a website that will attempt to list all those rates. "The Regulation will give national postal regulators the data they need to monitor cross-border markets and check the affordability and cost-orientation of prices," the EC announced. "It will also encourage competition by requiring transparent and non-discriminatory third-party access to cross-border parcel delivery services and infrastructure. The Commission will publish public listed prices of universal service providers to increase peer competition and tariff transparency." It will most likely be a gigantic waste of everyone's time and become just one more service that the EC offers at great expense but which no one uses.

That's digital economy
Worst of all, however, is the fact that the Commission has exempted digital goods from its digital single market, so companies will be able to continue to geo-block videos and other digital files. The proposals have attracted some attention – particularly outside Europe – over the plan to treat the internet in the same way as cable television and seek to require content companies like Netflix to make sure 20 per cent of their programming comes from Europe. A Netflix spokesman responded by saying that over 20 per cent of what the company offers already comes from Europe, but questioned whether a requirement for content providers to purchase the rights to content from a specific geographic area was really going to help the European film and TV industries thrive.

As to the critical issue of "internet platforms" that offer telecommunications – such as Skype or WhatsApp – and what rules should apply to them, the Commission simply punted the issue into the long grass, ensuring that future efforts to put rules in place will be even more difficult. While failing to come up with answers to the kinds of policy questions that the EC exists to produce, it did manage to draw up new rules for others to interpret and enact, in particular a vague "code of conduct" aimed at dealing with hate speech online that companies will have to figure out how to make work, while the EC watches over their shoulders tutting.
© The Register


UK: Yvette Cooper leads campaign to ‘reclaim the internet’ from sexist trolls

Labour’s Yvette Cooper is leading a cross-party campaign to tackle online misogyny.

26/5/2016- The former Labour leadership candidate today launched a campaign to ‘Reclaim the Internet’, fighting back against the online abuse that women face every day online. Cooper launched the campaign alongside the Tory equalities select committee chair Maria Miller, former Lib Dem equalities minister Jo Swinson, and Labour’s Jess Phillips. Think-tank Demos released an analysis of social media misogyny, tracking the use of the words “slut” and “whore” by Twitter users in the UK. It found that more than 6,500 individuals were targeted in the UK, with more than 10,000 tweets sent.

Ms Cooper said: “Forty years ago women took to the streets to challenge attitudes and demand action against harassment on the streets. “Today the internet is our streets and public spaces. “Yet for some people online harassment, bullying, misogyny, racism or homophobia can end up poisoning the internet and stopping them from speaking out. “We have responsibilities as online citizens to make sure the internet is a safe space. Challenging online abuse can’t be done by any organisation alone … This needs everyone.” The campaign seeks to engage with officials from Facebook and Twitter to develop new methods of dealing with abuse, while an online forum aims to gather submissions from the public.

Demos Researcher Alex Krasodomski-Jones said: “This study provides a birds-eye snapshot of what is ultimately a very personal and often traumatic experience for women. “While we have focused on Twitter, who are considerably more generous in sharing their data with researchers like us, it’s important to note that misogyny is prevalent across all social media, and we must make sure that the other big tech companies are also involved in discussions around education and developing solutions.”
© The Pink News


German Pegida row over non-white photos on Kinder bars

Members of the anti-Islam protest group Pegida in Germany have complained about images of non-white children on Kinder chocolate bar packets.

24/5/2016- A Pegida Facebook page in Baden-Wuerttemberg asked: "Is this a joke?" But after being told the photos were childhood photos of Germany's footballers being used in Euro-2016-linked marketing, they admitted they had "dived into a wasps' nest". Kinder said it would not tolerate "xenophobia or discrimination". A photograph of two chocolate bars was circulated by the person behind the Bodensee Facebook group of Pegida (Patriotic Europeans Against the Islamisation of the West). For decades, Kinder packaging has featured a blonde-haired, blue-eyed boy. But in a marketing campaign ahead of the Euro 2016 football tournament, Kinder has started to use photographs of the German team's players when they were children.

'Is this a joke?'
The two bars that the Pegida group complained about pictured Ilkay Guendogan and Jerome Boateng, both German nationals who play in the Bundesliga as well as the national team. Seemingly without realising this, the group's admin wrote: "They'll stop at nothing. Can you really buy these? Or is it a joke?" One commenter responded: "Do the Turks and other countries use pictures of German children on their sweets or groceries? Surely not." Soon the comments filled with explanations of the marketing campaign, and a backlash against the Pegida group. One person wrote: "Close the borders and have no exports, no migration! Then you'll get unemployment and local league football." Another wrote: "If one of those men scores a goal he'll be celebrated." The negative reaction forced the original poster to write that it was "best not to respond" and that they had "really dived into a wasps' nest." After being alerted to the ongoing discussion on Facebook, Kinder's manufacturers Ferrero wrote: "We would like to explicitly distance ourselves from every kind of xenophobia and discrimination. We do not accept or tolerate these in our Facebook communities either."
© BBC News


Japan: Diet passes first law to curb hate speech

24/5/2016- Japan’s first anti-hate speech law passed the Diet on Tuesday, marking a step forward in the nation’s long-stalled efforts to curb racial discrimination. But the legislation has been dogged by skepticism, with critics slamming it as philosophical at best and toothless window dressing at worst. The ruling coalition-backed law seeks to eliminate hate speech, which exploded onto the scene around 2013 amid Japan’s deteriorating relationship with South Korea. It is the first such law in a country that has long failed to tackle the issue of racism despite its membership in the U.N. International Convention on the Elimination of All Forms of Racial Discrimination. Critics, however, have decried the legislation as ineffective. While it condemns unjustly discriminatory language as “unforgivable,” it doesn’t legally ban hate speech and sets no penalty.

How effective the law will be in helping prevent the rallies frequently organized by ultraconservative groups calling for the banishment or even massacre of ethnic Korean residents remains to be seen. Critics including the Japan Lawyers Network for Refugees have also pointed out the law is only intended to cover people of overseas origin and their descendants “who live legally in Japan.” The law’s mention of legality, they say, will exclude many foreign residents without valid visas, such as asylum seekers and overstayers. Submitted by lawmakers from the Liberal Democratic Party and Komeito, the bill initially limited its definition of hate speech to threats to bodies, lives and freedom of non-Japanese as well as other incendiary language aimed at excluding them. But at the urging of the Democratic Party, the scope of the legislation was expanded to cover “egregious insults” against foreign residents.

The law defines the responsibility of the state and municipalities in taking measures against hate speech, such as setting up consultation systems and better educating the public on the need to eradicate such language. The Justice Ministry’s first comprehensive probe into hate speech found in March that demonstrations organized by the anti-Korean activist group Zaitokukai and other conservative organizations still occur on a regular basis, although not all involve invectives against ethnic minorities. A total of 347 such rallies took place in 2013, while 378 were held in 2014 and 190 from January through September last year, the Justice Ministry said.
© The Japan Times


The 5 things you say when you're a racist

by Brianna Cox

24/5/2016- As someone who has been writing on the internet for a few years now, I know that trolls come with the territory. But perhaps the most mind-boggling part is that when you explicitly spell out racist or otherwise overtly offensive things and explain exactly why they are offensive, people stampede to the comment thread to literally prove the article's point. And the thing is... they always respond with the same old tired arguments. Always.

Many Americans just do not know that much about racism and its systemic nature. So when there's a discussion, they get defensive at the very least, and cruel at the very worst. It's not surprising that the responses follow the same pattern when studies have shown that white Americans think "reverse racism" (which is not a thing) is a bigger problem than anti-black racism, despite virtually no peer-reviewed evidence to support this. Or, perhaps even worse, many take their uninformed opinion and preach it forward to the next generation, so that their children also do not understand racism (or "see color"). But that doesn't mean it's OK to respond to someone who says "you're racist" or "this is racism" with an attack. So allow me to break down (once again) exactly why these arguments are full of it:

The First Amendment argument
Writing an article or calling out racism/homophobia/xenophobia/transantagonism, etc., is not oppressing free speech in any way, shape or form. Ironically enough, the First Amendment’s existence allows us to shout from the rooftops our displeasure at the awful shit that bigots have to say. Additionally, freedom of speech does not and has never equated to freedom from consequences; there are many instances in which free speech is already regulated in our society (in the public and private sectors). Try again.
The 'You’re the Real Racist' argument (alt: the Obamas have divided the country) (alt: stop making it about race)
When many of us speak about racism, we are speaking about the institutional and systemic ways in which nonwhite people in America have openly and covertly been kept from the opportunities of their white counterparts. So in that framework, nonwhite people cannot oppress white people. Even if we could, talking about systemic inequality and the microaggressions, words and actions that perpetuate it is not oppression in any way. Additionally, the Obamas barely talk about race (I wish they did more), so it seems that what divides the country regarding the Obamas is their very existence as a black family in the White House.

The 'If You Stop Talking About Racism, It Will Go Away' argument
When is the last time covering literal feces up with a paper towel made it go away?

The 'Your Objectivity Is Clouded By Prejudice' argument
Because apparently only white men/people are capable of being objective, rather than being influenced by their place in society and the experiences that come with that place.

The 'You People Are So Easily Offended' argument
I see people angry at “social justice warriors” and people of color speaking out against racism, saying that those of us who do are just overly sensitive — and yet some of those very same folks will say that Star Wars’ casting is white genocide, and that Old Navy hates white babies because they have an ad with an interracial couple. See also: anger and refusal to understand anything about racism, or the meme utterance of “white privilege.”

The ad hominem attack
Calling a writer ugly, her interracial marriage “gross,” drawing Michelle Obama as a man and creeping on a stranger's Facebook profile to poke fun at their weight are personal attacks that do not at all engage with the actual arguments. That's being both defensive and cruel, and demonstrating you do not have an actual argument to fall back on.
© She Knows


Too fat for Facebook: photo banned for depicting body in 'undesirable manner'

Facebook has apologized for wrongly banning a photo of plus-sized model Tess Holliday for violating its ‘health and fitness’ advertising policy

23/5/2016- Facebook has apologized for banning a photo of a plus-sized model and telling the feminist group that posted the image that it depicts “body parts in an undesirable manner”. Cherchez la Femme, an Australian group that hosts popular culture talkshows with “an unapologetically feminist angle”, said Facebook rejected an advert featuring Tess Holliday, a plus-sized model wearing a bikini, telling the group it violated the company’s “ad guidelines”. After the group appealed the rejection, Facebook’s ad team initially defended the decision, writing that the photo failed to comply with the social networking site’s “health and fitness policy”. “Ads may not depict a state of health or body weight as being perfect or extremely undesirable,” Facebook wrote. “Ads like these are not allowed since they make viewers feel bad about themselves. Instead, we recommend using an image of a relevant activity, such as running or riding a bike.”

In a statement Monday, Facebook apologized for its original stance and said it had determined that the photo does comply with its guidelines. “Our team processes millions of advertising images each week, and in some instances we incorrectly prohibit ads,” the statement said. “This image does not violate our ad policies. We apologize for the error and have let the advertiser know we are approving their ad.” The photo – for an event called Cherchez La Femme: Feminism and Fat – features a smiling Holliday wearing a standard bikini. Facebook had originally allowed the event page to remain, but refused to approve the group’s advert, which would have boosted the post.

The policy in question is aimed at blocking content that encourages unhealthy weight loss – the opposite of the intent of Cherchez la Femme, which was promoting body positivity. This is not the first time Facebook has come under fire for its censorship of photos. In March, the site faced backlash when it concluded that a photograph of topless Aboriginal women in ceremonial paint, taken as part of a protest, violated “community standards”. Critics said that ban was an obvious double standard, noting that Facebook allows celebrities such as Kim Kardashian to pose with body paint covering their nipples. Instagram and Facebook have also faced opposition for policies banning women from exposing their nipples, with critics arguing that the guidelines are prejudiced against women and transgender users.

Cherchez la Femme did not immediately respond to a request for comment on Monday, but has been venting its frustrations on its Facebook page. “Facebook has ignored the fact that our event is going to be discussing body positivity (which comes in all shapes and sizes, but in the particular case of our event, fat bodies), and has instead come to the conclusion that we’ve set out to make women feel bad about themselves by posting an image of a wonderful plus sized woman,” the group said. “We’re raging pretty hard over here.”
© The Guardian.


INACH - International Network Against CyberHate

The object of INACH, the International Network Against Cyberhate, is to combat discrimination on the Internet. INACH is a foundation under Dutch law and is seated in Amsterdam. INACH was founded on October 4, 2002 by Magenta Foundation, Complaints Bureau for Discrimination on the Internet.