- Czech extremism report: Islamophobic group uses cyber-bullying
- Britain First App Absolutely Slated Online
- UK: #FreeSpeechStories: Should you lose your job for racism online?
- Social media must curb hate speech, says France
- France hands out first fines for anti-gay tweets
- France: Paris attacks: Internet fuels conspiracy theories
- Why has the Westboro Baptist Church been suspended on Twitter?
- Would you unfriend someone for their politics?
- UK: Attorney General steps into social media racism row
- Ireland: Ladies and trolls: Should we make cyberbullying a crime?
- Twitter and Facebook 'allowing Islamophobia to flourish'
- USA: Racism and Bullying Online Issues for Many
- Thousands sign online petition against German anti-Islam PEGIDA movement
- India: Five-member expert group to tackle cyber crimes
- Removing Hatred from Steam leaves awkward questions for Valve
- 10 Racist Bands You Won't Believe Are On iTunes
- Calls to Ban Australian Defence League Following Inflammatory Facebook Post
- Dutch privacy watchdog threatens Google with 15m fine
- USA: That Ferguson Comment You Made on Facebook Could Get You Fired
- Australia: OHPI has launched Fight Against Hate (press release)
- Report: Hate Music Still Being Sold Via iTunes, Amazon and Spotify
- Yahoo News and the Hate Site
- UK: Internet troll admits Facebook abuse
- UK: Crime warning on social media abuse
- UK: White Paper - The Role of Prevent in Countering Online Extremism
- Israel: How to fight anti-Semitism online?
- Romanian Court Rules Facebook Pages Not Private
- Germany: Berlin introduces 'anti-nazi' application
- Racists Getting Fired exposes weaknesses of Internet vigilantism
- Israeli teens see spike in online anti-Semitism
- BDS Group Spreads Photoshopped Image of Concentration Camp Inmates Holding Anti-Israel Posters
- UK: Twitter's 'defensiveness' on online anti-Semitism criticised by MPs
- UK: Facebook hosted Lee Rigby death chat ahead of soldier's murder
- UK: New team targeting online crime
- OSCE Centre supports conf. in Kazakhstan on countering terrorist use of Internet
- Censoring the Web Isn't the Solution to Terrorism or Counterfeiting. It's the Problem. (opinion)
- USA: Supreme Court faces a new frontier: Threats on Facebook
- USA: Social media proves racism is far from gone (opinion)
- Homophobic 'Ass Hunters' Game Removed From Google Play After 1000s Of Downloads
- UK: You've got hate mail: how Islamophobia takes root online
- UK: Eastwood blogger Simon Tomlin guilty of harassment
- USA: Neo-Nazi gets 17 years for email threats
- USA: Gay Slur Removed From Google Maps
- Hate Crimes in Cyberspace, by Danielle Keats Citron (book review)
- Ireland: New legislation needed to stop online trolls
- Malta: Website helps report racism
- Gaza war caused explosion of online hate speech in Europe, report finds
- 'Facebook Murder' - Should Crimes Using Social Networks Get Their Own Category?
- Australia: Racist posts on Facebook - how should you respond?
- USA: Law Enforcement Increasingly Reliant on Social Media
- USA: Court Agrees to Reconsider Decision Over Benghazi-Linked Anti-Islam Video
- Racism in Canada finds fertile ground online
- Does Twitter have a secret weapon for silencing trolls?
- UK PM Cameron says Internet must not 'be an ungoverned space'
- UK: How can football tackle the social media hate merchants?
- Disturbing Trend: Pro-Palestinians Promoting Carintifada on Social Media
- Game hit Clash of Clans allows opportunity for anti-Semitism
- UK: Ed Miliband demands zero-tolerance approach to antisemitism
- Canada: B.C. minor hockey coach fired over pro-Nazi Facebook posts
- Why terrorists and far-Right extremists will always be early adopters
- Kremlin Attack on Russian Website for Nazi List of Wealthy Jews Meets Skeptical Response
- Australia: How Facebook decides what to take down
- Why online Islamophobia is difficult to stop
- Social Networks Bringing People Together like Never Before (opinion)
- 'It's hard being openly Jewish'
- Hungary's 'internet tax' sparks protests
- Mob Violence Has No Place in Ireland (press statement)
- UK: J. Mann MP: Berger abuse reveals failure to curb racism on Twitter (opinion)
- UK: PDMS technology powers innovative new website for UK police
- UK: Far Right on Facebook - The group with more likes than all three main parties
- UK: Neo-Nazi gave out internet abuse tips in campaign against MP Berger
- UK: Silencing extreme views, even if they are those of internet trolls, is wrong (opinion)
- Hate Speech Is Drowning Reddit and No One Can Stop It
23/1/2015- The ultra-right scene in the Czech Republic has recently focused on "combating Islam". The most active group is the Czech Defense League (CZDL), represented by "We Don't Want Islam in the Czech Republic" (IVČRN) on the Facebook social networking site. Those are the findings of a report by the Czech Interior Ministry on the last quarter of 2014. A total of 51 actions were held during that time that were either convened directly by politically extremist entities or involved their participation: 29 events on the ultra-right side of the spectrum and 22 on the ultra-left. Year-on-year, a significant decline in the number of ultra-right actions was noted, as well as a slight decline in ultra-left ones. "The ultra-right scene continues to act in a comparatively fragmented, inconsistent way. Its long-term, frequently-mentioned crisis of financing, issues and personnel has created limits as to what its active entities can do. The scene has had long-term problems, not only in reaching the broader public through its actions and winning adherents, but also in convening and mobilizing its existing sympathizers," the report says. The Interior Ministry says this is exemplified by the comparatively low turnout for an assembly marking the 17 November holiday in Brno last year. Only 80 ultra-right participants attended that event.
Islamophobes use cyberbullying as a weapon
According to the report, the ultra-right scene has recently oriented itself more toward manifesting Islamophobia. The Czech Defense League (CZDL), represented by "We Don't Want Islam in the Czech Republic" (IVČRN) on the Facebook social networking site, is a group that has long profiled itself in this particular way. "A rising number of cases of the group cyber-bullying those opposed to it have been alleged," the report reads. The Workers Social Justice Party (Dělnická strana sociální spravedlnosti - DSSS) and its youth organization, Workers Youth (Dělnická mládež - DM), are also mentioned in the context of Islamophobia. The DSSS has interpreted the Muslim community as a threat, not just from a security perspective, but from that of the demise of European culture and traditions. "The exploitation of that topic is linked to their closer collaboration with West European ultra-right entities in particular, as well as to their efforts to reach out to new followers," the ministry warns.
Nazi salute given at neo-Nazi concert in Brno
At the end of 2014 there were several neo-Nazi concerts in the Czech Republic. Approximately 80 people attended a concert in Brno on 16 November by the bands Squad 96, Karlos Band and the Slovak performer Reborn. Police had to address several audience members giving the Nazi salute there. Activists from the "Generace identity" (Generations of Identity) group organized a concert on 29 November at the Na Kopečku restaurant in Ústí nad Labem, primarily for an audience who crossed the border from Germany to attend. The ad for the concert described it as taking place in "Central Germany". The German bands Blutzeugen and Confident of Victory, both infamous on the so-called White Power Music scene, performed there.
Left-wing extremists mobilize, support cheated employees
The ministry reports that the ultra-left and, by extension, the anarchist scene continues to mobilize. "A rather high turnout of around 300 people was noted at the ultra-left demonstration on 17 November in Prague convened under the name 'Dignity, Housing, Income'," says the report. "Part of the anarchist scene has found an opportunity to apply itself by participating in activities and support for the Most District Solidarity Network (Mostecká solidární síť - MSS) and for Solis Praha. The aim of these entities is to draw attention to the alleged wrongdoing by employers against employees (e.g., alleged non-payment of wages)," says the report, noting an action against the Prague restaurant Řízkárna, where some employees were not being paid. MSS convened protests at the restaurant and police intervened against those participating. The ministry says these are activists, often from ultra-left environments, who do their best to present themselves as an alternative to both non-state and state institutions. "Demand for the 'services' of such entities has grown. It may be that this group is now testing a strategy against smaller businesses that could then be applied against a larger number of businesses," the report says.
"Speaking hyperbolically, the fact that activities on behalf of working people who have been harmed are now being monitored by the authorities is a testament to their success," a participant in the action against the Řízkárna restaurant told news server Romea.cz. The ministerial report also called the occupation of the "Clinic" on 3 December in the Žižkov quarter of Prague an action by the extreme left. "During December there were actions expressing solidarity and support for the Clinic initiative, and not just in Prague, after the allegedly brutal clearance of the building by the Police of the Czech Republic. A demonstration in Prague on 13 December was attended by roughly 700 people. The scandal also sparked discussion for a certain time on the possible full or partial legalization of squatting," says the report, warning that there was an attempt to set a police vehicle on fire in connection with the Clinic. "The 'Anarchist Solidarity Action' took responsibility for that deed and its performance sparked a positive response from radical activists in particular. The Clinic initiative distanced itself from it," says the report.
Britain First have a new mobile app. HURRAH!
23/1/2015- Available on Google Play and the App Store, it promises: "With this App you can join forces with patriots like you!" But by far the best thing about it is the reviews...
Impressed that Chimps can develop an app.
Though we live in a democracy and must accept that morons like Golding and his primate gang of uneducated, benefit scrounging, council estate chimps are entitled to their opinion, doesn't mean we have to agree with it. Having looked through the app, it's poorly constructed and full of racist tosh. I only downloaded it to place this negative review. I will have deleted it by the time you read this. Will be reporting it to Apple for containing hate speech. Would give zero stars if that were possible.
Amazing Oh my deyyyyyzzz. dis ap is amezing. B4 I got it tere were lodz of muslams in ma road and dey like had muslamic ray guns everywhere. But dey r all gone now and sows my unwashed library leftie neihbour. I can fell de British blud running threw me. I luv ma queen and fello Brits so much more alredy. Ma mam always sad I woildnt amont to nuffink well dis ap shows her. I am a beer swigging nazi loving patriotic thug. I fink it's evan improved ma spelling
Useless app, vile racist content
Goodness me what a vile app and a vile bunch of people. Britain First are obviously just racists. The app is just full of lies and hate, and then it keeps trying to get you to donate or buy horrible rubbish from the patriot store. The whole thing is inciting violence and hatred and trying to scam you out of money. Utterly, Utterly VILE VILE VILE! Shame on Apple for allowing this racist neo-Nazi filth on their app store.
The best app for racists there is! As soon as I downloaded this app I noticed an immediate Increase to my patriotism. On top of that, a number of the brown people in my street went back where they came from. I used it on a person wearing a burqa, and boom, they were immediately in a tracksuit and baseball cap, like they should be. Great App, but can it be whiter?
Fascist scum! You can't censor these comments! A group of sly Nazi conmen using war memorials and hijacking the poppy appeal to further their fascist agenda. No pasaran! Does the app come with a pitchfork and dragging knuckles?
Great app you guys! I mean, this way all your gazillion followers (that's what you claim to have now, right?) can immediately show their friends and family how much of a racist, thick bigot they are without even speaking (or grunting!)
Best app for Nazis
Wow was wondering where I could find an app to connect with other like-minded knuckle dragging bigots. The ability to purchase mock patriotic tat and line the pockets of racists is a lovely touch too.
Amazingly, even the positive reviews serve only to reinforce what a lovely bunch of people Britain First supporters are...
"Great app" Love my country, Love britain first, hate all you leftie fuckwits who aint got the bollocks to stand up and do something positive!
© The Huffington Post - UK
19/1/2015- We've been asking you whether a person should lose their job for posting racial slurs online. Earlier we told the story of the Tumblr blog Racists Getting Fired, and spoke to a man who got the sack after using a highly offensive racial slur on Facebook. There have been a number of cases in which individuals have lost their jobs over posting controversial comments online. We've been looking at some of your responses to our blog, and the comments have been really diverse.
Anne Catherine tweeted "NO. Unless you are on a company account/computer, what you say off hours should not be reason enough to fire you." Mirka from Australia felt that individuals should be allowed to correct their statements. "A reaction of dismissal for a few statements, without providing a person with an opportunity to correct actions? The employer could highlight how some statements may be undesirable. Give people a chance, it is their livelihood." Mirka isn't the only one who thinks that perhaps re-education is the answer. "Give them a choice: re-educate them or get the sack," said one response.
But how far should employers go in making posting offensive content a "sackable offence"? Sean Saunders responded: "You can't say what you like online without being held responsible by your employer, who you represent." Most of the chatter directed our way was about racism and offensive comments, but what about the vigilantes hunting down those comments? "Racism is wrong, and so is thought-policing," said commenter Moominmama in a comment on our blog. "Why does Logan Smith (who runs the @YesYoureRacist Twitter account) think he can be judge, jury and executioner? Why should he decide what is racist and what is not? Most people don't know what the word means," said another commenter. And it doesn't matter if you were just intending a comment to be made for a small group of people - social media is public, a point reiterated by Patrycja Szpyra: "The Internet is now a key part of society. The internet is public and if you can't behave, you'll get in trouble."
Reporting by Ravin Sampat
© BBC News
France is calling for the international regulation of social networks in order to crack down on “racist and anti-Semitic propaganda,” a senior minister said on Thursday at the UN’s first-ever summit on tackling anti-Semitism.
23/1/2015- “There are hate videos [online], calls for death, propaganda that have not been responded to, and we need to respond,” Harlem Désir, France’s State Secretary for European Affairs, told reporters on the sidelines of the General Assembly meeting. “[Those who propagate] terrorism, religious fanaticism, jihadism and radical Islam use the Internet enormously,” he said. “We must limit the dissemination of these messages.” Désir lambasted social networks for what he described as a failure to take responsibility for “racist or anti-Semitic” content published on their platforms, citing Facebook and Twitter as examples. “We want to be clear with what we have seen – that those networks are used to promote violence, discrimination, and hatred.
The answer from these companies has been to say that ‘we are not responsible for what is said’.” In response, France wants to create a legal framework that would “place the responsibility on those who are passing the message, even if they are not deciding the message,” he said. Désir spoke alongside his German counterpart Michael Roth, two weeks after four Jewish shoppers were shot dead in a kosher supermarket as part of a three-day assault by Islamist terrorists in and around the French capital. The call for stricter online regulations comes as part of a wider programme to heighten security and increase surveillance, as France reels from its worst terror attacks in decades.
‘Difficult’ issue of freedom of expression
Désir was keen to stress that the proposed law would not target freedom of expression, a principle for which some four million people in France marched in support after the terror attacks, which began on the morning of January 7 with the massacre of “blasphemous” cartoonists at satirical magazine Charlie Hebdo. “It’s very difficult because we are profoundly attached to the principle of freedom of expression,” he said. “There needs to be a clear distinction between freedom of expression, which is a fundamental right, and the liberty to incite hate, discrimination, and death.” Comparing hate speech with the dissemination of child pornography, he called for a consensual international effort to curtail it. “This can’t be done country by country,” he said, adding that the European Union member heads of state would address the issue on February 12. The US Ambassador to the United Nations, Samantha Power, described Désir’s plan as an “interesting proposal” that would require consulting both the general public and the private sector. “We’re very alert to the extent to which social media platforms are being exploited by violent extremists across the board, including by al Qaeda and Islamic State,” Power said, also stressing the importance of protecting freedom of speech.
Responding to questions by FRANCE 24, Désir said the rise in anti-Semitism had been partly fuelled by the conflict between Israel and the Palestinian Territories. “We don’t want this conflict to be a pretext used by a new anti-Semitism to promote the hatred of Jews,” he said. “You have the right to disagree with the policy of either government. But you don’t have the right to promote discrimination, hatred or violence,” he reiterated. France has long been criticised for failing to address Islamophobia as seriously as anti-Semitism, a critique exacerbated by a spike in Islamophobic incidents following the Paris attacks. Désir insisted that members of the two faiths were treated equally across the continent. “The common basis across Europe is that you can be and you are condemned if you promote hatred against Muslims the same way you are if you promote hatred against Jews. There is the same condemnation for an attack on a mosque as there is for an attack on a synagogue,” he said.
Earlier in the day, Saudi Arabia's Ambassador to the UN, Abdullah al-Mouallimi, stressed the relationship between Islamophobia and anti-Semitism. “We have witnessed with growing concern the increase in hate crimes around the world, and we are very concerned because some arbitrarily reject their responsibilities in this regard,” al-Mouallimi said on behalf of the 57-nation Organization of Islamic Cooperation. “Anti-Semitism and Islamophobia and all crimes that are based on religious hate are inextricably linked, they’re inseparable.” Bernard-Henri Lévy, a controversial French philosopher who is ridiculed in France as much as he is respected, opened the summit on Thursday with an impassioned address, describing anti-Semitism as “the mother of all hates”. Lévy said that “faulting the Jews is once again becoming the rallying cry of a new order of assassins, unless it is the same but cloaked in new habits”.
© France 24.
Three French Twitter users were fined this week for sending tweets that included homophobic hashtags. It's the first time a French court has handed out convictions for homophobic abuse on Twitter.
21/1/2015- The convictions date back to offences committed in 2013 when several homophobic hashtags appeared on Twitter in France, including "Gays must die because..." (#Lesgaysdoiventdisparaîtrecar). The three who were convicted in the Paris court this week posted tweets using the hashtag "let’s burn the gays on..." (#brûlonslesgayssurdu). The case against the three had been brought by French charity Comité Idaho, which organizes the International Day Against Homophobia in France. It had filed a complaint against the users for inciting hatred and violence on the basis of sexual orientation. The punishments handed down, however, were fairly light - one was fined €300 while the other two were ordered to pay €500 - given that the maximum punishment is up to a year in prison and a €45,000 fine. All three were also ordered to pay the same amount to Comité Idaho, which welcomed the ruling this week.
“It’s a significant victory,” Alexandre Marcel, president of the Comité Idaho, told The Local. “But it’s a small amount to pay for calling for the death of homosexuals.” Gay rights groups in France regularly report homophobic hashtags, which Twitter then removes from trending topics to make them less visible. But in August 2013 the hashtag #lesgaysdoiventdisparaîtrecar (Gays must die because), was displayed at the top of the list, and wasn't immediately taken down. France's then government spokeswoman Najat Vallaud-Belkacem was forced to step in and took to Twitter herself to denounce the trend. “I condemn homophobic tweets. Our work with Twitter and groups against homophobia is essential,” she said at the time. Wednesday's court ruling was also welcomed by "SOS Homophobie", which reports on homophobic tweets.
“We’re positive that this will send out the message that the Internet is not a place with no rules where you can do whatever you want,” Yohann Roszewitch, president of the association told The Local. Last year the anti-homophobia association released a report revealing that the number of homophobic acts in France had increased by 78 percent in 2013, the year in which gay marriage was legalized. According to the association the huge surge in the number of homophobic incidents was without doubt linked to the bitter row over the legalization of gay marriage, which divided France and led to mass demonstrations that frequently ended in violent clashes between police and extremists.
This isn’t the first time that Twitter has been embroiled in controversy in France surrounding hateful messages. In July 2013, The Local reported how the website was forced by a French court’s ruling to hand over information identifying Twitter users who had published anti-Semitic comments, including under the term #UnBonJuif (A good Jew). Speaking after the court’s decision, lawyer Philippe Schmidt told The Local that remarks made on Twitter should be treated the same as if they were made in any public forum.
© The Local - France
Was the Charlie Hebdo attack orchestrated by the French secret service? Or perhaps Israel's Mossad? Almost two weeks after the bloody shooting, conspiracy theories are still rebounding around the web. We take a look at some of them.
19/1/2015- When asked about the Charlie Hebdo attack a group of French schoolchildren reportedly told their teachers they didn't believe the official version of events. Their minds may have been swayed by some of the wild conspiracy theories that have spread around the web. Could the January 7 Charlie Hebdo attack have been a secret service operation, or perhaps an anti-Muslim plot? The wildest conspiracy theories found their way onto the Internet within hours of the Paris bloodbath. Just as it did in the wake of the September 11, 2001, attacks in the United States, the rumour machine moved into top gear from the very moment the first reports emerged. Among the most frequently mentioned is the apparent change in colour of the rearview mirrors of a car used by the Kouachi brothers -- white on an image taken near the Charlie Hebdo office where they killed 12 people and black in a later image of the abandoned vehicle.
Experts put the change down to the fact that the mirrors were made out of chrome, a material that can change colour according to the light. Other details providing rich material for the conspiracy theorists included the identity card mislaid by one of the Kouachi brothers and the telephone receiver not properly put back on its hook at the supermarket where gunman Amedy Coulibaly killed four people during a hostage siege two days later. Even the route of the January 11 solidarity march through Paris has been given dubious significance in the minds of some, with claims that it mirrored the outline of Israel's borders.
Official theories too dull
Emmanuel Taieb, a professor at the Sciences-Po Lyon university in central-eastern France and a specialist in conspiracy plots, said that for many the official interpretation of events -- as provided by the police, politicians and analysts -- was simply too dull. "It is considered poor, disappointing. So it is ruled out or questioned in favour of a more appealing, worrying analysis," he said. Observers say that young people, for whom the Internet is their main source of information, are particularly vulnerable to believing everything they read online. Mohamed Tria, 49, a business executive and president of the La Duchere football club in a tough area of Lyon, said the mainstream interpretation of the attacks was far from the norm in some places. "I met around 40 kids aged between 13 and 15 in my club. I was astounded by what I heard," he said. "They had not got their information from newspapers, but from social networks, it's the only accessible source for them and they believe what they read there as if it is the truth," he said.
Others said adults now have far less control over what young people opt to believe. "For 30 years, 90 percent of what children learned came from either their parents or school. Now, it's the other way round. We need education about social networks," a teacher at a roundtable discussion in the northern Paris suburb of Sarcelles said last week. For Guillaume Brossard, co-founder of hoaxbuster.com, a site that allows people to check the validity of information, it is as if the self-expression made possible by the Internet was custom-made for rebellious teenagers. "Adolescence is a time when one needs to assert oneself and rebel against adults, the established order, society etc... Alternative theories are therefore a wonderful area of self-expression for them," he said.
"The explosion of social networks has seen what would once have been classroom discussions take place on Twitter, Snapchat or Instagram," he added. Olivier Ertzscheid, a lecturer in information science in the western city of Nantes, noted that established media such as the daily Le Monde responded fairly quickly on social networks with counter-arguments knocking down the various conspiracy theories. Speed was of the essence if a balanced picture was to emerge, he said.
The adamantly homophobic Westboro Baptist Church appears to have had its main Twitter account suspended.
12/1/2015- The church, known worldwide for its “God Hates Fags” slogan, and which regularly pickets the funerals of US servicepeople and pop stars, had the account @WBCSays suspended today. It is unclear why the account has been suspended now, given the vile and offensive tweets regularly sent by WBC. The church’s other Twitter accounts, @WBCSigns and @WBCSaysRepent, remain active but have not mentioned why the main account has been shut down.
Update: According to @WBCShirl2, the account was suspended because the church picketed Twitter last year: "Dear @pinknews plz keep up. WBC picketed @Twitter in August. Their response: steal @WBCsays & @WBCShirl #OldNews #CostOfPreachingTruth." The church recently protested a Christian conference over the performance of openly gay singer Vicky Beeching. The church also recently filed a strange legal brief in the state of Kansas, demanding the right to challenge same-sex marriage. A police officer was derided by the Westboro Baptist Church after sharing a kiss with his boyfriend in front of their protest.
© Pink News
Can you really click away a political movement?
11/1/2015- Protests against an anti-immigration movement are spilling from Germany's streets to social media with bloggers calling for people to unfriend Facebook contacts if they "like" the Patriotic Europeans against the Islamisation of the West (Pegida) movement. Blogger Marc Ehrich has promoted a tool that allows you to check whether someone has liked a Facebook page. "In April I saw some guys sharing these individual links on their timeline so I thought I would write about it," he said. "I wanted to provoke a little and start some interesting discussions. At first it was just a list with some music bands that I thought would be funny or amusing for people to find out about, and then I added the anti-euro party AfD and the neo-Nazi NPD party. "Of course I wouldn't say to someone, 'hey unfriend this guy because he likes [the singer] Helene Fischer.' But when it comes to AfD and NPD I wanted people to really think about the likes of their friends."
In December he tweaked the tool to include a Pegida checker because he was annoyed with their supporters. The blog post immediately went viral. Despite the prominent "unfriend me" title at the top of the page, he says the tool wasn't only meant to be used to drop contacts. But he's unremorseful if that's what people choose to do. "I heard arguments like, 'Hey, I am following Pegida because I want to be informed.' My answer to that is Facebook 'likes' are a kind of currency. The more likes a site has, the more attention it gets, but you can follow without liking." The discreet nature of unfriending means it's hard to measure how widespread this trend actually is, but the idea does seem to be taking off. "The unfriending campaign is pretty big here, I think everybody's aware of it," said Berlin-based social media writer Torsten Muller.
"I'm not sure it will achieve very much beyond stopping people with different views from talking, but maybe it has raised awareness that there are many people who feel strongly against Pegida." Münster-based politics teacher Marina Weisband saw the unfriending blog appear several times in her newsfeed and clicked the link. It turned out she only had one Pegida-liking friend. "He was an old school mate, who joined the police force straight from school I think," she said. "I didn't try to engage him in conversation because he's not a close friend. If he was I might have tried to talk to him, but he wasn't so ..." She's fully aware of the downsides of unfriending people with alternative viewpoints, namely narrowing the conversation and removing the chance for them to be influenced by more moderate views. But for her, the personal connection wasn't there to justify angsting over.
"Pegida is a sensitive topic, but I do think it's important for people to see they don't come from the centre and their views aren't widely accepted. They probably think, 'hey, we're just normal people with family and friends' but that's not actually the case, and maybe they will see that if they start to lose connections." Marina wasn't the only one to respond to the unfriending call. "I have [deleted friends] in self defence, because I caught myself in very unpleasant discussions with him or his 'friends'," one of her friends Ralph Pache said in response to her unfriending thread.
Not everyone is convinced by the strategy though. Christoph Schott is Germany's head of e-campaigns at Avaaz, a global civic organisation that promotes activism. He says the divisive nature of the unfriending campaign worries him. "I feel like it's not the right way to go about things. Pegida is making a big split in Germany and at hard times like this, with what is happening with Charlie Hebdo in France, we don't want to be divided here, we need to face these threats together. "We exist both online and offline, so we can protest on the street and on social media. Unfriending is just one social media campaign but there have been online petitions too. "At Avaaz we've just started Mit Dir to show how united and colourful we are." The idea is for Germans to upload pictures and memes and also post photos of themselves in Germany with someone from another country, race or religion. "Amid this political storm, we're trying to create a love storm," Schott says. "The question of how you resolve this split appearing in our society is a big issue for us but we can only solve it together," he adds.
Blog by Sitala Peek
© BBC News
Pressure is on Twitter and Facebook to remove Islamophobic and racist comments
6/1/2015- The Attorney General has increased the pressure on Twitter and Facebook to remove Islamophobic and racist comments from their websites. Jeremy Wright said he wanted to meet social media companies to urge them to take down bigoted messages – and to warn that they could be prosecuted if they refused. His intervention came after The Independent disclosed last week that social media companies were refusing to take down hundreds of inflammatory postings from their sites. The number of postings, some of which accuse Muslims of being rapists, paedophiles and comparable to cancer, has increased significantly in recent months. Mr Wright signalled his dismay over the proliferation of racist messages and said he was “very happy” to meet the companies to discuss steps to tackle them.
Keith Vaz, the chairman of the Commons home affairs select committee, asked him: “Do you share my concern about the increase in Islamophobia and racism on sites such as Facebook and Twitter, and the inability of site owners to take the postings down? Will you have a meeting with the companies concerned to urge them to take down these postings, rather than face prosecution?” Mr Wright responded: “Yes, I do share that concern, and I’m very happy to meet with social media providers and others to discuss what more we can do. “As you have said – and as I’m sure the House in general agrees – it’s important that everyone understands that social media is not a space where one can act with impunity and that the social media providers, and all those who use social media, need to understand clearly that criminal law applies here.”
© The Independent
Legal watchdog wants public help on possible new laws
5/1/2015- Ireland’s top legal watchdog is asking for public opinion on whether a new crime of cyber-bullying should be introduced. Members of the public have another two weeks to respond to the independent Law Reform Commission’s (LRC) public consultation on cyberbullying, privacy and reputation. The Commission’s job is to advise the government on legislation and plans to issue recommendations for criminal law and civil remedies including “take-down” orders. The consultation asks whether people believe the harassment offence in section 10 of the Non-Fatal Offences Against the Person Act 1997 should be amended to incorporate a specific reference to cyber-harassment.
Specifically, it wants to know if “interfering with another person’s privacy” through “cyber technology” should be a criminal offence. The consultation document points out that “the breach of privacy would have to be more serious than just causing embarrassment to the victim. There would have to be significant humiliation involved not matched by a public interest in having the information published.” It seems that the leak of celebrity selfies last year was one of the motivations for the consultation as the document mentions J-Law and other “well-known personalities” whose images were distributed online after their iCloud service was hacked. Computer hacking is already an offence in Ireland under the Criminal Damage Act 1991.
However the legal boffins are still worried that current law on hate crime, harassment, etc does not adequately address activity that uses cyber technology and social media, such as so-called “revenge porn” and “fraping” (amending a person's Facebook profile or other social media profile). According to the LRC, individuals online “may feel disconnected from their behaviour” as it occurs at a distance from the victim. “This sense of disconnection is increased by the anonymity frequently involved in online communications and may prompt individuals to act in a manner they would not in the offline world,” it says.
Harassment laws include the element of repetition or persistence of an offence, but the searchability of the web means that damaging content can survive long after the event and can be used to re-victimise the target each time it is accessed. For this reason it is difficult to determine if some forms of cyber stalking fall under current laws. Setting up harmful websites or fake profile pages on social networking sites, in order to impersonate the victim and post harmful or private content in the victim’s name is another area the consultation asks respondents to consider.
Submissions must be made by 19 January 2015.
© The Register
Number of postings, some of which accuse Muslims of being rapists, paedophiles and comparable to cancer, has increased significantly.
2/1/2015- Twitter and Facebook are refusing to take down hundreds of inflammatory Islamophobic postings from across their sites despite being alerted to the content by anti-racism groups, an investigation by The Independent has established. The number of postings, some of which accuse Muslims of being rapists, paedophiles and comparable to cancer, has increased significantly in recent months in the aftermath of the Rotherham sex-abuse scandal and the murder of British hostages held by Isis. The most extreme call for the execution of British Muslims – but in most cases those behind the abuse have not had their accounts suspended or the posts removed.
Facebook said it had to “strike the right balance” between freedom of expression and maintaining “a safe and trusted environment” but would remove any content reported to it that “directly attacks others based on their race”. Twitter said it reviews all content that is reported for breaking its rules which prohibit specific threats of violence. Over the past four months Muslim groups have been attempting to compile details of online abuse and report it to Twitter and Facebook. They have brought dozens of accounts and hundreds of messages to the attention of the social-media companies. But despite this most of the accounts reported are still easily accessible. On New Year’s Eve the author of one of the accounts reported wrote: “If whites had groomed only paki girls 1 It would be a race hate crime. 2 There would be riots from all Muslim dogs.”
Other examples of extremist postings on Twitter include:
*A user posted an image of a girl with a noose around her neck with the caption: “6 per cent of white British girls will become sex slaves to the Islamic slave trade in Britain”.
*A tweet which reads: “Should have lost World War Two. Your daughters would be getting impregnated by handsome blond Germans instead of Pakistani goat herders. Good job Britain.”
*On Facebook a posting in response to the beheading of Westerners in Syria is also still easily accessible despite being reported to the company weeks ago. It reads: “For every person beheaded by these sick savages we should drag 10 off the streets and behead them, film it and put it online. For every child they cut in half … we cut one of their children in half. An eye for an eye.”
When the comments were reported, Facebook said that they did not breach the organisation’s guidelines.
Fiyaz Mughal, director of Faith Matters, an interfaith organisation which runs a helpline called Tell MAMA, for victims of anti-Muslim violence, said he was disappointed by the attitude of both firms. “It is morally unacceptable that social media platforms like Facebook and Twitter, which are vast profit-making companies, socially engineer what is right and wrong to say in our society when they leave up inflammatory, highly socially divisive and openly bigoted views,” he said. “These platforms have inserted themselves into our social fabric to make profit and cannot sit idly by and shape our futures based on ‘terms and conditions’ that are not fit for purpose.” Mr Mughal said that Tell MAMA regularly received reports of anti-Muslim rhetoric and hate from concerned Facebook and Twitter users. He added that the far-right group Britain First relied on Facebook to organise, campaign and misinform followers about Islam and Muslims.
The rise in online abuse would appear to mirror a rise in hate attacks during the past year. In October the Metropolitan Police released figures to show hate crime against Muslims in London had risen by 65 per cent over the previous 12 months. Latest figures also suggest that, nationally, anti-Muslim hate crime has risen sharply following the murder of Lee Rigby in 2013. One man, Eric King, was recently given a suspended sentence for sending a local mosque a picture smeared with dog excrement depicting Mohamed having sex with a pig. However his Facebook account, which he used to send abusive messages to the same mosque, is still active and promoting anti-Muslim hatred. Mr Mughal added that social media platforms needed to make their content management procedures stricter. “If users were to express such unacceptable opinions about ‘shooting’ Black British citizens or discussed Jews as a ‘cancer’, their speech would not be legal. The same protections should be forwarded to references to the Muslim community,” he said.
In a statement Facebook said it had a clear policy for deciding what was and what was not acceptable freedom of speech. “We take hate speech seriously and remove any content reported to us that directly attacks others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition,” said a spokeswoman. “With a diverse global community of more than a billion people, we occasionally see people post content which, whilst not against our rules, some people may find offensive. By working with community groups like Faith Matters, we aim to show people the power of counter speech and, in doing so, strike the right balance between giving people the freedom to express themselves and maintaining a safe and trusted environment.” A Twitter spokesman said: “We review all reported content against our rules, which prohibit targeted abuse and direct, specific threats of violence against others.”
© The Independent
2/1/2015- The growth of social media has seen online bullying grow. The internet has also been a place where racism seems to fester. Some of that is likely due to the anonymity that some feel comes from posting online. In some cases, comments on social media articles allow the people making those comments to be seen by their friends, family and co-workers for who they really are, rather than who they would like their associates to believe they are. Racist comments and bullying posts appear to generate the most frustration, and un-friending, online.
Americans on social media consider racist (29%) and bullying (21%) posts the most offensive, while less than one in ten find sexist (8%), homophobic (8%) or overly personal posts (9%) to be the most personally offensive. A scant 5% were most incensed by political posts. Parents of younger children (under the age of 18) are no more likely to find bullying posts the most offensive compared to those without younger children, but they are more likely (34% compared to 27%) to be offended by overtly racist comments. Women (11%) are more likely to be most offended by sexist posts, compared to only 4% of men. Racist and bullying posts are the most likely to get a social media member permanently disconnected by other community members. 33% of social media users claim that they would un-friend or un-follow somebody guilty of racism; this number is consistent amongst both whites (35%) and blacks (35%), although more than half of black social media users (53%) found racist posts the most offensive. 31% would discontinue a connection after a bullying comment, and 28% after homophobic comments.
Although political remarks are rarely likely to cause social media users to un-friend or un-follow – only 7% said they would pull the plug on an online friendship after a political comment – they do inspire the most rejoinders. 41% said they would likely comment or reply to a political post. A bullying post is also likely to arouse comment; 34% thought they would comment or reply. The most common reaction to offensive posts is simply to ignore them. This number is highest for overly personal (57%), sexist (52%) or homophobic (52%) posts. People are most inclined to talk offline about overly personal posts (16%) or bullying (15%).
© Net News Ledger
An online petition protesting the German right-wing movement PEGIDA has received thousands of signatures. It follows the group's regular protests against Islam and immigration, which are growing in numbers.
27/12/2014- More than 65,000 people have signed an internet petition against the right-wing PEGIDA movement since it was established on Christmas Eve. The signatures are being collected on change.org, with its organizer aiming to reach 1 million. PEGIDA was formed in October in response to growing sentiment within Germany against immigration and Islam, with its protests particularly focused on the eastern German city of Dresden. The group's name loosely translates to "Patriotic Europeans Against the Islamization of the West." The latest protest in Dresden on Monday drew a record 17,500 people. However, resistance to the movement is growing, with thousands joining counter-demonstrations.
"Now is the time to profess that the phrase 'We are the people!', regardless of origin, color, religion or whatever, has been and must continue to hold true," organizer Karl Lempert said. The petition follows German President Joachim Gauck's Christmas speech on Wednesday, in which he urged tolerance and openness in accepting refugees. Gauck did not mention PEGIDA by name but said he believed such sentiments were in the minority. "The fact that the great majority of us do not follow those who want to seal off Germany has been truly encouraging for me this year," said Gauck. He added that solutions to wider problems could not be found "with eyes full of fear."
© The Deutsche Welle.
25/12/2014- Finally, India seems to have woken up to the pitfalls and dangers of cyber-crimes that are hitting at both country's infrastructure and also at individual levels. Union home minister Rajnath Singh on Wednesday announced constituting a five-member expert group of leading academicians and professionals to prepare a roadmap for effectively tackling the menace.
What are the challenges for country's cyber environment?
India, a fast-growing economy, is highly susceptible and vulnerable to international and domestic cyber attacks. There has been an almost 40 per cent annual increase in cyber-crimes registered in the country during the past 2-3 years. Phishing and ransomware are new threats in the virtual world of computing devices that can rob individuals of their money in ingenious ways. India is considered to be the ransomware capital of Asia Pacific, with 11 per cent of victims of this form of virtual extortion. In addition to ransomware, 56 per cent of cybercrime victims in India have experienced online bullying, online stalking, online hate crime or other forms of online harassment. Besides, highly secure government websites that hold vital data and secrets have also become prone to cyber attacks. In the recent past, even the websites of the PMO and defence ministry have been hacked by hackers based in China and Pakistan.
What is the current status of Cyber security in India?
India is among the worst-performing countries in terms of cyber security. It does not have a robust system of firewalls, nor a body of cutting-edge professionals who can prevent serious cyber attacks. The cyber crime cells, even in the metros, are at a nascent stage in cracking cases as they do not have experts and the requisite software. To thwart cyber attacks, India depends largely on CERT-In (the computer emergency response team) under the department of electronics and information technology (DeitY), which is itself nowhere compared to those of developed countries. Even the top-level intelligence and security agencies have to take outside professional help in times of exigency.
What has the expert group been asked to do?
The expert group has been asked to prepare a roadmap for effectively tackling cybercrime in the country and to give suitable recommendations on all its facets. It has also been asked to recommend possible partnerships with the public and private sectors, NGOs and international bodies, and to suggest any other special measures and steps to tackle cybercrime.
Who all constitute the expert group?
The five-member Expert Study Group comprises the director general of CDAC (Pune), Dr Rajat Moona; Professor Krishnan of the Indian Institute of Science (Bengaluru); the director general of CERT-In, Dr Gulshan Rai; professor of computer science at IIT Kanpur, Dr Manindra Aggarwal; and Professor Dr D Dass of IIIT Bengaluru. Joint secretary of the Centre-State division in the union home ministry, Kumar Alok, will be the convenor.
© DNA India
A gatekeeper such as Steam has responsibilities. But it also must be reliable and predictable, and this is anything but
16/12/2014- At first glance, the removal of mass-murder simulator Hatred from Valve’s Steam digital distribution platform seems like a rare example of corporate responsibility. While “mass-murder simulator” sounds like a tabloidism, the sort of description preachy moralists give to games like Grand Theft Auto, it’s an accurate description of Hatred. Produced by developers linked to Polish far-right groups, the game is explicitly and solely about setting out to shoot innocent people. With an aesthetic that emphasises the brutality of the player’s actions, it is a thoroughly nasty concept. So news that Valve had removed the game from Greenlight, the main entry point for indie games on to its Steam store, was greeted by many with relief. The company told Eurogamer that “based on what we’ve seen on Greenlight we would not publish Hatred on Steam. As such we’ll be taking it down.”
In a world where a harassed politician has to fight to get explicitly antisemitic abuse removed from Twitter, it’s refreshing to see a company act quickly to remove hate. But there’s something about the seeming capriciousness with which Valve made the decision to pull Hatred that makes me uncomfortable. The company has an undeniable level of power in the PC gaming space. Last year, it controlled an estimated three quarters of the global market for digital PC games, a market which is itself 92% of the overall market for PC games. That proportion seems likely to have gone up since, and with the launch of SteamOS, there are now a few customers who have no choice but to buy their PC games from the store.
For a company wielding that level of power over a creative medium, Valve owes more explanation of its process than a two-sentence statement. And for any developer wanting to stay on the right side of what the company would publish on Steam, its entire content guidelines are given in one sentence in its FAQ:
Your game must not contain offensive material or violate copyright or intellectual property rights.
Hatred isn’t the first game to have been pulled from Steam with scant explanation. In 2012, sex game Seduce Me was taken off Steam, and the company’s spokesperson told Kotaku that “Steam has never been a leading destination for erotic material. Greenlight doesn’t aim to change that.” Of course, if your erotic material is presented through the lens of a triple-A game - like GTA V’s strip clubs - you can be sure that Steam will happily accommodate your product. And if your mass-murder simulator is made in a slightly more parodic fashion, as with 2011’s Postal 3, it will still be welcome on Steam as well.
It’s not just Valve’s platform. Apple’s App Store has the same problems. On the one hand, it has a far more comprehensive set of guidelines, which at least allow developers a bit of help in working out what games will be accepted into the store; on the other hand, those guidelines are applied with wild inconsistency. The critically acclaimed Papers Please, for instance, was forced to self-censor after breaching a rule about “pornographic material”, which Apple defined as “explicit descriptions or displays of sexual organs or activities intended to stimulate erotic rather than aesthetic or emotional feelings”. The game’s depiction of full-body scanners in use at the border controls of an authoritarian state is distressing and evocative, but fairly far from stimulating erotic feelings. Nonetheless, Apple put its foot down.
The internet is increasingly controlled by a few powerful gatekeepers. Steam, Apple’s App Store, Google search and Facebook’s news feed all have a level of concentrated power over our cultural and social discourse that has rarely been seen in history. Against that background, old canards about companies having the right to stock what they want are increasingly worn out. Nation states evolved a justice system, and rule of law, precisely to exercise their power responsibly, fairly and predictably. But in far too many situations, the best response companies can provide artists wanting to know if they are going to be censored is “wait and see”. And if they fall foul of an unknown rule, there’s no jury, no appeals process, and rarely any explanation.
In the case of Hatred, it’s hard not to feel that Valve made the wrong decision. That’s not because there’s nothing objectionable in the game, but because a ban plays into the developers’ hands. Their game had already been cynically marketed to supporters of the Gamergate campaign as something that “social justice warriors” would hate, to the extent that fans were asking for downloadable content which would add women like Anita Sarkeesian into the game as murder targets. Removing it from the Steam Store just plays into that image as “the game they tried to stop”. What’s more, the opacity with which Valve makes its decisions means that there is nothing to point to that counters that impression. Rather than letting a cynical game die in obscurity, it’s now poised to become the forerunner of the 21st century equivalent of video nasties.
© The Guardian
18/12/2014- According to an investigation by the Southern Poverty Law Center, Apple’s iTunes service, along with Amazon and Spotify, sell music produced by hate groups. Even though Apple is the only company to report taking any action in response to this claim, it only removed some of the hate music being distributed through its service.
H8Machine is an American right-wing rock and hatecore band from New Jersey that plays white power rock music. They were the head of the U.S. hatecore movement and are close to the Hammer Skins, whose emblems they point to in various recordings. They have 50 songs available on iTunes. Lyrics from their song “Bound for Glory” read: “Dreadlocked bastard, gangster rapper/Raping white women is all that he’s after/You’ve fooled the rest, but I know what you’re about/Your threats will fall silent when you’re the body count.”
Brutal Attack/Ken McLellan
McLellan’s group has been labeled by the Anti-Defamation League as “one of the oldest hate bands in continuous existence,” according to SPIN. The self-described “white power” anthems have lines like “This is the final solution/ Our turn/They’ll burn.” They have five songs available for download. Their music is also sold through NSM88 records, which is owned by the National Socialist Movement, the largest neo-Nazi group in the U.S.
Arghoslent is an American death metal band formed in the summer of 1990. The band’s lyrics have been the source of much controversy for advocating racism, particularly promoting white supremacist views about the Trans-Atlantic Slave Trade and the Holocaust. The group has been criticized for its racist ideology.
Blood Red Eagle
This band has more than 30 songs available on iTunes. Their albums are on the list of Nazi/white power bands sold by the National Socialist Movement.
The name of the band is the symbol for the swastika. The National Socialist/Pagan metal band’s lyrical themes are Aryanism and Anti-Christian War. They have more than 20 songs available on iTunes.
Geimhre
The band Geimhre (the name is Goidelic for "winter") was founded in 2001 as a satanic black metal band. Soon its ideology began to show Nazi influences. The themes of its lyrics covered hate and Aryan nationalism.
Ian Stuart
Stuart was best known for starting the band Skrewdriver. His band evolved into one of the first neo-Nazi rock bands, with the “White Power” single in 1983, which was followed by the second album “Hail the New Dawn” in 1984. They became one of the most prominent white power bands. Stuart has 104 songs available on iTunes.
No Remorse was a British neo-Nazi rock band. They were associated with Ian Stuart’s Blood & Honour group. The group has five songs on iTunes.
Spear of Longinus
The raw occult National Socialist black metal band from Brisbane, Queensland, in Australia uses Nazi influences in some of their lyrics. Their album “Nada Brahma” is on iTunes.
Youth Defense League
Youth Defense League was an Oi!/New York Hardcore band formed in 1986. The band was featured in the Revelation Records compilation album “New York City Hardcore,” which featured several NYHC bands, including Sick of It All and Gorilla Biscuits. They were one of the most well-known skinhead bands from the U.S. and encouraged nationalism and white power. They have one song on iTunes.
© Atlanta Black Star
16/12/2014- The Australian Defence League (ADL) are threatening to ignite anti-Islamic prejudices in the wake of Monday’s siege in Sydney. The group posted the following message on Facebook:
@nswpolice please protect people from these pathetic monsters of the Australian Defence League. https://twitter.com/BoyCalledAnn/status/544341055319986177/photo/1 pic.twitter.com/iVG3u65Lo3
The Lakemba suburb is home to one of the largest mosques in Australia and is often portrayed in the media as being home to a predominantly Arab and Muslim population. However, people around the world have retaliated to this call by creating the hashtag #illridewithyou on Twitter, an expression of solidarity against Islamophobia. Dozens of people were held captive by the gunman, revealed to be self-styled, radical cleric Man Haron Monis, in a cafe in Sydney’s busy shopping district. Monis made some of the hostages hold up an Islamic flag, provoking an outcry against Islam from the far-right ADL - an offshoot of the violent English Defence League - who have taken to Facebook to express their views under the motto “ban Islam”. "Here it is folks,” reads a post on the group's Facebook page, “homegrown Islamic terrorism in our backyard, courtesy of successive Australian governments and their brainwashed voters.”
When labelled as racist by the Australian Channel 10 news station, the group replied on Facebook: “To Channel 10, you call us racists and get an Islamist to spew your left wing bigotry. Go to an Islamic country and see how you fair.” Earlier today Ralph Cerminarra, the president of the ADL, had to be escorted away by police from near the scene of the siege after he began shouting abuse. “Half the reason we've got this problem today is because of left wing bigots,” he yelled angrily before being dragged away by police. “These people may be murdered because of your left wing bigotry... It's finally happened," he continued. The hashtag #illridewithyou started trending on Twitter shortly after the siege began, as people offered to meet Muslims at their local bus and train stations and ride with them on their journeys, as a safeguard against possible retaliatory attacks.
It’s thought to have been inspired by a young Sydney woman’s post in which she described encountering a Muslim woman on Monday:
"The (presumably) Muslim woman sitting next to me on the train silently removes her hijab. I ran after her at the train station. I said 'put it back on. I'll walk with u' [sic]. She started to cry and hugged me for about a minute - then walked off alone.”
According to Twitter Australia, over 40,000 people used the hashtag within the first two hours of it being created, and this number has now reached over 120,000 as people urged solidarity and support for Muslims, fearing a backlash due to the events on Monday.
Australia's top Muslim cleric Ibrahim Abu Mohamed also issued a statement condemning the hostage siege:
"The Grand Mufti and the Australian National Imam Council condemn this criminal act unequivocally and reiterate that such actions are denounced in part and in whole in Islam," Mohamed said.
Muslim leaders across Australia have also denounced Monis’ actions. "We reject any attempt to take the innocent life of any human being, or to instill fear and terror into their hearts," they said in a statement on behalf of almost 50 prominent organisations within the Muslim community. The 16-hour siege is now over after armed police stormed the building. There have been reports of loud bangs and gun fire from the scene. Paramedics were seen carrying some of the hostages out on stretchers, and there have been unconfirmed reports that two people have died, whilst three are badly injured.
The ADL have yet to update their Facebook page in light of the recent developments but there is now a Change.org petition calling for all ADL pages to be shut down. The English Defence League (EDL) have also been posting on their Facebook page following the events in Australia. The group have railed against The Guardian newspaper, which they label a “leftist rag”, and mocked the UK prime minister for defending Islam as a peaceful religion. They have yet to respond to Newsweek’s request for comment.
15/12/2014- Dutch privacy watchdog CBP is threatening internet giant Google with a fine of up to €15m for contravening Dutch privacy legislation. Since 2012, Google has been combining information about users from Gmail, Google maps, YouTube and search results into a single profile. This, broadcaster Nos points out, allows the company to offer more targeted advertising. However, the CBP says Google is not informing users properly about its actions or asking them permission. This, the CBP says, contra-venes Dutch law. Privacy regulators in Britain, Germany, Spain and Italy are also taking action, the CBP says. ‘Google continues to make great, innovative, happy products but don’t fool us by collecting our personal info behind our backs,’ Dutch watchdog chief Jacob Kohnstamm told Radio 1. Google said it is disappointed in the CBP’s reaction and that it has recently made a string of proposals to European privacy regulators. ‘We are looking forward to discussing them in the short term,’ a spokesman said.
© The Dutch News
The Ferguson battleground shifts to the virtual world, and people are losing their jobs
14/12/2014- Have you posted a status about the Ferguson riots and race relations in America? If you have, your comments, racist or otherwise, could cost you your job.
As unrest on the streets of Ferguson dissipates, a new battlefront is opening up on social networks. Social media users are taking comments they deem offensive and forwarding them to the employers of the offending Facebookers. Vocativ picked up on what might be a burgeoning trend when the administrator of “Ferguson/Saint Louis Riot Updates,” a 70,000-strong Facebook page created to keep the community updated on riots and civil unrest, posted a message he received. The unedited message from Jackie Williams: “Thanks to your racist fb page….I’ve gotten at least 2 ppl fired from their job by screen shotting their racist comments and emailing them to the companies they work for. One was a 10 year employee at Anheuser Busch.”
The page’s administrator claims he revealed the message to encourage the page’s followers to enhance the privacy settings on their profiles. Outrage ensued. Hundreds of the page’s followers suggested that the self-anointed whistle-blower was just as racist as those whose jobs she jeopardized, and they have started to turn her own technique against her. Several people identified Easter Seals Midwest, an NGO that helps people with developmental disabilities, as her place of work, and began posting comments on the page. Her address and work phone number were also shared on the page.
Some of the recent reviews posted to Easter Seals Midwest regarding Jackie Williams.
Megan McClintock Malloy posted: “It’s sad that a company such as Easter Seals employs such racist people. People who have no care at all for fellow citizens. Is this what you want your company known for? If so I’ll spread the word!”
Easter Seals Midwest responded to the complainants that Williams’ views “do not reflect the views of the organization and plan to investigate this situation.” Then the administrator of “Ferguson/Saint Louis Riot Updates” announced that he had scheduled the page to be deleted in an effort to prevent further exposure of the followers’ racist statuses. It’s not the only page involved, however. Another user said she was calling out similar slurs on the “Justice for Mike Brown” page, and had already forwarded the comments of one worker to his employer, FedEx.
9/12/2014- On the evening of December 9th, as International Human Rights Day grew close, The Hon Paul Fletcher MP pressed the big red button to launch 'Fight Against Hate', a new reporting tool created by Australia's Online Hate Prevention Institute (OHPI). Mr Fletcher, who has headed the Australian Government's push on online safety, praised the new tool and the change it will make to efforts to combat online hate and the harm it can cause, particularly to children.
The launch event also featured a panel discussion with representatives of some of the groups regularly subjected to attacks online. The panel included Talitha Stone, whose campaign against US rapper Tyler the Creator saw her subjected to thousands of death and rape threats. Also on the panel was Julie Nathan, the Executive Council of Australian Jewry's Research Officer and the author of the 2014 report into antisemitism in Australia. Representatives of the Indigenous Australian community, the Muslim community, the peak body for ethnic communities, and the peak body representing parents were also on the panel.
The software, with which people can now register a free account at http://fightagainsthate.com, allows members of the public to report online content that contains a wide variety of hate speech. The software currently handles reports of content on Facebook, YouTube and Twitter, and the hate can take the form of antisemitism, anti-Muslim hate, misogyny, racism against Indigenous Australians, homophobia, cyber-bullying and other forms of hate. People using the software are asked to first report the content to social media companies directly, then to report it through the software so that the response of the social media companies can be reviewed and measured. So far 35 organisations have signed up as supporters of the software, and these supporters, and more which are in the pipeline, including from Government, will be able to access the content the public reports through the system.
“This system will empower people and ensure the time they put into reporting online hate is not wasted. Even if a platform provider rejects their complaint initially, once the item is in Fight Against Hate, human rights organisations, government agencies, or the media may choose to follow up on that item. Rather than being forgotten, online hate that is not resolved may be seen as a failure of self-regulation by the social media companies. The longer the incident stays unresolved, the greater the failure. The new system will empower not only the public, but key stakeholders like governments as well,” Dr Andre Oboler, CEO of the Online Hate Prevention Institute, explained.
Jeremy Jones AM, a co-chair of the Global Forum to Combat Antisemitism explained that the software had the support of the Global Forum and the Israeli Government. He explained that a report into the antisemitic data gathered through the system will be released at the Global Forum to Combat Antisemitism in Jerusalem in May 2015. The launch event was attended by 90 people from a wide range of community organisations, human rights organisations, government agencies and members of the public. With Fight Against Hate now live, the next challenge is building up a sufficiently large user base of people reporting online hate.
© The Online Hate Prevention Institute
Racism is alive and on sale through online retailers who have yet to remove racist and offensive content from their sites.
11/12/2014- Racist hate music is more about influence than making money. The Intelligence Report, by the Southern Poverty Law Center, says that the racist music business was a multimillion-dollar industry in the 1990s. The genre also doubles as a recruiting tool. In 1999, the National Alliance, formerly the most prominent neo-Nazi organization in the U.S., bought Resistance Records and “were selling more than 70,000 CDs annually by the early 2000s.” Even though the sale of physical copies has slowed, the SPLC says that iTunes and other distributors have provided a “new and unprecedented tool to effectively distribute hate music.” An investigation into hate music by the SPLC revealed that as of September 2014, there were 54 “white power” bands with songs being sold on iTunes. After the report was released, Apple removed only 30 of the groups as of Wednesday, according to The Daily Beast.
According to the SPLC report, iTunes’ “Submission to the iTunes service” says that submitted materials “shall not infringe or violate the rights of any other party or violate any laws, contribute to or encourage infringing or otherwise unlawful conduct.” Despite the policy, songs like “Jigrun” by the Bully Boys were being sold on iTunes. Part of the song says, “We’re going on the town tonight / Hit and run / Let’s have some fun / We’ve got jigaboos on the run / And they fear the setting sun.” The mainstream media caught on to the influence of hate music after Wade Page, the white supremacist who killed six people at a Sikh temple in Wisconsin, was known to have played in a few “white power” bands, according to The Daily Beast.
At least Apple took some action. Amazon and Spotify still allow the hate music to be sold and purchased. Bands such as Skrewdriver, Max Resist and Brutal Attack are available for download on Amazon even though its policies claim that offensive products are prohibited from its site. Spotify bases its removal of content on Germany's Federal Department for Media Harmful to Young Persons. “We take this very seriously…We’re a global company, so we use the BPjM [Bundesprüfstelle für jugendgefährdende Medien/Germany’s Federal Department for Media Harmful to Young Persons] index as a global standard for these issues,” Spotify said in an e-mail to The Daily Beast. Content not removed by the index is handled on a “case by case basis,” the company added. As of Monday, Spotify hadn’t removed any hate music.
© Atlanta Black Star
One of the World's Largest Internet Companies is Promoting Anti-Semitic Site Veterans News Now
5/12/2014- Yahoo is one of the most visited sites on the internet. How fortunate that is for Debbie Meron, an old-school anti-Semite whose hate site Veterans News Now has been promoted on Yahoo's front page several times in recent weeks. Let's jump straight to the substantiation for that last sentence, because it should go without saying that if its three key points are all true — in other words, if Yahoo considers Veterans News Now (VNN) a legitimate news source and prominently features it on its front page; if Veterans News Now is in fact an extremist site; and if VNN is run by a fanatical Jew-hater — then Yahoo has a serious problem, which it must quickly remedy. CAMERA has recently received several complaints from readers shocked to see VNN articles promoted on Yahoo and Yahoo News. The following image, a screenshot of the Yahoo homepage on Dec. 4, proves point number one: Yahoo does treat VNN as a legitimate news site and, at least for some readers, gives it one of the most coveted spots on the internet.
The second point, too, is easy enough to substantiate. Is Veterans News Now a site that peddles hate and conspiracy? If Holocaust denial and 9/11 trutherism fit the bill, it clearly is. One recent article on VNN, for example, rails indignantly at a commentator whose crime was to describe the Holocaust as "a horrific genocide":
Recently, Abby Martin, the host of "Breaking the Set" on the Russia Today network, released two segments on the subjects of the Nazis and the "holocaust," an event which she described as "a horrific genocide that forever changed the world." One wonders why Martin – like her compatriots in the Zionist-dominated Hollywood establishment — places exceptional status on the "holocaust" when in fact a far greater number of non-Jews — particularly Germans, Russians and Chinese — perished during the Second World War than even the highest exaggerations of the sacred Shoah.
It only gets worse from there. About Auschwitz, the author approvingly states that "some historians estimate less than 100,000 people died in that camp, primarily from disease and starvation caused by Allied bombing." He continues:
The camp's true purpose bares little resemblance to the picture painted in Hollywood movies and mainstream history books. It is an irrefutable fact that Auschwitz had facilities one would never expect to find in a bona fide "death factory," such as a swimming pool, a soccer pitch, a theater, a library, a post office, a hospital, dental facilities, kitchens and so on. Inmates were encouraged to participate in orchestras, theater productions, soccer matches and other cultural and leisure activities.
The takeaway: Auschwitz was a "labour camp," not a death camp. The gas chambers did not exist. Nazis used Zyklon B to save Jewish lives, not extinguish them. The claim of 6 million Jewish victims is a hoax. The Jews are largely responsible for communist Russia and its crimes. And of course, "the media's obsession with the holocaust is part and parcel of the Zionist campaign to cast a spell over the collective consciousness of the Western world in order to desensitize the public to the suffering of the Palestinians and shield Israel from criticism."
VNN also specializes in conspiracy theories about the 9/11 attacks. One piece explicitly backs "the rising alternative thesis within the community of truth seekers, which claim that the masterminds were a Zionist network close to the Israeli Likud." Another argues that Israel is responsible not only for 9/11, but also for the anthrax attacks that followed. Yet another piece, which documents what the author calls "a textbook Zionist mind-control twist," purports to prove Israeli responsibility for 9/11 in this way:
Actions trump lies. Evidence does not lie . . . so how has the American public been so brainwashed by lies, in light of so much evidence? Are Zionists that intelligent, or is the American public that unintelligent—and how did even that obvious question become a "third-rail issue"? Totally uncool, our tradition of being outsmarted by Zionists even to the point of "Rothschilding" our descendants' future. Is it possible for the American public to think their way out of Zionist enslavement . . . or is Gaza a preview of our future? … 9/11 was trademark Zionist false-flag testing of what they might get away with, a pushing of boundaries that, magically, stayed in bounds. Zionists third-rail magicians still brag about 9/11.
Got that? Good. Then on to the third point: Could it be that Yahoo links to VNN because, problematic as the site might be, it is run by a credible, ethical journalist? Is it possible that these unhinged articles (and the many other similar ones on the site) were posted without the knowledge of VNN's editor-in-chief Debbie Meron? It is certain that Meron knows about the Holocaust denial article mentioned above — she weighed in about the piece in its comments section:
Nor are the 9/11 conspiracy theories outliers. One of the pieces cited above is a currently featured item on the VNN home page, and the site's section dedicated to 9/11 "truth" is always one click away as one of the handful of topics on the menu bar at the top of every page. In fact, Meron seems to be a perfect fit for the hate site. The piece to which Yahoo linked on Dec. 4 was an article by Phil Weiss, which had originally appeared on his anti-Israel site Mondoweiss but was republished by Meron on both VNN and her other website, My Catbird Seat. In the comment section under the latter reposting, Meron fantasized about the Nazi Waffen SS "rip[ping] … into rubbish" the Israeli army, before calling for "Zionists" everywhere to be made personae non gratae. "If you want the best future for your people, never allow them entry into your country let alone any opportunities in your country," Meron wrote.
"Zionist," of course, is so often a euphemism for Jews, and in Meron's case her feelings about a people who dangerously undermine their host countries are clearly directed at Jews in general. Under an article posted on her site My Catbird Seat, she posted the text of a supposed interview with Harold Wallace Rosenthal in which the former Senatorial aide is quoted admitting to the Jewish conspiracy that secretly runs the United States. In her comment, Meron highlights what appears to be her favorite part: "We Jews have put issue upon issue to the American people. Then we promote both sides of the issue as confusion reigns. With their eye's fixed on the issues, they fail to see who is behind every scene. We Jews toy with the American public as a cat toys with a mouse." (Unsurprisingly, there is no credible source for the interview, which can be found sprinkled throughout the anti-Semitic dregs of the Internet.)
So Veterans News Now is indeed run by an anti-Semite. And it indeed publishes Holocaust denial. And most disturbingly, it is indeed promoted on Yahoo's news feed. CAMERA has contacted Yahoo News to ask why it legitimizes and propagates a hate site, but did not immediately receive a response. Yahoo owes its users an explanation about why it legitimizes and promotes an anti-Semitic hate site. But unfortunately, the company has been slow in the past to respond to anti-Semitism on its site. A question posted several years ago on Yahoo Answers asked why "Judaism glorifies genocide"; the answer explained that "Orthodox jews consider mass-murder to be very honorable, and any reader of the Old Testament is acutely aware of the creepy preoccupation with killing babies…." The page was repeatedly flagged as a violation of Yahoo's guidelines, and emails were sent to the company asking why the hateful page was not removed. These were ignored, and the hateful "question" remained online for weeks, until CAMERA finally went public with the issue.
Now Yahoo has another chance to show it takes concerns about hate speech on its site seriously. Will it forthrightly respond to those concerns and assure readers that Veterans News Now and other extremist sites will no longer be featured on its news feed? Or will it continue to mainstream anti-Jewish bigotry and 9/11 conspiracy theories?
A man has admitted posting offensive comments on Facebook about an Edinburgh boy beaten to death by his mother.
4/12/2014- Shaun Moth posted abuse about Mikaeel Kular on the social networking site the day before the three-year-old boy's body was found in a wood in Kirkcaldy. The 45-year-old, who lives in Aberdeenshire, posted the comments on an anti-racism page as a police search was underway for the boy in January. Rosdeep Adekoya, 34, was jailed for 11 years in August for her son's death. Adekoya had originally been charged with murder, but admitted the reduced charge of culpable homicide. Moth, from Whitehills, pleaded guilty to conducting himself in a disorderly manner, posting grossly offensive comments on Facebook and breaching the peace, aggravated by religious prejudice, when he appeared at Aberdeen Sheriff Court on Thursday. He is due to be sentenced at a later date.
Fiscal depute David Bernard said: "A post was put on the page for a group entitled Scotland United Against the racist SDL. During the evening of 16 January, one of the administrators for that Facebook page noticed a comment about the missing child which was made at 17:45 that day by a user named Shaun Moth." Other racist comments were also posted by Moth, one of them ending: "My work is done here. wpww 14/88." Mr Bernard said the acronym wpww was understood to stand for White Power World Wide and 14/88 was a neo-Nazi term for "Heil Hitler". Describing himself as a National Socialist, Moth told officers that he often went on to the Facebook page for debate and classed it as a left-wing Marxist page for "communist types". Moth was asked if he was racist and said he was an intelligent man and "not a mindless yob". Moth was remanded in custody.
© BBC News
People who use social media to "peddle hate or abuse" will not escape justice by hiding behind their computers or phones, Scotland's top law chief has warned amid new guidelines on whether messages posted online constitute a crime.
4/12/2014- The Crown Office and Procurator Fiscal Service (COPFS) said it wants to reassure the public that it takes such offences as seriously as crimes committed in person. It has set out four categories of behaviour, including "grossly offensive, indecent or obscene" comments. However, it said there is no danger to freedom of speech, and stressed that people will not be prosecuted for satirical comments, offensive humour or provocative statements. Lord Advocate Frank Mulholland QC said: "The rule of thumb is simple - if it would be illegal to say it on the street, it is illegal to say it online. "Those who use the internet to peddle hate or abuse, to harass, to blackmail, or any other number of crimes, need to know that they cannot evade justice simply by hiding behind their computers or mobile phones. "I hope this serves as a wake-up call to them. "As prosecutors we will continue to do all in our power to bring those who commit these crimes to justice, and I would encourage anyone who thinks they have been victim of such a crime to report it to the police."
The Crown Office said it has chosen to publish its guidance to ensure there is absolute clarity both in terms of its approach and the difference between criminal and non-criminal communications. It said it will take a "robust approach" to communications posted via social media if they are criminal in content, in the same way as such communications would be handled if they were said or published in the non-virtual world. The four categories of communication which prosecutors will consider are those which:
- Specifically target an individual or group of individuals, in particular communications which are considered to be hate crime, domestic abuse, or stalking;
- May constitute credible threats of violence to the person, damage to property or incite public disorder;
- May amount to a breach of a court order or contravene legislation making it a criminal offence to release or publish information relating to proceedings;
- Do not fall into categories 1, 2 or 3 but are nonetheless considered to be grossly offensive, indecent or obscene, or involve the communication of false information about an individual or group of individuals which results in adverse consequences for that individual or group.
In an interview on BBC Radio Scotland, the Lord Advocate was asked how "grossly offensive" could be defined when it could be seen as relative. He replied: "The guidance sets out that it would not include, for example, humour, satirical comment, which is part of the democratic debate, so there's guidance to prosecutors as to what's not included. "It doesn't include offensive comment because we recognise that, in a democratic society, with use of social media you can have offensive comment which wouldn't be criminal but it's really the category above the high bar grossly offensive which has a significant effect on the recipient of the comment. "We've all seen on the media reports of what you described, internet trolls, where this kind of comment, grossly offensive comment, is sent out to directly wound and has quite a significant effect." He added: "There's very detailed guidance of all the factors that prosecutors will take into account when they assess whether or not to raise criminal proceedings in relation to grossly offensive comments posted on social media."
© The Herald Scotland
2/12/2014- The following White Paper addresses the role of the UK government, social media companies and Internet service providers (ISPs) in monitoring and policing the Internet for extremist and/or terrorism-related content. This paper seeks to analyse the effectiveness of the UK government’s Prevent strategy and provide recommendations for its improvement in line with the current nature of the threat. Currently, the two biggest challenges for UK counter-terrorism are the radicalisation and recruitment of individuals by the jihadist organisation Islamic State (IS) and the use of the Internet by IS and other extremist organisations to spread unwanted and potentially dangerous ideologies and narratives internationally. This subject is of great importance, especially as the government debates how best to tackle extremism and adequately implement counter-extremism measures both in real terms and online. Sections 2 and 3 discuss the framework of the government’s Prevent strategy, while sections 4 through 9 detail the challenges that extremist and terrorism-related content online pose. Section 10 addresses the role of Prevent in countering online extremism in the UK.
The Role of Prevent in Countering Online Extremism (full report - pdf)
© Quilliam Foundation
What do you do when you see hate speech on your Facebook or Twitter feed? Do you calm yourself down, swallow the bitter pill and move on, or do you comment bravely and report the image/page/user/group?
3/12/2014- For Israeli student Shay Amiran-Pugachov, fighting anti-Semitism and hate speech online has become a full-time job. Amiran-Pugachov is the Program Coordinator of the national program ISCA - “Israeli Students Combating Antisemitism.” Each year, 30-40 top students from Israel’s various higher-education institutions are selected to take part in this special program, where they monitor anti-Semitic behavior and discourse online, mainly on social networks like Facebook, Twitter and YouTube. Every day, they take time out from studying in order to make our world a little better. Just this year alone, this group of students took down more than 5,000 anti-Semitic Facebook pages, users and groups and helped expose and bring to the public’s eye the French comedian who invented the reverse Nazi salute (the Quenelle), who has been publicly condemned and had his show cancelled. Days before the program kicks off its fourth year, Amiran-Pugachov sat down with “Israelife” to talk about the world of online anti-Semitism, and the very special and influential program, which is dedicated to making a change in Facebook and Twitter’s Community Standards as well as in people’s very own personal standards.
Why gather students to fight anti-Semitism? Where did that idea come from?
"The idea to gather students to fight anti-Semitism came from the need to leverage the students’ academic experiences and various fields of education and talent in countering anti-Semitism. In our program, there are students of Computer Studies, Languages, History etcetera, who can contribute to our battle against hate speech and anti-Semitism. By becoming acquainted with the program, they become more educated about the various ways and forms in which anti-Semitism appears online, thus being able to detect and react."
When did you join the program and what drove you to do that?
"I find anti-Semitism very disturbing. From my point of view, anti-Semitism is ignorance. It is blind hatred, regardless of what actually happens in reality. I’m talking about people who follow ancient blood libels and honestly believe Jews drink Palestinians’ blood, control the world (from politics to the media) with their money, and other stories you wouldn’t believe people actually stick to. As a Political Science and Communications Studies student, I find the new form of anti-Semitism very interesting: Since the end of WWII, anti-Semitic behavior and discourse were considered out of line, and taboo. Haters attempted to hide their personal opinions and hide in the shadows. Now, things are different. The fast-paced growth and development of social media helps haters spread the anti-Semitic discourse and reach a younger audience, who later use this false information in school assignments. But it’s not only the young folks. The unaware public is easily affected by the information online. We must always be present on social media to provide them with the correct information."
Do you think anti-Semitism is as big of a threat to Jews now as it was 80 years ago?
"First, let me just say that although anti-Semitism targets Jews, it does not affect the Jewish people only. Anti-Semitism is also an indicator of xenophobia and minority persecution: Whenever anti-Semitism is on the rise, we can see others who are being affected by it, such as Gypsies, Armenians, and LGBT people. We witnessed it recently with Jobbik and Golden Dawn - political anti-Semitic parties in Hungary and Greece that persecuted various minorities, not only Jews. In recent years, as the world of social media continued to become a meaningful part of our lives, allowing people to express themselves while hiding behind a keyboard, anti-Semitic discourse is becoming more and more popular, especially amongst younger audiences. Before the age of the internet and social media, it could have been prevented more easily, as anti-Semitic books were banned from stores, for example. Nowadays, anti-Semitism is becoming more and more common, and you don’t even need to make an effort in order to find it on Twitter, Facebook and YouTube."
Lately, complaints have been heard on Facebook's "permissive" policy when it comes to Antisemitic content. Do you agree?
"This year, we have witnessed some improvement, but unfortunately, it’s far from being enough. Many of our reports to Facebook of Community Standards violations concern content which is bluntly anti-Semitic, but Facebook still refuses to remove it. I believe it’s because they only examine parts of the content in question and don’t see the full picture, literally. For instance, you can post a photo of sweet little cats - nothing anti-Semitic there - but add a description saying “those cats are against Zionist rats.” I also believe there are some words in Facebook’s algorithm that assist them with flagging inappropriate or hateful content. Sadly, this is not enough. Therefore, Facebook must hire more people of various nationalities who speak various languages to truly enforce those Community Standards."
What can we do to help fight anti- Semitism online?
"First, follow ISCA’s channels on Facebook and Twitter. We flag hateful content regularly and ask our followers to help remove it. Second, do not be afraid to report inappropriate or hateful content, by using the “report” option on YouTube videos, Tweets and Facebook posts/pages/groups/users. By reporting, you flag the content as harmful or hurtful and tell YouTube/Facebook/Twitter that you don’t like it. The more people report, the clearer the message will be, and the greater the chance of removal. Third, and most important - be yourself. If you see injustice - correct it, and don’t be afraid to deal with anti-Semitism online. The worst that can happen is you being blocked or ignored. It is far less traumatic than encountering a neo-Nazi group in the real world, and can help prevent that from happening. Know that we are here for you, and you can ask us for help and let us know if you encounter anti-Semitism online."
How would you respond to the claim that Israelis jump on every criticism of Israel's policy and scream "anti- Semitism!"?
"There are people with legitimate criticism of Israel’s policy. While I mostly disagree, I can accept it. Not all criticism is anti-Semitism. The problem is that anti-Semites try to disguise their true selves by hiding behind supposedly legitimate criticism. If you dig deeper into their claims, you’ll find that their criticism is anything but legitimate. When someone opposes human rights violations and decides to boycott several countries, including Israel - I can disagree, I can explain why he is wrong, but I can’t call him an anti-Semite. When Israel is singled out as the only country being targeted when someone makes a “human rights” type of claim, and the Holocaust is claimed to be only second to the so-called “Palestinian Holocaust” - it is anti-Semitism. When the media coverage disproportionately focuses on Israel’s actions in Gaza while ignoring places like Syria, Iraq or Qatar, I can only assume that there are other considerations involved, other than pure journalism.
Let’s use an example: Americans are probably familiar with the discussion revolving around US aid and financial/military support to other countries. Some are for it, some are against it, claiming the taxpayer money needs to be spent on interior matters and saving the local economy. There’s another group of people, though. They claim that the US should stop aiding Israel, with the same explanation about using the money to help the local community. Since I am not an American, I can’t really make a judgment call on that, but as an Israeli, I can’t help but wonder what stands behind the second type of claim. It is one thing to oppose the idea of foreign aid, and another to exclude Israel alone. The US supports other countries as well, including Egypt, Saudi Arabia and Qatar, so why haven’t those countries been mentioned by this group of people, who claim to oppose foreign military aid altogether? Israel is a newborn country, with a lot to learn and many places to develop. It isn’t perfect and there are plenty of things that need to be fixed. Pointing out problems is okay, but criticism must be balanced and fair in order to be legitimate criticism intended to improve and not hurt."
Is there an online experience you remember in particular from your time in the program?
"I remember encountering a Facebook user who accused Israel of leading a global scam in order to create a new world order. I joked and replied: “Yeah, right. Israel was established by an alliance that seeks to control the world.” To that he replied: “Yes. All Israeli inventions are part of it, and they want to use them to make experiments on the people in Gaza.” Another fellow I remember kept posting on Twitter quotes by Iranian Presidents about Israel’s “war crimes” and human rights violations. I asked him to tell me how the Iranian government follows freedom of speech, freedom of press and freedom of sexual orientation. I reminded him that they execute homosexuals and opposition leaders. These examples express the new form of anti-Semitism, which can be harder to find than the classical form we all know from decades ago. It pretends to be criticism."
Why is it so important to fight anti- Semitism online as well, and not settle with battles "off line?"
"The battle against anti-Semitism should be a combination of the “offline” with the “online.” We have to remember that behind every online user there is an “offline” extremist who believes that Jews should be extinct. In today’s world, people’s “likes” and “shares” are some sort of social acknowledgement of their thoughts and beliefs. We cannot allow anti-Semitic discourse to gain popularity through growing numbers of online social acknowledgements. Moreover, words often grow into actions. We all witness it on almost a daily basis with the bullying phenomenon. Just as we won’t tolerate bullying, we must make a clear statement not to allow anti-Semitism either. We must help prevent it from spreading online, and thus help prevent attacks against Jews in the “real world,” similar to the 2012 murders outside a Jewish school in Toulouse. We can only hope that the proper measures are being taken by the authorities against anti-Semitism in the “real world,” but online we can actually take action. We have to make sure that the various social media channels constantly enforce their Community Standards, protecting minorities and private people from persecution."
Where can we find anti-Semitism online?
"You can find anti-Semitism online in various forms. The most common is the comments (“talkbacks”) on articles and op-eds regarding Israel on news websites: mostly they appear on the website itself, below the article, but some people make their comments by sharing the story on social networks. There are also Facebook pages and groups with a clear anti-Semitic message. Others are dedicated to anti-Israel propaganda with hidden anti-Semitic motives. They do that by presenting quotes out of context, inventing non-existent quotes by Israeli and world leaders, sharing photos of bleeding children taken in Syria and presenting them as the actions of the Israeli army in Gaza, et cetera. We can also find anti-Semitism on “outcast” websites, Youtube channels or Facebook pages, run by extremists who use their hatred as an engine to gain more popularity. There are also politicians and public figures like British MP George Galloway, filmmaker Alain Soral, and French comedian Dieudonné M’bala M’bala, who find no shame in denying the Holocaust and spreading hatred.
But the most dangerous form of anti-Semitism online, in my opinion, is on “Yahoo! Answers,” which is used mostly by youth for school assignments instead of the encyclopedia. Some pupils ask an innocent question, like “What caused WWII?”, and haters use that platform for rewriting history, posting answers like “Because the Jews wanted to take over the world and make all countries fight against each other.” In this specific case, I stepped in, wrote to the person who asked the question and gave him a proper answer, but there are countless twisted answers there, which we try to replace. I recently heard of a student in an American college who got an A+ on an essay denying the Holocaust. Everything about the essay’s form was right: the right font, the right references and the right structure, but the content was far from accurate. Therefore, we must always have a presence on all online platforms that may contain ignorance and inaccuracies and shed some light there with the truth."
© The Jewish Journal
In the first case of its kind in Romania, the country's highest court ruled that an ‘offensive’ message that a man wrote on his Facebook page was not private.
4/12/2014- The High Court of Cassation and Justice ruled that Mircea Munteanu, a clerk in the Transylvanian city of Tirgu-Mures, who wrote a message on his Facebook page quoting the Nazi slogan ‘Arbeit Macht Frei’, must pay a fine imposed for publishing offensive material. Munteanu wrote the note two years ago in a criticism of anti-government protesters. He was quoted by a local newspaper, and soon afterwards, Romania's anti-discrimination council, CNCD, fined him 1,000 lei (some 225 euro) for “nationalist propaganda that offends human dignity and is an offence against a group of people”. Munteanu decided to dispute the decision in the supreme court, saying his message was just a personal note on his Facebook page.
But the High Court of Cassation and Justice, the country’s highest court, decided that Facebook pages are public space, so Munteanu has to pay the fine. “The Facebook social network can’t be equated with a mailbox, in terms of controlling the posted message. A person’s personal profile on Facebook, although accessible only to a small number of people, is still public space, as any of the ‘friends’ can distribute the information posted by the page owner,” the court said in its decision. The decision, the first of its kind in the country, could set a precedent for future cases, although in the Romanian legal system, which is based on the French system, each case is judged separately, and not based on precedent. Facebook is extremely popular in Romania, with around 7.5 million accounts currently active.
© Balkan Insight
2/12/2014- With the support of the city-state of Berlin, a German association has launched a new mobile telephone application which aims to update users on nazi activity in the nation's capital and how to combat it. "Every app user will receive, if they wish, automatic notifications about all neonazi group actions in Berlin," Bianca Klose, the director for the Berlin Association for Democratic Culture (VdK), told AFP today. "In that way the user will be able to decide how to combat those extremist movements, be it through participating in counter-demonstrations which are also notified in the app or, for example, putting a flag in their window," she added. The "Against the Nazis" app can be downloaded for free on Android and iPhone mobiles and is available in three languages, German, English and Turkish. While the far-right in Berlin is a minor electoral force, small groups are active in certain neighbourhoods of the German capital, prompting VdK and other organisations to form the "Berlin against Nazis" movement in March 2014 in an effort to stamp out extremist activity.
By Soraya Nadia McDonald
2/12/2014- At first glance, it seems like a clever bit of Internet Darwinism: If you don’t possess the savvy to privatize your obviously racist social-media activity, then your employer receives a call — or 10 or 20 — about your inappropriate online brain-drippings. That’s the mission of the new Tumblr blog “Racists Getting Fired.” It seems like a natural progression in a world that’s already home to the “Yes, You’re Racist” Twitter account. The account publicly shames Twitter users expressing racist sentiments by retweeting them, especially if their tweets are qualified by “I’m not racist, but …”
For example: “Man I ain’t racist but a Mexican family is so annoying.” And: “I’m not a racist but when I saw a black African coon stormtrooper I was taken aback. Stormtroopers arent colored.” Racists Getting Fired doesn’t just publicly shame — it adds consequences by rounding up those willing to call a business to say they don’t want to patronize a place with an employee who says things like “#Ferguson one less n—– on #foodstamps.” The Tumblr quickly took off. Racists Getting Fired gained nearly 40,000 followers in a matter of days, with 15,000 submissions in the first eight hours of the blog’s existence, according to its moderator.
But there was a hitch that revealed a problem with Internet blood-lust: Sometimes the torch-wielding throngs get it wrong. Such was the case with Brianna Rivera, a woman who certainly appeared to have posted racially charged hate speech to her Facebook account. Later, Racists Getting Fired was alerted Rivera was a victim herself; the account was a hoax created by an ex-boyfriend and submitted to the Tumblr with the aim of not just getting Rivera fired from her job at a movie theater, but smeared as well. This led to new submission guidelines: among them, that users only submit authentic public profiles with links to corroborate screen shots, and of posts that are obviously, explicitly racist. The moderator vowed to vet future posts before publishing them.
Because Racists Getting Fired grew so popular so quickly, the moderator, an anonymous, queer-identified woman, found herself staring down threats from 4chan, she said. Early Tuesday morning, she published a request seeking new moderators to take over Racists Getting Fired, or even create new blogs with the same goal if hers was shuttered. After soliciting legal advice in an earlier post, she shared this message:
I began this blog much to publicly, with an excess of personal information bc i could not have forseen the skyrocketting of attention this blog has received. i was not prepared for the legal threats, nor for being hunted by 4chan doxxers and other anti-social justice websites. so it is too late for me. moving forward with the knowledge of my mistakes, i am looking for new moderators to completely take over this blog. for your safety and for the longevity of this movement, you need to be damn well versed in the language of online security, you need to be able to mask your presence and cover your tracks. i will only consider serious inquiries.
Despite this misstep, Racists Getting Fired has already proved to be highly effective. The moderator created a section called “gotten” documenting those fired once their employers were made aware of their workers’ racist online ramblings. It’s full of sullen apologies from the terminated offenders — and victory e-mails like this one from Brown’s Car Stores in Virginia:
Earlier today, Brown’s Car Stores was made aware of racist and other inappropriate posts made by an employee. Brown’s does not condone nor does it tolerate racism, bigotry or any other expression of prejudice or discrimination against anyone of any race, gender or religion. We have taken immediate action and the individual is no longer a part of the Brown’s family.
In recent days, there’s been much discussion of how to get people to keep paying attention to the problems highlighted by Ferguson once protesters are no longer blocking freeways and marching through city streets. And one way to do it has been to force businesses to take a hit. Protesters in Missouri shut down malls and occupied major stores on the biggest shopping day of the year. Most media outlets, including The Washington Post, did not attribute the 11 percent drop in Black Friday sales to the efforts of #BlackoutBlackFriday and #BoycottBlackFriday, however.
There’s an argument that it should be financially untenable to support racism. To that end, Mother Jones recently compiled a list of Fortune 500 companies it said were “funding the political resegregation of America” by donating to the Republican State Leadership Committee, an organization the magazine charges with bankrolling GOP gerrymandering efforts that created majority-minority voting districts. That list, or the thinking behind it, could easily be traced back to the same sentiment that fueled the creation of the now-defunct BuyBlue.org. James Watson, the “father of DNA,” has certainly found that there was a steep price for the racist comments he made in 2007: He’s now selling his Nobel prize as a way of making up for his lost income. That argument probably has something to do with why Racists Getting Fired has been successful: It’s not just that employers are horrified by workers’ use of the n-word or clearly racist stereotypes, it’s that they can’t afford to be painted as racist, either. Though this tactic doesn’t do much to target structural, institutionalized racism, Racists Getting Fired provides clear means for attacking individual racism, and it probably provides a gleeful jolt of dopamine when there are actual results.
So there are questions: What are the goals of Racists Getting Fired, beyond well, getting racists fired? Does it want to teach a lesson? Does it want to ensure there are meaningful consequences to publicly spouting racist venom? Is the idea to eradicate this sort of thinking or simply put it out of view of polite society? One would imagine that a man who loses his job after calling the president a “p—y a– n—–” and threatening to kill undocumented immigrants isn’t going to execute an about face when it comes to his racial politics. In fact, there’s a good chance he digs in his heels and simply learns how to better conceal his prejudices. Is that victory? Whatever the fate of Racists Getting Fired, its moderator has let her thoughts be known. “I will retire, but this will not die,” she wrote. “YOU CAN’T KILL US ALL … WHITE SUPREMACY MUST PAY.”
Soraya Nadia McDonald covers arts, entertainment and culture for the Washington Post with a focus on race and gender issues.
© The Washington Post
Anti-Defamation League poll finds Israelis witness steep increase in ‘anti-Israel expression’ in 2014
2/12/2014- Jewish-Israeli teenagers faced more anti-Semitism and “anti-Israel expression” on the Internet in 2014 than they did last year, according to an Anti-Defamation League poll. The survey, which was announced Tuesday, polled 500 Jewish Israelis aged 15 to 18 in November. It found that 51 percent of the participants reported encountering “attacks” on the Internet because of their nationality, compared to 36% last year. Eighty-three percent of the teens reported seeing anti-Semitism online in some form through “hate symbols, websites, and messages found on social media and in videos and music,” compared to 69% last year. The respondents noted that online anti-Semitism increased significantly during Israel’s war in Gaza this summer. “The more teenagers in Israel are using the Internet to connect with friends and share social updates, the more they are coming into contact with haters and bigots who want to expose them to an anti-Israel or anti-Semitic message,” Abraham Foxman, the ADL’s national director, said in a news release issued by the organization.
The survey also found that the teens encountered more anti-Semitism on social media websites such as Facebook and Twitter. Eighty-four percent reported seeing anti-Semitism in Facebook posts or tweets, compared to 70% last year. Sixty-five percent of the teenagers noted that they took action in response to the posting of anti-Semitic content by contacting website administrators or responding with comments of their own. The poll was conducted in Hebrew by the Israeli polling company Geocartography. It has a margin of error of plus or minus 4.4%.
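As a rough sanity check, the reported margin of error is consistent with the standard formula for a simple random sample of 500 respondents, assuming a 95% confidence level and maximum variance (p = 0.5) — assumptions not stated in the ADL release:

```latex
% Margin of error for a sampled proportion at 95% confidence (z = 1.96)
\mathrm{MOE} = z\,\sqrt{\frac{p\,(1-p)}{n}}
             = 1.96\,\sqrt{\frac{0.5 \times 0.5}{500}}
             \approx 1.96 \times 0.0224
             \approx 0.044 = 4.4\%
```

Any reported proportion further from 50% would have a slightly smaller margin of error, so ±4.4% is the worst case for this sample size.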
© JTA News.
A Facebook page supporting the anti-Israel Boycott, Divestment and Sanctions (BDS) movement on Wednesday uploaded a Photoshopped image of Nazi concentration camp prisoners holding anti-Israel signs.
28/11/2014- The picture, posted by a page named “I Acknowledge Apartheid Exists”, shows skeletal survivors holding up signs that read “Israel Assassins,” “Break the Silence on Gaza,” “Stop the Holocaust in Gaza” and “Stop US Aid to Israel.” A sign in the far back of the image says Gaza is “the world’s biggest concentration camp,” while another poster shows a Palestinian flag along with the words “Free Palestine.” A slogan at the bottom of the offensive image reads, “Whatever happened to ‘never again?’”
The Facebook page, which boasts over 91,000 members, captioned the post “Viva Palestine.” At the time of publication, the picture has been “liked” by 307 users and “shared” on the social media site by 110 users, including the Central NY Committee for Justice in Palestine. Many Facebook users expressed disgust over the image, calling it “inappropriate,” “shameful” and asking for the picture to be taken down. One user said, “I find this really disturbing. It’s not a case of ‘not getting it’. How can exploiting an image of other people’s suffering be an acceptable thing to do? Is that not what we’re supposed to be against??”
Another commenter said the picture is not just distasteful but “outright anti-Semitic, incredibly unpleasant, inappropriate and sullies the name of everyone who is trying to oppose Israel’s actions on Palestine.” Responding to the criticism, the Facebook page claimed the image is intended to teach a lesson. The page’s administrator said, “I am not going to stop posting something because some people do not get it. We have to teach them at some point. If people think we should not post because some people do not get it, we may as well not post anything at all.” The page was created in March 2013. It claims its mission is to “promote the narrative that Palestinians deserve the same rights and liberties that Israeli’s enjoy.”
© The Algemeiner
MPs this week criticised Twitter’s “defensive” response to concerns about rising online anti-Semitism after a meeting with the social media giant, in the wake of vile abuse aimed at Jewish Labour MP Luciana Berger.
27/11/2014- John Mann, chair of the All-Party Parliamentary Group Against Anti-Semitism, joined Hendon MP Matthew Offord and others in raising the issue during talks in Dublin with both Twitter and Facebook on Monday. While Facebook was praised for its approach, the parliamentarians were less impressed with Twitter, which defended its response to online anti-Semitism by saying: “There’s so much out there,” according to those present. “They likened the tweets to hearing an offensive conversation in the street, meaning that it’s soon gone as you pass by,” said Offord of the micro-blogging site’s argument. “Needless to say, we don’t see it like that.”
The parliamentary group said Twitter refused to comment on the details of individual cases, although the issue of Ms Berger was brought up. “Facebook was amenable, open and willing to engage with our concerns,” said Offord. “We did not feel the same about Twitter. They were very defensive and not as proactive.” The criticism comes in the same week that social media giants were forced to defend their response to online threats made by Fusilier Lee Rigby’s killers, just days before he was murdered in Woolwich in May last year. Both Mann and Berger have suffered online abuse in recent weeks, with neo-Nazi group members having been arrested and jailed.
The All-Party group will now press their case for changes with the Ministry of Justice and the Home Office. Among the ideas being discussed is a so-called ‘Internet ASBO,” first proposed by Mann in the House of Commons. Currently, a court order can ban sex offenders from using the internet, and some MPs want this to be extended to those determined to perpetrate race hate. “When someone is banned from one social media site, they just move to another platform, and we need to prevent this,” said Offord. “We’re also pressing for better identification, although this can be difficult because 80 percent of all posts are made from hand-held devices.”
Last year, Twitter was eventually forced to give French authorities data that identified users responsible for a spate of vile anti-Semitic tweets, but only after a long-running court battle launched by Jewish students.
© Jewish News UK
Facebook was the firm that hosted a conversation by one of Fusilier Lee Rigby's killers five months ahead of the attack, the BBC has learned.
25/11/2014- Michael Adebowale said he wanted to kill a soldier and discussed his plans in "the most graphic and emotive manner", according to the UK's Intelligence and Security Committee. The ISC said the social network did not appear to believe it had an obligation to identify such exchanges. Facebook said it does tackle extremism. "Like everyone else, we were horrified by the vicious murder of Fusilier Lee Rigby," said a spokeswoman. "We don't comment on individual cases but Facebook's policies are clear, we do not allow terrorist content on the site and take steps to prevent people from using our service for these purposes." The ISC's report said, however, that the company should do more. "Had MI5 had access to this exchange, their investigation into Adebowale would have become a top priority," it stated. "It is difficult to speculate on the outcome but there is a significant possibility that MI5 would then have been able to prevent the attack."
The ISC does not identify Facebook as the host service in the edition of its report released to the public, but the BBC understands it does do so in the complete version given to the Prime Minister. In it, the committee states that the company's failure to notify the authorities about such conversations risked making it a "safe haven for terrorists to communicate within". It highlights that the UK's security agencies say they face "considerable difficulty" accessing content from Facebook and five other US tech firms: Apple, Google, Microsoft, Twitter and Yahoo. The companies in question have said in the past that they have a duty to protect their members' privacy. "If the government believes that it needs additional powers to be able to access communication data it must be clear about exactly what those powers are and consult widely on them before putting proposals before Parliament," said Antony Walker, deputy chief executive at TechUK, a lobbying body that works with Facebook.
The ISC's report identifies a "substantial" online exchange during December 2012 between Adebowale and a foreign-based extremist - referred to as Foxtrot - who had links to the Yemen-based terror group AQAP, but was not known to UK agencies at the time. Foxtrot is reported to have suggested several possible ways of killing a soldier, including the use of a knife. After the murder of Lee Rigby an unidentified third-party provided a transcript of the conversation to GCHQ. The information was also said to have revealed that Facebook had disabled seven of Adebowale's accounts ahead of the killing, five of which had been flagged for links with terrorism. This had been the result of an automated process, according to GCHQ, and no person at the company ever manually reviewed the contents of the accounts or passed on the material for the authorities to check.
GCHQ notes that the account that contained the phrase "Let's kill a soldier" was not one of those closed by Facebook's software. The agency added that the social network had not provided a detailed explanation of how its safety system worked. ISC said that among the information Facebook did disclose was the fact it enabled users to report "offensive or threatening content" and that it prioritised the "most serious reports". However, the committee reflected that such checks were unlikely to help uncover communications between terrorists. It acknowledged that in some other cases, Facebook had indeed passed on information to the authorities about accounts closed because of links to terrorism. However, it said the failure to do so after deactivating Adebowale's account had been a missed opportunity to prevent Lee Rigby's death.
"Companies should accept they have a responsibility to notify the relevant authorities when an automatic trigger indicating terrorism is activated and allow the authorities, whether US or UK, to take the next step," its report concluded. "We further note that several of the companies attributed the lack of monitoring to the need to protect their users' privacy. However, where there is a possibility that a terrorist atrocity is being planned, that argument should not be allowed to prevail." But one digital rights campaign group has taken issue with these recommendations. "The government should not use the appalling murder of Fusilier Rigby as an excuse to justify the further surveillance and monitoring of the entire UK population," said Jim Killock, executive director of the Open Rights Group.
"The committee is particularly misleading when it implies that US companies do not co-operate, and it is quite extraordinary to demand that companies pro-actively monitor email content for suspicious material. "Internet companies cannot and must not become an arm of the surveillance state."
© BBC News
Police have responded to the growing threat of cybercrime by setting up a new specialist unit.
22/11/2014- Cybercrime can include a whole range of illicit online activities from hacking, fraud and scamming to stalking, hate crime and even human trafficking. Hertfordshire Constabulary’s Cyber and Financial Investigation Unit will be focussing on serious and complex cyber-enabled crime, supporting colleagues dealing with cyber-related investigations in other units and investigating and preventing fraud. The new team is launching a section on the Herts Police website dedicated to dealing with cybercrime, at: www.herts.police.uk/advice/cybercrime.aspx, which contains information about current issues, emerging threats and advice on how to be safe online.
© Borehamwood & Elstree Times
An OSCE-supported conference on countering the use of the Internet for terrorist purposes took place today in Astana, Kazakhstan.
25/11/2014- The event was co-organized by the OSCE Centre in Astana, the Committee on Religious Affairs of Kazakhstan’s Culture and Sport Ministry and the Institute for Strategic Studies under the President. It brought together some 100 government officials, parliamentarians, information technology and information security specialists, academics, theologians and journalists, including international experts and scholars from Austria, Azerbaijan, Germany, Kazakhstan, Italy, Moldova, the Russian Federation, Turkey, the UAE, the UK, the US and Uzbekistan, as well as representatives from United Nations agencies and the CIS Antiterrorist Centre for Central Asia. Advisers from the Office of the OSCE Representative on Freedom of the Media and the OSCE Transnational Threat Department/Action against Terrorism Unit shared OSCE best practices.
The conference provided a platform to discuss issues related to terrorist organizations receiving support via Internet technology and assessed the merits of developing practical guidelines on preventing the use of the Internet for terrorist purposes, setting a legal framework and enhancing international co-operation to counter the dissemination of violent extremist ideology and illegal content on the Internet and social networks. “The success in our fight against terrorism mainly depends on the effectiveness of national policies, practices and flexibility in reacting to emerging challenges. By preventing cybercrime in its different manifestations we also avert serious terrorist actions and ensure security for the people and the nation as a whole”, said Ambassador Natalia Zarudna, Head of the OSCE Centre in Astana. “Since 2005, the OSCE has actively and consistently promoted and facilitated the elaboration and implementation of targeted measures in order to thwart the use of the Internet for terrorist purposes with a focus on respecting human rights and fundamental freedoms.”
Baglan Asaubayuly Mailybayev, Deputy Head of the Presidential Administration of the Republic of Kazakhstan, said: “By now we have learned that all countries need to co-operate closely to effectively counter terrorism. New efforts are necessary for conceptual and practical work at the international level in combating terrorism. Only by joining efforts, exchanging ideas, opinions and experience can we create a real barrier to propaganda of the cult of violence, terrorism and extremism.” The event is part of the OSCE’s comprehensive contribution to global efforts against terrorism.
© The OSCE
By Jeremy Malcolm, Senior Global Policy Analyst
25/11/2014- In politics, as with Internet memes, ideas don't spread because they are good—they spread because they are good at spreading. One of the most virulent ideas in Internet regulation in recent years has been the idea that if a social problem manifests on the Web, the best thing that you can do to address that problem is to censor the Web. It's an attractive idea because if you don't think too hard, it appears to be a political no-brainer. It allows governments to avoid addressing the underlying social problem—a long and costly process—and instead simply pass the buck to Internet providers, who can quickly make whatever content has raised hackles “go away.” Problem solved! Except, of course, that it isn't. Amongst the difficult social problems that Web censorship is often expected to solve are terrorism, child abuse and copyright and trade mark infringement. In recent weeks some further cases of this tactic being vainly employed against such problems have emerged from the United Kingdom, France and Australia.
UK Court Orders ISPs to Block Websites for Trade Mark Infringement
In a victory for luxury brands and a loss for Internet users, the British High Court last month ordered five of the country's largest ISPs to block websites selling counterfeit goods. Whilst alarming enough, this was merely a test case, leading the way for a reported 290,000 websites to be potentially targeted in future legal proceedings. Do we imagine for a moment that, out of a quarter-million websites, none of them are false positives that actually sell non-infringing products? (If websites blocked for copyright infringement or pornography are any example, we know the answer.) Do we consider it a wise investment to tie up the justice system in blocking websites that could very easily be moved under a different domain within minutes? The reason this ruling concerns us is not that we support counterfeiting of manufactured goods. It concerns us because it further normalizes the band-aid solution of content blocking, and deemphasises more permanent and effective solutions that would target those who actually produce the counterfeit or illegal products being promoted on the Web.
Britain and France Call on ISPs to Censor Extremist Content
Not content with enlisting major British ISPs as copyright and trade mark police, the government has also recently called upon them to block extremist content on the Web, and to provide a button that users can use to report supposed extremist material. Usual suspects Google, Facebook and Twitter have also been roped in by the government to carry out blocking of their own. Yet to date no details have been released about how these extrajudicial blocking procedures would work, or under what safeguards of transparency and accountability, if any, they would operate. This fixation on solving terrorism by blocking websites is not limited to the United Kingdom. Across the channel in France, a new “anti-terrorism” law that EFF reported on earlier was finally passed this month. The law allows websites to be blocked if they “condone terrorism.” “Terrorism” is as slippery a concept in France as anywhere else. Indeed France's broad definition of a terrorist act has drawn criticism from Human Rights Watch for its legal imprecision.
Australian Plans to Block Copyright Infringing Sites
Finally—though, sadly, probably not—reports last week suggest that Australia will be next to follow the example of the UK and Spain in blocking websites that host or link to allegedly copyright-infringing material, following on from a July discussion paper that mooted this as a possible measure to combat copyright infringement. How did this become the new normal? When did politicians around the world lose the will to tackle social problems head-on, and instead decide to sweep them under the rug by blocking evidence of them from the Web? It certainly isn't due to any evidence that these policies actually work. Anyone who wants to access blocked content can trivially do so, using software like Tor.
Rather, it seems to be that it's politically better for governments to be seen as doing something to address such problems, no matter how token and ineffectual, than to do nothing—and website blocking is the easiest “something” they can do. But not only is blocking not effective, it is actively harmful—both at its point of application due to the risk of over-blocking, but also for the Internet as a whole, in the legitimization that it offers to repressive regimes to censor and control content online. Like an overused Internet meme that deserves to fade away, so too it is time that courts and regulators moved on from website blocking as a cure for society's ills. If we wish to reduce political extremism, cut off the production of counterfeits, or prevent children from being abused, then we should be addressing those problems directly—rather than by merely covering up the evidence and pretending they have gone away.
© Electronic Frontier Foundation
26/11/2014- When the Supreme Court comes face to face Monday with a free speech case involving threats made on Facebook, Paulette Sullivan Moore and Francis Schmidt will have decidedly different reactions. Sullivan hears regularly from women who are harassed and threatened online. A licensed professional had to change her name and take a lower-paying job. An Arizona woman moved nine times in 18 months and changed jobs four times. An Illinois woman confronted Facebook images of herself, her house and children with the caption, "You think you can hide from me?" "What we know about abusers is that when they can't get physical access to the person they were abusing, they start using other methods," says Sullivan, vice president of public policy for the National Network to End Domestic Violence.
Schmidt was suspended from his job as an art and animation professor at a New Jersey college after posting on Google+ a photo of his 7-year-old daughter with a T-shirt that read, "I will take what is mine with fire and blood." The phrase, well-known to Game of Thrones fans, was interpreted by school officials as a threatened school shooting. "Our school is the laughingstock of academia because of this," Schmidt says. "If you look up my name on the Internet, I think the third hit is something about school shootings." Those are the two sides of the debate in the case of Anthony Elonis, whose threats were more intense than Schmidt's alleged threats, though perhaps no more intentional.
Upset at the breakup of his marriage, the 27-year-old Pennsylvania man repeatedly posted threatening remarks not only about his wife, but also about his former workplace, a kindergarten class, local police and FBI agents. Eventually, he was convicted on four federal counts of transmitting threats across state lines and sentenced to 44 months in prison. The question for the justices: Is it enough that Elonis' targets felt threatened, as two lower federal courts ruled? Or must a jury decide that he intended to instill fear or inflict physical harm? Elonis' attorneys say his dark posts — such as "I'm not gonna rest until your body is a mess" and "Hell hath no fury like a crazy man in a kindergarten class" — were a form of therapy, an imitation of rap lyrics and an expression of his First Amendment rights. On the Internet, they say, context is lost and words can be misinterpreted.
The federal government says the standard used by lower courts — that Elonis' words on Facebook could be viewed as threats by a reasonable person reading them — is sufficient, and his intent does not have to be proved. "Juries are fully capable of distinguishing between metaphorical expression of strong emotions and statements that have the clear sinister meaning of a threat," its brief says. Rap music has thrived under the "reasonable person" standard, it notes, without ensnaring popular rappers such as Eminem.
'It's Definitely Terrifying'
While context can be lost on the Internet, the government contends that what's important in Elonis' case is a different kind of context — what was going on in his life. His wife left with their two children. Despondent, Elonis' work suffered, and he lost his job at an Allentown, Pa., amusement park. Using a Facebook pen name, he lashed out at the employer, the ex-wife and many others — but with occasional references to his free speech rights. "It's illegal for me to say I want to kill my wife," he said in a typical post. "I'm not actually saying it. I'm just letting you know that it's illegal for me to say that." The same post included this addendum: "Art is about pushing limits. I'm willing to go to jail for my constitutional rights. Are you?"
Those in the business of helping victims of domestic violence and hate crimes aren't swayed by the arguments about artistic expression or free speech. Electronic communication provides ever more ways to threaten victims, they say, while other technological advances enable stalkers to track their targets' movements. "With new media communications, the message instantly finds its target, regardless of time, distance, or location," says a brief submitted by the Anti-Defamation League. "And with social media, such as Facebook, an individual can threaten a target privately, or in full view of his or her peers. In these ways, the Internet has lowered the barriers to issuing a true threat." In a survey of 759 victims' service agencies, the National Network to End Domestic Violence found that nearly 90% of them had cases of threats delivered via technology. Text messages were the most prevalent form, followed by social media and e-mail. Women between the ages of 18 and 24 were the most frequent targets.
"These threats are not artistic expression. They are not performance art or fantasy violence," says the brief submitted by the National Network to End Domestic Violence. "They are a key part of the in-person abuse to which the victims have been subjected, sometimes for years, and for which they have tried desperately to escape." Carissa Daniels, who goes by a pseudonym, can attest to that. The 58-year-old Washington state resident spent eight years in an abusive relationship and the next 16 "playing cat and mouse and hiding" because her ex-husband hasn't stopped harassing her online. "What happens on social media needs to be seriously looked at," she says. "It's a lot more psychologically damaging, and it's definitely terrifying."
Another victim, Tammy M., was married with four children when it was discovered that her husband had been secretly taking voyeuristic photos of the family and others. They split up, and after being turned in to police and charged with a misdemeanor, he set up a fake Facebook page using her name and pretended to be soliciting sex from strangers. "I've had them show up at the door. It was really scary," she says of her would-be suitors. "And I'm blind on top of it. It's hard to fight something that you can't see."
The flip side of that, others say, can be innocent people being penalized — perhaps even winding up in prison. In Texas, 19-year-old Justin Carter was thrown in jail for comments he made on Facebook while arguing with friends about an online video game. "I think I'ma shoot up a kindergarten and watch the blood of the innocent rain down," he wrote, later adding, "and eat the beating heart of one of them." He was jailed for several months on $500,000 bond and is awaiting trial. "Law enforcement is completely out of touch with the way our citizens are communicating with each other," Carter's attorney, Don Flanary, says. "They are operating based on fear and not on common sense."
In Kentucky, James Evans, 31, was arrested for posting lyrics to a song by the band Exodus about the 2007 Virginia Tech shootings that resulted in 33 deaths. Evans was charged with a felony that carries a five-year mandatory-minimum sentence. He spent eight days in jail before the charges were dropped. Even middle-aged college professors like Schmidt can run into technological trouble. For posting the Game of Thrones photo of his daughter, Schmidt was banished from campus, told to see a psychiatrist and forced to promise he would not wear clothes with "questionable statements." A brief submitted to the Supreme Court by the Student Press Law Center and other groups warns that under the standards used in Elonis' case, online speakers could face "life-ruining consequences." The result, they say, would "chill constitutionally protected speech."
© USA Today
By James Bright
25/11/2014- Controversial cases have always yielded controversial verdicts. In recent years the highly publicized trials of Casey Anthony, George Zimmerman and now Darren Wilson have unleashed a bevy of "legal scholars" on the world of social media. We as a society have developed a narcissism that coincides with tweeting and status updating. We love to pretend that we actually know what we are talking about. I'm guilty of it too - I mean I do use this space every week to prattle on and on about topics I deem of interest, after all. Beneath this narcissism there are other truths that bubble to the surface in the wake of criminal proceedings. What I learned last night is that we are not nearly as evolved racially as we like to think we are. A very obvious divide still exists. Tweets using racially insensitive vernacular about both Caucasian and African Americans filled Twitter.
There's no denying such racism has become taboo in the world, but apparently only when it's offline. Online, many Americans regress into a society of fear-filled bigots ready to point the finger of blame at anyone of a different color. The media makes exhaustive efforts to be politically correct in an attempt to showcase our evolved sensibilities. All this does is sweep reality under the rug. Cyberspace has shown who many people truly are. These people may not use such vile terms in public, but social media has created an unrealistic sense of security for many. There are tweeters, bloggers and posters who feel impermeable on the web, and they use this avenue to vent and create animosity amongst people who feel equally impermeable.
© Chickasha News
24/11/2014- An anti-gay hunting game which made it onto the Google Play store has been removed, but not before it was downloaded tens of thousands of times. Called 'Ass Hunter', the sick game asks players to "kill gays as much as you can or escape between them to the next level". The game was noticed by a reader of Gay Star News who then spoke to the paper after complaining to Google. According to the Mirror the game did receive a wave of negative reviews before being removed. Chad Hollinghead said: "This is sickening. I have zero tolerance for hate. We the people should always promote tolerance, love and understanding. "Vicious games, exterminating of minorities should be banned." The fact the app actually appeared on Google's app store has led to concerns that the company may need to reassess the way it processes app submissions, possibly leading to stricter checks. This isn't the first time an Android game has hit the headlines over its content, with the 'Bomb Gaza' app causing similar outrage at being allowed through the net onto Google's app store.
© The Huffington Post - UK
By Imran Awan, Senior Lecturer and Deputy Director of the Centre for Applied Criminology at Birmingham City University
21/11/2014- In late 2013 I was invited to present evidence, as part of my submission regarding online anti-Muslim hate, at the House of Commons. I attempted to show how hate groups on the internet were using this space to intimidate, cause fear and make direct threats against Muslim communities – particularly after the murder of Drummer Lee Rigby in Woolwich last year. The majority of incidents of anti-Muslim hate crime (74%) reported to the organisation Tell MAMA (Measuring Anti-Muslim Attacks) are online. In London alone, hate crimes against Muslims rose by 65% over the past 12 months, according to the Metropolitan Police, and anti-Islam hate crimes have also increased from 344 to 570 in the past year. Before the Woolwich incident there was an average of 28 anti-Muslim hate crimes per month (in April 2013, there were 22 anti-Muslim hate crimes in London alone) but in May, when Rigby was murdered, that number soared to 109. Between May 2013 and February 2014, there were 734 reported cases of anti-Islamic abuse – and of these, 599 were incidents of online abuse and threats, while the others were "offline" attacks such as violence, threats and assaults.
A breakdown of the statistics shows these tend to be mainly from male perpetrators and are marginally more likely to be directed at women. After I made my presentation I, too, became a target in numerous online forums and anti-Muslim hate blogs which attempted to demonise what I had to say and, in some cases, threaten me with violence. Most of those forums were taken down as soon as I reported them.
It’s become easy to indulge in racist hate crimes online, and many people take advantage of the anonymity to do so. I examined anti-Muslim hate on social media sites such as Twitter and found that the demonisation and dehumanisation of Muslim communities is becoming increasingly commonplace. My study involved the use of three separate hashtags, namely #Muslim, #Islam and #Woolwich – which allowed me to examine how Muslims were being viewed before and after Woolwich. The most common reappearing words were: “Muslim pigs” (in 9% of posts), “Muzrats” (14%), “Muslim Paedos” (30%), “Muslim terrorists” (22%), “Muslim scum” (15%) and “Pisslam” (10%). These messages are then taken up by virtual communities who are quick to amplify their actions by creating webpages, blogs and forums of hate. Online anti-Muslim hate therefore intensifies, as has been shown after the Rotherham abuse scandal in the UK, the beheading of the journalists James Foley and Steven Sotloff and the humanitarian workers David Haines and Alan Henning by the Islamic State, and the Woolwich attacks in 2013.
The organisation Faith Matters has also conducted research, following the Rotherham abuse scandal, analysing Facebook conversations from Britain First posts on August 26 2014 using the Facebook Graph API. They found some common reappearing words which included: Scum (207 times); Asian (97); deport (48); Paki (58); gangs (27) and paedo/pedo (25). A number of the comments and posts were from people with direct links to organisations such as Britain First, the English Brotherhood and the English Defence League.
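The Faith Matters analysis described above amounts to a keyword tally: fetch a batch of post and comment texts (via the Facebook Graph API, in their case) and count how often a list of charged terms recurs. A minimal sketch of that tallying step, assuming the texts have already been fetched — the posts and keyword list here are invented for illustration, not real data:

```python
import re
from collections import Counter

def keyword_counts(posts, keywords):
    """Count total occurrences of each keyword across a list of post
    texts, matching case-insensitively on whole words only."""
    counts = Counter()
    for text in posts:
        for kw in keywords:
            pattern = r"\b" + re.escape(kw) + r"\b"
            counts[kw] += len(re.findall(pattern, text, flags=re.IGNORECASE))
    return counts

# Illustrative texts standing in for fetched comments (not real data)
posts = [
    "Deport them all, every gang member.",
    "These gangs are scum. Deport now.",
]
print(keyword_counts(posts, ["deport", "gangs", "scum"]))
# "deport" is counted twice, "gangs" and "scum" once each;
# note "gang" does not match "gangs" under whole-word matching
```

Whole-word matching is a deliberate choice here: a substring count would also pick up words like "scummy" or "deportation" and inflate the figures.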
Abuse is not a human right
Clearly, hate on the internet can have direct and indirect effects on the victims and communities being targeted. On the one hand, it can be used to harass and intimidate victims; on the other, it can be used for opportunistic crimes. Few of us will forget the moment when Salma Yaqoob appeared on BBC Question Time and tweeted the following comments to her followers: “Apart from this threat to cut my throat by #EDL supporter (!) overwhelmed by warm response to what I said on #bbcqt.” The internet is a powerful tool by which people can be influenced to act in a certain way and manner. This is particularly strong when considering hate speech that aims to threaten and incite violence. This also links into the convergence of emotional distress caused by hate online, the nature of intimidation and harassment, and the prejudice that seeks to defame groups through speech intending to injure and intimidate. Sites that have been relatively successful here include BareNakedIslam and IslamExposed, which host daily forums and chatrooms about issues to do with Muslims and Islam and have a strong anti-Muslim tone – discussion begins with a particular issue, such as banning Halal meat, and then descends into strong and provocative language.
Most of this anti-Muslim hate speech hides behind a fake banner of English patriotism, but is instead used to demonise and dehumanise Muslim communities. It goes without saying that the internet is just a digital realisation of the world itself – all shades of opinion are represented, including those Muslims whose hatred of the West prompts them to preach jihad and contempt for “dirty kuffar”. Clearly, freedom of speech is a fundamental right that everyone should enjoy, but when that converges with incitement, harassment, threats of violence and cyber-bullying then we as a society must act before it’s too late. There is an urgent need to provide advice for those who are suffering online abuse. It is also important to keep monitoring sites where this sort of thing regularly crops up; this can help inform not only policy but also help us get a better understanding of the relationships forming online. This would require a detailed examination of the various websites, blogs and social networking sites, monitoring the URLs of those sites regarded as having links to anti-Muslim hate.
It is also important that we begin a process of consultation with victims of online anti-Muslim abuse – and reformed offenders – who could work together to highlight the issues they think are important when examining online Islamophobia. The internet offers an easy and accessible way of reporting online abuse, but an often difficult relationship between the police and Muslim communities in some areas means much more could be done. This could have a positive impact on the overall reporting of online abuse. The improved rate of prosecutions that might result could also help identify the issues around online anti-Muslim abuse.
© The Conversation
An Eastwood man who sent death threats to a former friend and harassed a police officer has been found guilty of five charges relating to malicious communications.
20/11/2014- Simon Tomlin, 46, of Lawrence Avenue, was convicted at Nottingham Magistrates Court today (Thursday) in his absence, after failing to attend a two-day trial. Tomlin was found guilty of criminal harassment of former friend Melony McElroy and PC Richard Reynolds, of sending a series of tweets containing grossly offensive material which referenced Ms McElroy on October 9, 2014, and of repeatedly referring to her as a ‘neo-Nazi’ on his blog, The Daily Agenda. When explaining his decisions, the magistrate said that despite Tomlin’s denial of harassment in police interviews, it was clear he deliberately caused alarm and distress and was aware that it would constitute harassment. He said of Melony McElroy: “He caused fear that she was at risk of murderous reprisals and sent matter that was grossly offensive and menacing. “It is clear from Ms McElroy’s evidence that he caused her fear and distress and I am quite sure that is what the defendant intended.” Tomlin was also convicted of sending by public communications network pictures of police officers’ private vehicles on October 5, 2014, which the magistrate described as ‘hate material’ against the police. He added: “In view of the number of followers and the nature of the website that the Facebook page was associated with, the officers whose cars were identified had every reason to fear damaging consequences.” A warrant for Tomlin’s arrest was issued and he will be sentenced at a later date.
© The Eastwood & Kimberley Advertiser
21/11/2014- A Virginia man arrested during a 2012 raid on a central Florida white supremacist compound has been sentenced to 17 ½ years in prison for threatening Florida officials. The U.S. Attorney's Office reports that a federal judge in Orlando sentenced 38-year-old William White on Friday. He was convicted in September of sending interstate threats with the intent to extort and using personal information without lawful authority in furtherance of a crime of violence. The new sentence will run consecutively with a nearly eight-year sentence he's already serving for a separate federal case out of Virginia. Authorities say the self-professed neo-Nazi sent a number of email threats to former State Attorney Lawson Lamar, Circuit Judge Walter Komanski and an FBI task force agent in May 2012. The emails included threats to recipients' family members, including children and grandchildren.
© The Associated Press
A mountain couple makes an unsettling discovery on Google Maps.
20/11/2014- Jennifer Mann and Jodi McDaniel say they've never had any problems living at their home in Canton. But, on Google Maps, instead of a street name, their driveway was labeled with a gay slur. Mann and McDaniel say the gay slur was hurtful and amounts to a hate crime. They’d like to find out who did it and take legal action. “And if I can I'm going to get legal advice about it,” Mann said. “I really don't know what to say to him other than grow up,” McDaniel said. “I have no problems with them....none. They're good neighbors,” Fay Capps said. There's an option on Google Maps to report problems like inappropriate content. Google Maps says its policy considers discrimination based on sexual orientation as a hate crime. As a result of News 13’s attention, Google removed the slur. A spokeswoman says there is a mapmaker tool where people can edit maps. She says they don't know who did it or when. But she says the gay slur slipped through their check systems, perhaps because it was such a small road-driveway. She says they'll continue investigating. McDaniel has a message for whoever did it. “Live your life and leave us alone you know. We don't bother anybody.”
A compelling argument for strong-arm tactics against those who perpetrate abuse on the net.
By Helen Fenwick
20/11/2014- This book sets forth a compelling argument that the internet should not be allowed to maintain its “Wild West” anarchic status, because its ability to facilitate cyber-bullying outweighs the virtues of maintaining that status. It argues that the virtues of the web – in particular, anonymity, which fosters truth-telling and self-expression – also translate into vices: people become de-individuated in anonymous postings, and the lack of identification fosters the refusal to conform to social norms. The result is online harassment and bullying that can take extreme forms.
Hate Crimes in Cyberspace’s main strength lies in its sustained and detailed exploration of the bizarrely convoluted, sustained and extremely hurtful nature of online abuse of individuals. Danielle Keats Citron, a legal scholar, pertinently compares the social response to online bullying (which informs the legal one) to the response to domestic violence and workplace sexual harassment in the 1970s. At that time it was thought that both could be relegated to the sphere of the private choices of women – that the responsibility lay with the woman to deal with the problem, by growing a thicker skin or by simply packing her bags and leaving. Feminist campaigns from the 1970s onwards changed that perception and triggered legal change. Citron argues that the tendency to trivialise online abuse (as frat-boy banter) and to blame the victim for failing to shrug it off is highly prevalent and is retarding the development of stronger laws and law enforcement. She makes her case successfully for changing social perceptions and creating a far more effective legal response, particularly by utilising civil rights laws.
Nevertheless, her book is somewhat selective in its approach. Its very broad title is misleading – it might easily have been titled Cyber-Based Sexual Harassment and Proposals for US Legal Reform. Clearly, that title would have been less snappy and less appealing. But it would have been more accurate. The book focuses very strongly on the harassment and denigration of women via online abuse, and this is the right approach to take, rather than focusing on the harassment of white heterosexual males, who suffer significantly less online abuse. Its pioneering research could and should be used to support the case for introducing a criminal offence of gender-based hate speech in various countries, including the UK.
However, the book only touches on abuse suffered by lesbian, gay, bisexual and transgender persons and on racial grounds, largely disregards the abuse of persons due to other characteristics, and also largely disregards group-based online hate crimes (or hate speech as hate crime). So, for example, it does not discuss Salafi/Wahhabi or Christian fundamentalist online hate speech that is aimed at gay people, which can clearly have an impact on individuals. Many such groups – as has been brutally illustrated in recent months by the actions of Islamic State – understand the impact and utility of social media all too well.
Citron’s proposals for law reform are practical but also selective. They are very US-centric – which is understandable up to a point, but also ironic, given the book’s message about the nature of cyberspace and the difficulties of prosecuting in a borderless space. International initiatives aimed at cyber-bullying could have been considered, as could examples from other countries, since for obvious reasons this is an international problem. A strong, compelling, readable exploration of this problem is proffered here, but the call for action that it represents requires a wider focus.
Hate Crimes in Cyberspace
By Danielle Keats Citron
Harvard University Press, 352pp, £22.95
Published 20 October 2014
© Times Higher Education
The report published today looks at the need for regulation specific to online harassment.
19/11/2014- The Law Reform Commission has today published a paper that aims to tackle the issue of online harassment by “trolls”. In the paper, a number of issues relating to online bullying and anonymous posting are raised – and the adequacy of the current legislation is questioned. At the moment, the law requires sustained harassment for an offence to be committed online, while one-off incidents are not considered in the same way. The paper questions the current legislation that is in place, and whether it is sufficient to deal with the new challenges posed by online abuse, particularly in relation to hate speech.
The paper asks whether there should be new legislation for instances where:
- There is a serious interference with privacy.
- Content that goes online has the potential to cause serious harm due to its international reach and permanence.
- There is no sufficient public interest in publication online.
- The accused intentionally or recklessly caused harm.
As the law stands
At the moment the law deals with online offences through legislation designed to deal with general circumstances. Online bullying is considered under the Non-Fatal Offences Against the Person Act 1997, which makes harassment – also commonly referred to as ‘stalking’ – an offence. In the issues paper it is suggested that the Prohibition of Incitement to Hatred Act 1989 could be updated in line with suggestions from the EU Commission. Such a change would bring Ireland into line with the 2008 EU Framework Decision on combating xenophobia and racism.
The ‘Issues Paper on Cyber-crime Affecting Personal Safety, Privacy and Reputation, including Cyber-bullying’ is being published as part of the Commission’s Fourth Programme of Law Reform. In it, the question of how civil law remedies can address problems arising from websites located outside the State is also considered.
Speaking on Newstalk’s Pat Kenny show, Raymond Byrne, the Director of Research at the Law Reform Commission, said: “We happen to have a lot of the big social media companies here in Dublin. We have the opportunity to do something here that is a good guide for other countries as well.” Byrne went on to point out that penalties for offences in Ireland were more severe than elsewhere. “The harassment offence carries up to seven years imprisonment, so that is pretty tough in terms of sentencing. Most of the sentences we’ve had here in terms of malicious telephone calls are already higher than the comparison in England, where the maximum is six months at the moment. They are putting that up to two years. We are already way beyond that here in Ireland,” said Byrne.
© The Journal Ireland
Victims and witnesses of racism will, as from today, be able to report abuse through a website created to address its low reporting rate and offer support.
16/11/2014- The site – reportracism-malta.org – is intended to increase the reporting of such incidents, inform individuals about the remedies available and support them through the process. It was launched by human rights think tank The People for Change Foundation and also aims to gather data to understand the reality of racism in Malta and provide evidence to inform legal and policy development in the area. Anyone who witnesses or experiences racism can fill in an online form – available in Maltese, English and French – asking questions such as where and when the incident occurred, what it consisted of and whether a police report was filed. People can also send in evidence, such as photos or footage, to back up their claims. If the person filing the report agrees to be contacted, the foundation will offer its support. This will include information, as well as help with filing official reports and following them up.
85% - the percentage of racism victims who keep quiet
“The need for such a system is clear from the high levels of incidence and low levels of reporting of racist incidents,” the foundation said in a statement. Maltese authorities receive very low numbers of racism reports. A National Commission for the Promotion of Equality report showed that 85 per cent of victims of racism keep quiet. In contrast, a report published by the European Union Agency for Fundamental Rights found that 63 per cent of Africans in Malta experienced high levels of discrimination, the second highest incidence in the EU. In addition, 29 per cent fell victim to racially motivated crime. Taken together, these figures highlight a gap between reports and incidents. This could be due to the lack of access to information and a reporting system, the foundation said, as it pointed out that a Fundamental Rights Agency report found that only 11 per cent of African immigrants in Malta knew of the existence of the National Commission for the Promotion of Equality.
“We hope that this website will promote a culture of reporting racist incidents, while developing a better understanding of the state of play of racism in Malta through the compilation of information about such incidents,” the foundation said.
© The Times of Malta
The summer war between Israel and Hamas generated an explosion of online anti-Semitic hate speech in several European countries, an international watchdog reported.
14/11/2014- The assertion came in a report on 10 European countries released Wednesday by the International Network Against Cyber Hate and the Paris-based International League Against Racism and Anti-Semitism — or INACH and LICRA respectively. In the Netherlands, the Complaints Bureau Discrimination Internet, or MDI, recorded more instances of online hate speech against Jews during the two-month conflict than during the entire six months that preceded it, revealed the report, which the groups presented in Berlin at a meeting on anti-Semitism organized by the Organization for Security and Co-operation in Europe, or OSCE.
More than half of the 143 expressions of anti-Semitism documented by MDI in July and August, when Israel was fighting Hamas in Gaza, contained incitements to violence against Jews, the report stated. Roughly three quarters of the complaints documented in that period occurred on social media. In Britain, the Community Security Trust recorded 140 anti-Semitic incidents on social media from January to August, with more than half occurring in July alone. And in Austria, the Forum against Antisemitism recorded 59 anti-Semitic incidents online during the conflagration of violence between Israel and Palestinians — of which 21 included incitements to violence — compared to only 14 incidents in the six months that preceded it.
The data on online anti-Semitic incidents corresponded with an increase in real-life assaults, LICRA and INACH wrote. The report’s recommendations included a submission by the Belgian League Against Anti-Semitism, which called for OSCE member states to adopt the “Working Definition of Anti-Semitism” that the European Union’s agency for combating xenophobia enacted in 2005 but later dropped. The definition includes references to the demonization of Israel.
© JTA News.
14/11/2014- Is there such a thing as a Facebook murder? Is it different from any other murder? Legally, it can be. From a common sense point of view, there is no 'hate crime' status that should make a murder worse if a white person kills a Latino person or a Catholic instead of a white person or a Protestant, but legally such crimes can be considered more heinous and get a special label of hate crime. But social media is ubiquitous and criminal justice academics are always on the prowl for new categories to create and write about, so a 'Facebook Murder', representing crimes that may somehow involve social networking sites and thus be a distinct category for sentencing, has been postulated.
Common sense should prevail, says Dr. Elizabeth Yardley, co-author of a paper on the subject in the Howard Journal of Criminal Justice. Yes, perpetrators had used social networking sites in the homicides they had committed, but the cases in which those were identified were not collectively unique or unusual when compared with general trends and characteristics - certainly not to a degree that would necessitate the introduction of a new category of homicide or a broad label like 'Facebook Murder'.
"Victims knew their killers in most cases, and the crimes echoed what we already know about this type of crime," said Yardley. "Social networking sites like Facebook have become part and parcel of our everyday lives and it's important to stress that there is nothing inherently bad about them. Facebook is no more to blame for these homicides than a knife is to blame for a stabbing--it's the intentions of the people using these tools that we need to focus upon." So banning guns or Facebook would not prevent murders any more than banning spoons would prevent people from getting fat. The justice system will be happy not to have another set of arcane guidelines to follow.
By Monica Dux
14/11/2014- Everyone has a right to be a bigot, or so George Brandis insists. But does that mean we're also obliged to put up with bigots on Facebook? We hear a lot about trolls and online bullying, but what if the problem is not an anonymous hater but someone you know? Perhaps even a member of your own family? My friend Claudia recently wrestled with this question after she reconnected with a distant cousin via Facebook. Friendly messages were exchanged - reminiscences about eccentric relatives and long-ago family Christmases. There was even reckless talk, as there so often is on Facebook, of meeting up in person. Then the racist posts started appearing in Claudia's feed: rants about refugees rorting the welfare system, people who come to this country but don't bother to learn English, and burqa-wearing housewives plotting to take over Parliament.
Feeling that she could not let this pass unchallenged, Claudia commented on one of the posts, calling it out as offensive rubbish. In a sense this had the desired effect in that the racist posts stopped appearing in her feed. But they had not disappeared because Claudia's cousin had seen the error of her ways. Claudia had simply been unfriended. One of the things I like about Facebook, when I like it at all, is its plurality. In the real world most of us socialise with a relatively small cohort of like-minded people. By contrast, on Facebook, we typically rub digital shoulders with a far more diverse collection of "friends", from life-long pals to some guy you met briefly at a party and have never seen again, although you are regularly updated on what he's having for breakfast.
With such a varied collection of people, your Facebook feed will inevitably contain many posts that you don't agree with. When this happens, you might choose to engage in friendly online debate, or you can just let it pass, huffing and puffing in the silence of the real world. But things get trickier when the opinions being expressed don't just offend your sensibilities or your political leanings but challenge your concept of basic human decency. If we choose to ignore repugnant, racist views, don't we become complicit? We're told that the only thing necessary for evil to thrive is for good people to remain silent. But if we are morally obliged to speak up in the face of bigotry, are we not under an equal obligation to post?
After all, challenging racism is far easier on Facebook than in the real world. When you're at your family Christmas and Uncle Bob starts to sound off about how Australia ought to be reserved for Australians, calling him out as a disgraceful racist will probably mean that Christmas is ruined, everyone goes home angry and you'll all have to drink even more next year to get through the ordeal. On the other hand, at least Bob's racism will have been publicly debunked. Or will it? Perhaps the real reason so many of us hesitate to slam the Uncle Bobs of this world is not a cowardly desire to avoid conflict but an understanding that doing so will achieve nothing, aside from making you feel good about your own moral righteousness. For whatever you might say, Bob's mind will probably not be changed.
Obviously it is important to speak up to institutional racism, such as that evidenced in our government's draconian treatment of asylum seekers. Similarly, calling out and critiquing the drivel expressed by people with an influential public voice, such as shock jocks, is vital. But what about the unanalysed racism of people like Claudia's cousin, which is so often born of ignorance and disempowerment? People with little education, or radically different life experiences, who have been encouraged by a dog-whistling government to focus their fears and frustrations on vulnerable groups within our society? This kind of bigotry has many and varied roots, and it'll take a lot more than a withering comment on their Facebook page to dig them out.
Social media is often criticised for creating a false sense of intimacy, while actually distancing people from genuine, meaningful interaction. But perhaps this distance is sometimes a positive. Stepping back and being a mere bystander, a witness, can provide you with a valuable opportunity to see how others think, acting as a reminder that the world is filled with people who hold views radically different from your own. And tackling those ideas will require far more than simply clicking the unfriend button.
© The Sydney Morning Herald
Law enforcement professionals throughout the US are increasingly leveraging social media to assist in crime prevention and investigative activities, according to a new study released by LexisNexis Risk Solutions.
13/11/2014- The LexisNexis 2014 Social Media Use in Law Enforcement report solicited feedback from 496 participants at every level of law enforcement—from rural localities to major metropolitan cities and federal agencies—to examine the law enforcement community’s proclivity to use social media for crime investigation and prevention. The study, a follow-up to an initial study conducted in 2012, found that eight out of 10 law enforcement professionals are actively using social media for investigations, with 25 percent using social media on a daily basis. “The benefits of social media from an information-gathering and community outreach perspective became very evident during the subsequent investigations of the Boston Marathon bombings and the Washington Navy Yard tragedy,” said Rick Graham, Law Enforcement Specialist, LexisNexis Risk Solutions and former Chief of Detectives for the Jacksonville (Fla.) Sheriff’s Office. “It is imperative that agencies invest in formal social media investigative tools, provide formal training, develop or amend current policies to ensure investigators and analysts are fully armed to more effectively take advantage of the power social media provides.”
Use of social media by law enforcement grew in 2014 and the upward trend is likely to continue. Over three-quarters of respondents indicate plans to use social media even more in the next year. Moreover, the value of social media in helping solve crimes more quickly and assisting in crime anticipation is increasing. 67 percent of respondents believe that social media is a valuable tool for anticipating a crime. Law enforcement officials cited a number of real-world examples in which social media helped thwart impending crime, from stopping an active shooter to tracking gang behavior. Although social media use among law enforcement personnel is high and is likely to continue to grow, few agencies have adopted formal training in the use of social media to boost law enforcement efforts. In fact, there has been a decrease in formal training since the 2012 study, with most law enforcement personnel indicating that they are self-taught. “Lack of access to social media channels is the single biggest driver for non-use and has increased from 2012. Whereas, lack of knowledge has decreased significantly as a reason for not using social media,” states the study.
Fortunately, although agency support of social media training for law enforcement officials remains low, three quarters of law enforcement professionals are very comfortable using social media, showing a seven percent increase over 2012 despite a decrease in availability of formal training. As law enforcement personnel become more comfortable and familiar with social media tools, they are increasingly discovering new and effective ways to utilize it in criminal investigations. For instance, one law enforcement respondent used Facebook to discover criminal activity and obtain probable cause for a search warrant. “I authored a search warrant on multiple juveniles’ Facebook accounts and located evidence showing them in the location in commission of a hate crime burglary. Facebook photos showed the suspects inside the residence committing the crime. It led to a total of six suspects arrested for multiple felonies along with four outstanding burglaries and six unreported burglaries,” said one respondent. Another law enforcement official achieved success in using social media to identify networks of criminals, by using Facebook to identify suspects that were friends or associates of other suspects in a crime. “My biggest use for social media has been to locate and identify criminals,” the respondent stated. “I have started to utilize it to piece together local drug networks.”
Law enforcement officials have also used social media to collect evidence, identify witnesses, conduct geographic searches, identify criminals and their locations, and raise public safety awareness by posting public service announcements and crime warnings to Facebook. “As personnel become even more familiar and comfortable using it, they will continue to find robust and comprehensive ways to incorporate emerging social media platforms into their daily routines, thus yielding additional success in interrupting criminal activity, closing cases and ultimately solving crimes,” the report concluded.
Editor's note: Also read the report, The Rise of Predictive Policing Using Social Media, in the Oct./Nov. issue of Homeland Security Today.
© Homeland Security Today
A copyright claim on the "Innocence of Muslims" will be reviewed by the full 9th Circuit Court of Appeals.
12/11/2014- A federal Appeals Court on Wednesday agreed to reconsider its decision to order Google to take down an anti-Islam propaganda film that was linked to the 2012 Benghazi attack. Earlier this year, a three-judge panel sided with Cindy Lee Garcia, who sued Google for infringing on her copyright by hosting the video—titled Innocence of Muslims—on YouTube. The actress argued that she was fooled into appearing in the video after following up on an ad posting purporting to be for another movie. The video was taken down following the decision. Now, the full U.S. Court of Appeals for the 9th Circuit will review that decision, and the three-judge panel's ruling will not hold precedent in the full Court's review. Garcia originally had her case dismissed by a trial judge.
The case presents a thicket of thorny issues, including a debate over the balance between copyright protections and free speech in the Internet age. Open-Internet activists and several tech companies argued that the February ruling facilitates overly burdensome copyright limits. Facebook, Twitter, Yahoo, eBay, and Netflix have all supported Google's position. "This is a very welcome decision," said Corynne McSherry, intellectual-property director at the Electronic Frontier Foundation. "The court's ruling was mistaken as a matter of law and a terrible precedent for online free speech. What happened to Cindy Garcia was truly shameful, but the 9th Circuit took a bad situation and made it worse." And the tensions over the case are ratcheted up by the video's controversial nature—as well as its connection to the September 2012 attack on the U.S. consulate in Benghazi.
According to an extensive New York Times piece published last December, the video partially contributed to the violence, in which four Americans were killed. "Contrary to claims by some members of Congress, [the violence] was fueled in large part by anger at an American-made video denigrating Islam," according to The Times. The role of the video is hotly debated, and many conservatives accuse the Obama administration of overstating its impact to deflect attention from a terrorist attack in the run-up to the 2012 presidential election. Earlier this year, a second actor in the film, Gaylor Flynn, filed a separate lawsuit also arguing that Google had reproduced his performance without consent.
© The National Journal
Joanne St. Lewis case is just one that shows how internet easily spreads racist message.
12/11/2014- When Joanne St. Lewis wrote a critical evaluation of a student racism project, she could not have known the grief it would cause. And certainly not the years it would take to finally erase the racial slur that accompanied her name in every online search. It began six years ago, and continues today in spite of an Ontario Superior Court decision in June. The court found that an Ottawa blogger had defamed St. Lewis by attaching to her name a racial epithet meaning "sell out," a term stemming from the black slave experience. St. Lewis, a University of Ottawa law professor, has taken steps most would find daunting: going to court, winning a decision and now fighting an appeal. "It's extremely expensive. It’s difficult. It’s imperfect. It’s painful. And it may not always even remotely be an opportunity or a remedy for someone," she said. But for St. Lewis, standing up against the slur, written in a blog and repeated by others, was a matter of duty and dignity. "If it is my fate to be the first black Canadian so publicly defiled, then it is my hope to be the last. It was essential that no other suffer as I have," she wrote after a jury found the words used against her were defamatory. In accordance with the court’s decision, the blog post has been removed from the internet, but the term can still be found in Google searches of her name. St. Lewis was also awarded $350,000 in damages. "I think there’s a recklessness, a casual cruelty, a complete indifference and egotism that the internet permits," she said in an interview with CBC News. "What it seems to do is allow people to be bullies and behave like feral pack animals on the internet to target activists."
Researcher tries to quantify online racism
There is little research to quantify the extent of online racism in Canada. Irfan Chaudhry is trying to change that. A PhD candidate at the University of Alberta, Chaudhry is tracking Twitter for terms that would be considered racist and offensive. With Twitter, Chaudhry is able to look at racist terms and references, and which cities they originate from. Specifically, he looked at Edmonton, Winnipeg, Calgary, Vancouver, Montreal and Toronto. He chose those cities because in 2010 they reported some of the highest rates of hate crime in the country. His three-month study found about 750 instances of what he considered overt racism. "People were tweeting about things that you’d probably want to have left in your mind," he explains. He cites examples such as people boarding a bus or plane and tweeting: "About to board, stuck beside a --- and a --- #thanks." Other cases were far more direct. "It was someone saying ‘I hate’ and then insert racialized group here." He found those sorts of statements were more likely directed at aboriginal populations in Winnipeg and Edmonton, while in Toronto and Montreal, racist comments were largely aimed at people of colour. "When you break down the amount of tweets... it kind of reflected different demographic patterns," he notes.
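The kind of city-level tally Chaudhry describes can be sketched in a few lines. The tracked terms, city labels and sample tweets below are hypothetical placeholders, not his actual dataset or search terms; the sketch only illustrates the general approach of counting keyword matches per origin city.

```python
from collections import Counter

# Placeholder search terms standing in for the racist terms tracked in the study.
TRACKED_TERMS = {"term_a", "term_b"}

def count_by_city(tweets):
    """Tally tweets containing any tracked term, keyed by city of origin.

    tweets: iterable of (city, text) pairs.
    Returns a Counter mapping city -> number of matching tweets.
    """
    counts = Counter()
    for city, text in tweets:
        lowered = text.lower()
        if any(term in lowered for term in TRACKED_TERMS):
            counts[city] += 1
    return counts

# Hypothetical sample: two matching tweets from one city, none from the other.
sample = [
    ("Edmonton", "something containing term_a"),
    ("Toronto", "a harmless tweet"),
    ("Edmonton", "TERM_B appears here too"),
]
```

A real study would of course need far more care (geolocation, sarcasm, quoted slurs, demographic normalisation), which is presumably why Chaudhry reviewed instances manually before counting them as overt racism.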
In Thompson, Manitoba, a community with a large aboriginal population, a local newspaper was forced to shut down its Facebook page in response to a large number of racist comments. Lynn Taylor, general manager of the Thompson Citizen, said racist sentiments have long simmered in the community, but recently surfaced online. The tipping point came when someone posted a photoshopped picture showing the front of the newspaper’s building with racist comments painted over it. She hopes to reopen the site next year, with better monitoring of comments before they are posted. Other media outlets, including the CBC, closely monitor or disable comments to minimize the risk of racist material being posted. St. Lewis said part of the problem is the medium itself. "It allows people to behave in a way that if they did it in the bricks and mortar universe amongst flesh and blood people, we know it’s not acceptable. We know there’s legal consequence. But somehow, that piece of being virtual, that piece of being on the internet seems to give this incredible permission," she said.
© CBC News
A British lawmaker complained of abuse. Suddenly, the abuse stopped.
12/11/2014- Luciana Berger, a member of British Parliament, has been receiving a stream of anti-Semitic abuse on Twitter. It only escalated after a man was jailed for tweeting her a picture with a Star of David superimposed on her forehead and the text "Hitler was Right." But over the last few weeks, the abuse began to disappear. Her harassers hadn’t gone away, and Twitter wasn't removing abusive tweets after the fact, as it sometimes does, or suspending accounts as reports came in. Instead, the abuse was being blocked by what seems to be an entirely new anti-abuse filter.
For a while, at least, Berger didn’t receive any tweets containing anti-Semitic slurs, including relatively innocuous words like "rat." If an account attempted to @-mention her in a tweet containing certain slurs, it would receive an error message, and the tweet would not be allowed to send. Frustrated by their inability to tweet at Berger, the harassers began to find novel ways to defeat the filter, like using dashes between the letters of slurs, or pictures to evade the text filters. One white supremacist site documented various ways to evade Twitter’s censorship, urging others to "keep this rolling, no matter what."
In recent months, Twitter has come under fire for the proliferation of harassment on its platform—in particular, gendered harassment. (According to the Pew Research Center, women online are more at risk from extreme forms of harassment like "physical threats, stalking, and sexual abuse.") Twitter first implemented the ability to report abuse in 2013, in response to the flood of harassment received by feminist activist Caroline Criado-Perez. The recent surge in harassment has again resulted in calls for Twitter to "fix" its harassment problem, whether by reducing anonymity, or by creating better blocking tools that could mass-block harassing accounts or pre-emptively block recently created accounts that tweet at you. (The Blockbot, Block Together, and GG Autoblocker are all instances of third-party attempts to achieve the latter.) Last week, the nonprofit Women, Action, & the Media announced a partnership with Twitter to specifically track and address gendered harassment.
While some may welcome the mechanism deployed against Berger’s trolls as a step in the right direction, the move is troubling to free speech advocates. Many of the proposals to deal with online abuse clash with Twitter’s once-vaunted stance as "the free speech wing of the free speech party," but this particular instance seems less like an attempt to navigate between free speech and user safety, and more like a case of exceptionalism for a politician whose abuse has made headlines in the United Kingdom. The filter, which Twitter has not discussed publicly, does not appear as if it's intended to be a universal fix for harassment that is experienced by less-important users on the platform, such as the women targeted by Gamergate. Prior to the filter being activated, Luciana Berger and her fellow MP, John Mann, had announced plans to visit Twitter’s European Headquarters, to talk to higher-ups about the abuse. Parliament is currently discussing more punitive laws against online trolling, including a demand from Mann for a way to ban miscreants from "specific parts of social media or, if necessary, to the Internet as a whole."
In a letter to Berger that is quoted in part here, Twitter’s head of global safety outreach framed efforts over the past year as including architectural solutions to harassment. "Our strategy has been to create multiple layers of defense, involving both technical infrastructure and human review, because abusive users often are highly motivated and creative about subverting anti-abuse mechanisms." The letter goes on to describe known mechanisms, like the use of "signals and reports from Twitter users to prioritize the review of abusive content," and hitherto unknown mechanisms like "mandatory phone number verification for accounts that indicate engagement in abusive activity." However, the letter says nothing about a selective filter for specific words. To achieve that result, the company appears to have used an entirely new tool outside of its usual arsenal. A source familiar with the incident told us, "Things were used that were definitely abnormal."
A former engineer at Twitter, speaking on the condition of anonymity, agreed, saying, "There’s no system expressly designed to censor communication between individuals. … It’s not normal, what they’re doing." He and another former Twitter employee speculated that the censorship might have been repurposed from anti-spam tools—in particular, BotMaker, which is described here in an engineering blog post by Twitter. BotMaker can, according to Twitter "deny any Tweets" that match certain conditions. A tweet that runs afoul of BotMaker will simply be prevented from being sent out—an error message will pop up instead. The system is, according to a source, "really open-ended" and is frequently edited by contractors under wide-ranging conditions in order to effectively fight spam.
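Twitter has not described the mechanism, but a deny-on-match pre-send filter of the sort reported here, combined with a normalisation step to catch the dash-separated evasions the harassers turned to, could look roughly like the following sketch. Everything in it is hypothetical: the blocked terms, the protected handle, and the function names are illustrative placeholders, not Twitter's actual blacklist or internal API.

```python
import re

# Hypothetical placeholders: terms to deny, and the account being protected.
BLOCKED_TERMS = {"slur1", "slur2"}
PROTECTED_HANDLES = {"@protected_user"}

def normalise(text: str) -> str:
    """Lower-case the text and strip separators (dashes, dots, underscores,
    whitespace) that are commonly inserted to evade simple keyword matching."""
    return re.sub(r"[-_.\s]+", "", text.lower())

def should_block(tweet: str) -> bool:
    """Deny the tweet before it is sent if it @-mentions a protected
    account and, after normalisation, contains a blocked term."""
    if not any(h in tweet.lower() for h in PROTECTED_HANDLES):
        return False
    flattened = normalise(tweet)
    return any(term in flattened for term in BLOCKED_TERMS)
```

Note that even this evasion-aware version is trivially defeated by misspellings or images, which matches the article's account of the filter being an arms race rather than a fix.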
When asked whether a new tool had been used, or BotMaker repurposed, a Twitter spokesperson replied: "We regularly refine and review our spam tools to identify serial accounts and reduce targeted abuse. Individual users and coordinated campaigns sometimes report abusive content as spam and accounts may be flagged mistakenly in those situations." It’s not clear whether this filter is still in place. (I attempted to test it with "rat," the only word that I was willing to try to tweet, and my tweet did go through. The filter may have been removed, the word "rat" may have been removed from the blacklist, or the filter may have only been applied to recently created accounts). It’s hard to shed a tear for a few missing slurs, but the way they were censored is deeply alarming to free speech activists like Eva Galperin of the Electronic Frontier Foundation. "Even white supremacists are entitled to free speech when it’s not in violation of the terms of service. Just deciding you’re going to censor someone’s speech because you don’t like the potential political ramifications for your company is deeply unethical. The big point here is that someone on the abuse team was worried about the ramifications for Twitter. That’s the part that’s particularly gross."
What’s worrisome to free speech advocacy groups like the EFF about this incident is how quietly it happened. Others may see the bigger problem being the fact that it appears to have been done for the benefit of a single, high-profile user, rather than to fix Twitter’s larger harassment issues. The selective censorship doesn’t seem to reflect a change in Twitter abuse policies or how they handle abuse directed at the average user; aside from a vague public statement by Twitter that elides the specific details of the unprecedented move, and a few, mostly-unread complaints by white supremacists, the entire thing could have gone unnoticed. Eva Galperin thinks incidents like these could be put in check by transparency reports documenting the application of the terms of services, similar to how Twitter already puts out transparency reports for government requests and DMCA notices. But while a transparency report might offer users better information as to how and why their tweets are removed, some still worry about the free-speech ramifications of what transpired. One source familiar with the matter said that the tools Twitter is testing "are extremely aggressive and could be preventing political speech down the road." He added, "are these systems going to be used whenever politicians are upset about something?"
© The Verge
UK prime minister David Cameron has called for “extremist material” to be taken offline by governments, with help from network operators.
14/11/2014- Speaking in Australia's Parliament on a trip that will also see him attend the G20 leaders' summit, Cameron spoke of Australia and Britain's long shared history, common belief in freedom and openness and current shared determination to fight terrorism and extremism. Cameron said [PDF] poverty and foreign policy are not the source of terror. “The root cause of the challenge we face is the extremist narrative,” he said, before suggesting bans on extremist preachers, an effort to “root out” extremism from institutions and continuing to “celebrate Islam as a great world religion of peace.”
He then offered the following comment:
“A new and pressing challenge is getting extremist material taken down from the internet. There is a role for government in that. We must not allow the internet to be an ungoverned space. But there is a role for companies too. In the UK, we are pushing them to do more, including strengthening filters, improving reporting mechanisms and being more proactive in taking down this harmful material. We are making progress, but there is further to go. This is their social responsibility, and we expect them to live up to it.” Cameron's remarks have a strong whiff of a desire to extend state oversight of the internet. The UK already prohibits “Dissemination of terrorist publications” under Part 1, Section 2 of the Terrorism Act 2006. The country also operates a plan to reduce hate crime, in part by removing hate material found online.
A May 2014 report [PDF] on that plan's progress notes difficulties securing co-operation from ISPs and social networks, especially those outside the UK. Security is not on the G20 agenda, but what the leaders choose to discuss around the table is fluid. Might the sentence “We must not allow the internet to be an ungoverned space” therefore be an attempt to steer talks in the direction of international co-operation around internet regulation? The summit runs over the weekend and Vulture South has accreditation to the event, meaning we can get our hands on any communiqués the leaders emit. Most of the output of such events is negotiated in advance, but we'll keep an eye on things in case Cameron's thought bubble expands and also because a major initiative to combat multinational tax avoidance is expected to be one of the event's highlights.
© The Register
Many observers were encouraged to see Manchester City midfielder Yaya Toure speak out via the BBC last week against those who had racially abused him over Twitter just hours after he had reactivated his account.
11/11/2014- As one of the sport's most high-profile figures, it felt as if the Ivory Coast international had made a stand on behalf of an ever-growing number of similar victims in the game - because Toure is far from alone in being subject to such treatment. Already this season, Liverpool striker Mario Balotelli has been racially abused over the internet after he made fun of Manchester United following their defeat to Leicester City. Last year I interviewed former footballer and Professional Footballers' Association (PFA) chairman Clarke Carlisle at his house. He showed me his laptop and the torrent of vile racial abuse he had received via Twitter, abuse he did not want his wife or children to see, and which had left him feeling numb. And all because he had been commentating on a match that week on TV. Last season, 50% of all complaints about football-related hate crime submitted to anti-discrimination organisation Kick It Out (KIO) related to social media abuse. So severe is the problem, KIO now employs a full-time reporting officer whose job is to act on such incidents and refer them to the relevant authorities. Greater Manchester Police are investigating the Toure case, but don't be too surprised if no-one is ever punished. The anonymity users can gain on social media can make it very difficult to track down offenders.
But Kick It Out is also frustrated by what it feels is a lack of co-ordination between the police and Twitter and the need for better communication between the two. It feels there needs to be more education for local police forces on the misuse of social media and how complaints are dealt with. In some cases, KIO says, it has made a report but has not heard back from the police, something one source there described as "very disheartening". In addition, KIO wants more clubs to be proactive in coming out to publicly support their players when they are the victims of discrimination online, by calling upon the authorities to work closely with the relevant platform to investigate and track down the offenders. Such concerns are nothing new. Accounts with false identities often mean the police need Twitter to provide them with an IP address for the account if they hope to find the culprits. The Association of Chief Police Officers (ACPO) has said that Twitter only provides this information with a US court order, something which is difficult to get because of the value and protection afforded there to freedom of speech.
Elsewhere - like in the UK - it is optional, although Twitter insists it is co-operating with law enforcement here more than ever before. During the first half of 2014, Twitter received 78 account information requests, 46% of which resulted in some information being produced, the highest proportion to date. It says it has made it easier for users to report malicious posts, claims it has become more vigilant in blocking offensive tweeters, and is developing technology that prevents barred trolls from simply opening up a new account.
Progress being made
The police insist important progress is being made, and that platforms are now beginning to appreciate the responsibility they have for what is posted on their networks. Last year, following a long legal battle in France when prosecutors argued Twitter had a duty to expose wrong-doers, the site agreed to hand over details of people who allegedly posted racist and anti-Semitic abuse. Although that set an important precedent, Twitter admits it could do better. Earlier this year it promised to change its policies after Robin Williams's daughter Zelda was targeted by trolls following his suicide. But there are signs that such abuse will not be tolerated.
In 2012 Liam Stacey, from Swansea, received a prison sentence after racially abusing former Bolton player Fabrice Muamba on Twitter. Last year, a man who admitted sending racist tweets to two footballers was ordered to pay £500 compensation to each of them. And police were heartened last month when a Nazi sympathiser was jailed for four weeks for sending anti-Semitic tweets to Jewish MP Luciana Berger. But these cases, of course, while dissuading some, will not prevent further incidents from occurring. Twitter admits it is impossible to monitor all of the 500 million or so postings going through its networks each and every day. One expert I spoke to told me that some of the cases the UK media has picked up on would simply not register in the US, where such abuse is often disregarded and denied the publicity some trolls crave. Others will insist that it is absolutely right that such vitriol is exposed and condemned.
Paul Giannasi, hate crime lead officer at ACPO, said the challenge was huge, but efforts to combat the problem were constantly evolving. ACPO sits on an international cyber-hate working group led by the US-based Anti-Defamation League. This group brings parliamentarians, professionals and community groups together with industry leaders to help find solutions that balance protection from offensive comments with the right to free speech. "The police will draw on the guidelines issued by the Director of Public Prosecutions and The College of Policing to assess whether the threshold for communications which are grossly offensive, indecent, obscene or false is met. The CPS guidance is very clear that a high threshold applies in these cases. We encourage officers to work with the CPS at an early stage of an investigation to determine whether proceeding with a prosecution is in the public interest." Certainly, with its tradition of rivalry and tribal passions, football seems particularly vulnerable to the dark side of social media.
Twitter and other platforms have enabled fans and the players they idolise to get closer than ever. Amid the anodyne world of bland footballer interviews, it is refreshing that players' true emotions and opinions can often be glimpsed online even if sometimes it results in them being fined. But it also enables a sad and cowardly minority to abuse and insult in a way that would never be tolerated - and that they would never dare to - in a public, physical place. Amid unprecedented interest and media exposure, footballers can be followed by millions of supporters. This makes them an attractive target for the trolls who crave attention through a retweet, and seek maximum impact from their messages of hate. The question is how to tackle them without endangering the freedom that makes social media such a special place to so many.
How Twitter tackles abuse
Over the past year it has expanded the number of people working on abuse reports, providing 24/7 cover. It has invested in technology to make it harder for serial abusers to create accounts and perpetuate abusive behaviour. It has worked with the Safer Internet Centre and charities that specialise in developing strategies to counter hate speech.
© BBC News
11/11/2014- In the wake of a series of terrorist "run over" attacks, in which Israeli pedestrians have been mowed down by Palestinian terrorists, more than 90 Facebook pages glorifying the attacks and urging more violence against Israeli civilians have been identified. The social media campaign uses the Arabic term "Daes" (run-over), a play on the word "Daesh" (ISIS), and praises the attacks as a form of resistance, according to the Anti-Defamation League. Some of the posts on these pages describe the "run-overs" as part of a new revolution, a form of "car Intifada." Many of the pages also enable users to give vent to expressions of violent anti-Semitism. "This campaign is the latest example of how social media is being used to promote and glorify terrorism and anti-Semitism," said Abraham H. Foxman, ADL National Director. "Social media platforms were not created to spread anti-Semitism and terrorism to the masses." The campaign is also starting to spread on Twitter, according to ADL. The "Daes" hashtag has attracted numerous terrorist sympathizers. Several pages include anti-Semitic posts depicting religious Jews with hooked noses running away from vehicles attempting to run over them. ADL is in the process of notifying the social media companies about the accounts promoting the campaign.
The hugely popular game Clash of Clans gives its players scope for antisemitism. Among its millions of players are groups that call themselves 'holocaust', for example. Players also come up with provocative antisemitic captions.
5/11/2014- In Clash of Clans, various clans do battle. Clans are made up of at most 50 players, who fight players from other clans. A search by BNR Nieuwsradio found at least 45 clans calling themselves 'holocaust'. Other names used include 'jew raiders' and 'we kill jews'. Some captions used are 'we burn jews for fun' and 'Anne Frank was easy to find'. Many games, such as World of Warcraft, try to prevent this kind of behaviour by employing moderators who police illegal or offensive conduct by players. It is not clear whether Supercell, the Finnish game development company behind Clash of Clans, does this as well. Clash of Clans was launched in 2012. Supercell responded by email, saying that it is not possible to prevent antisemitic expressions entirely, given the millions of people who play its games: "We will close down clans that use abusive language when we see it happening."
© BNR (Dutch)
Labour leader condemns spike in antisemitic attacks, and calls on social media sites to do more to identify online trolls.
4/11/2014- A recent spike in antisemitic attacks should serve as a “wake-up call” for anyone who thinks the “scourge of antisemitism” has been defeated in Britain, Ed Miliband warned on Tuesday. In a post on his Facebook page, the Labour leader called for a “zero-tolerance approach” to antisemitism and said that some Jewish families had told him they felt scared for their children. Miliband intervened after the Community Security Trust, which provides training for the protection of British Jews, recorded a 400% increase in antisemitic incidents in July this year compared with the same month in 2013. The Labour leader highlighted what he described as “shocking attacks” on Luciana Berger, the shadow public health minister, and Louise Ellman, the chair of the Commons transport select committee. The two senior Labour MPs, who are Jewish, were targeted by antisemitic trolls after a man was jailed for four weeks after he admitted sending what Miliband described as a “vile” tweet.
The Jewish Chronicle reported that Garron Helm was jailed after tweeting a photograph of Berger superimposed with a yellow star - as used by the Nazis to identify Jews during the war. Miliband called on social media sites to do more to identify the perpetrators. He wrote: “There have been violent assaults, the desecration and damage of Jewish property, antisemitic graffiti, hate-mail and online abuse. The shocking attacks on my colleagues Luciana Berger and Louise Ellman have also highlighted the new channels by which antisemites spread their vile views. That is why it is vital that Twitter, Facebook and other social media sites do all they can to protect users and crack down on the perpetrators of this sickening abuse.” He said that the rise in attacks took place during the recent conflict between Israel and Hamas in Gaza and that it was important to be temperate in discussing Israel.
“More than half of the anti-Semitic incidents recorded by the CST in July involved direct reference to the conflict and the previous highest number of monthly incidents recorded by CST (January 2009) also coincided with a period of fighting between Israel and Hamas. We need to tackle this head on because I am clear that this can never excuse antisemitism, just as conflicts elsewhere in the Middle East can never justify Islamophobia. All of us need to use calm and responsible language in the way we discuss Israel, especially when we disagree with the actions of its government. A zero-tolerance approach to anti-Semitism and prejudice in all its forms here in Britain will go hand-in-hand with the pursuit of peace in the Middle East as a key focus of the next Labour government’s foreign policy.” Miliband, who is Jewish, was recently criticised by the actor Maureen Lipman after he voted in favour of recognising Palestinian statehood.
In an article in Standpoint, Lipman wrote: “Just ... when our cemeteries and synagogues and shops are once again under threat. Just when the virulence against a country defending itself, against 4,000 rockets and 32 tunnels inside its borders, as it has every right to do under the Geneva convention, had been swept aside by the real pestilence of IS, in steps Mr Miliband to demand that the government recognise the state of Palestine alongside the state of Israel.” The New York Times recently reported on Miliband’s vote in favour of Palestinian statehood under the headline: British Labour Chief, a Jew Who Criticizes Israel, Walks a Fine Line. Its London correspondent Stephen Castle wrote: “Britain’s center-left Labour Party often sympathizes instinctively with the Palestinian cause, and Mr Miliband is not the first party leader to criticize Israel. Yet his willingness to speak about his family’s story and connections to Israel – showcased in a high-profile visit there this year – has brought a personal dimension to a loaded issue.”
© The Guardian
A 33-year-old minor hockey coach from Langley, B.C. has been fired after posting a series of shocking Nazi propaganda images to his Facebook page.
5/11/2014- Christopher Maximilian Sandau coached players in North Delta before league officials were alerted to his posts, some of which question the Holocaust death toll and suggest prisoners at the Auschwitz concentration camps were well-cared for. Another post features a swastika and reads, "If this flag offends you, you need a history lesson." The North Delta Minor Hockey Association issued a statement confirming Sandau was let go over the weekend and condemning the material he shared online. "The posts contained extreme and objectionable material believed to be incompatible with an important purpose of our Minor Hockey Association: To promote and encourage good citizenship," president Anita Cairney said in a statement. "The NDMHA requires that our coaches present themselves as positive role models for our children athletes." The association said it won't be commenting further on the advice of its legal counsel, but that alternative coaching arrangements have already been made. On Wednesday, Sandau told CTV News he's been treated unfairly. The former coach said he was passionate about his job, and gave his players extra practice time every week free of charge.
“I was doing a good job and I wasn’t trying to impose my political beliefs or anything on anyone,” he said. “From the time I stepped onto the parking lot of the arena to the time I left, I was all about hockey and trying to help the kids get better.” Sandau acknowledged his opinions are likely to offend people, but insisted he’s not a neo-Nazi, merely a “history buff” who believes German atrocities during World War II have been misconstrued, or fabricated altogether. Apparent hostility toward Jewish people is a recurring theme in his posts, however. One features the image of a World War II soldier, claiming he was killed “so the Jews could control your banks,” and “so foreigners could run your civil and public services.” Asked about the post, Sandau conceded that “it does generalize a little too much, obviously,” and said he might consider taking that one down. Sandau said he was given a chance to keep his job by changing his Facebook settings and making his posts private, but turned it down on principle. Parents with children on either of the two North Delta minor hockey teams Sandau coached have been informed of his dismissal.
© CTV News
Jamie Bartlett explains why the battle for hearts and minds has moved online
4/11/2014- The head of GCHQ has warned that firms such as Facebook and Twitter are "in denial" about the use of their sites by terrorists and criminals. And he's right: extremists of all kinds have indeed "embraced the web". This is only natural. The battle for hearts and minds is a vital part of any conflict. To be seen as on the side of right; to create a groundswell of popular support; to reach new supporters. Whether it’s Isil or the extreme Right, the aim is to convince people to take your side. If not on the battlefield itself, then emotionally, morally, vocally, financially – and now, digitally. This battle used to be waged from on high: propaganda air dropped from governments and media broadcasters. Now it’s on Facebook and Twitter.
It barely needs saying that social media has been a boon to society – allowing anyone with a message or campaign to reach out to millions of people at almost zero cost. That includes charities, campaigning groups, political dissidents, and the rest. But for angry or violent groups social media is the perfect vehicle to spread a message and win new fans: a free and open way to share and disseminate propaganda to millions of people. What’s more, the cost of producing high-quality videos and multimedia content is now practically nothing. This means that small groups can exaggerate their influence and extend their reach more easily than ever before. And that’s exactly what they are doing.
Let’s start with Isil. So far, they have organised hashtag campaigns on Twitter to generate internet traffic. They then get those hashtags trending, which generates even more traffic. They hijack other Twitter hashtags – such as those about the World Cup, and more recently the iPhone 6, which they use to start tweeting Islamist propaganda – to increase their reach further still. They have posted real time footage from the battlefield, and directed it against their enemies. They use social media "bots" to automatically spam platforms with their content. In short, they are very active indeed: social media is an important part of their modus operandi. Although we’re constantly told that Isil are marketing geniuses, this is all pretty standard for any second-rate advertising company. And why wouldn’t it be? Many Isil supporters are young, Western men for whom social media is second nature. What they have done, crucially, is to create the impression of a much larger groundswell of popular support than they have – and generate enormous amounts of free publicity from the world’s media. (They do this quite deliberately too – directing tweets at the BBC and CNN in an effort to get coverage).
It goes something like this: the media mujahideen – most of whom aren't even in Syria – post lots of tweets, attaching a hashtag to ensure they reach more people (such as #iphone6). People notice, and start using the same hashtag to criticise the group. Journalists then write about how much support and traffic Isil is generating on Twitter, which gets the group mainstream media coverage. Isil will often include the Twitter accounts of major media outlets when they post. @BBCWorld and @BBCTrending were important Twitter accounts through which word spread about the threats Isil made to America. Between 3 and 9 July, a BBC article, "Americans scoff at Isil Twitter threats", was the most shared article in tweets containing the tag #CalamityWillBefallUS. We're doing their work for them.
According to Ali Fisher, a specialist who has been monitoring how Islamists use social media for the last two years, these Jihadist propaganda networks are stronger than ever. "They disseminate content through a network that is constantly reconfiguring, akin to the way a swarm of bees or flock of birds constantly reorganises in flight. This approach thrives in the chaos of account suspensions and page deletions." Fisher calls this a "user-curated swarmcast". The UK's far-Right is possibly even more impressive than Isil in this respect. Although it might be politically convenient to draw moral equivalences, they are quite different from Isil in their values, radicalism, brutality and threat to national security. Nevertheless, in September the BBC suggested that the far-Right is on the rise in the UK, as a result of Islamic State and sex abuse stories involving men of Pakistani descent. According to a senior Home Office official, the UK government underestimates the threat. He claimed that, since last year, at least five new far-Right groups have formed.
I’m not sure exactly what "far-Right group" means anymore, because the far-Right are also very gifted at using the net to give the impression they are bigger than they really are. For the most part the UK’s far-Right is relatively small and disjointed. Online, though, it's different. Just like Isil, the modus operandi of much of the far-Right has moved online: Facebook, Twitter, YouTube, forums and blogs. There are hundreds of pages and forums dedicated to every shade of extreme nationalism. New groups pop up and disappear every day, and it’s very hard to work out if they are legitimate or not. Just as with Isil, it’s often a handful of people making a lot of noise, without necessarily becoming a significant force in the real world. The latest far-Right movement is called Britain First. They've been around for a while – and are perhaps the most cunning users of Facebook of any political movement. They have half a million Facebook "Likes" – far more than the Tories or the Labour Party. They produce and share very effective content online: campaigns about the armed forces, about animal cruelty, about child sex abuse. Things that people with little interest in politics would share.
But according to Hope Not Hate, an anti-fascist campaign group, these general campaigns mask a more sinister motive. They argue that Britain First have been involved in intimidating British Muslims, including invading mosques, and call them "confrontational, uncompromising and dangerous". According to Hope Not Hate, Britain First has a core membership of only around 1,500 people – most of whom were followers of former leader Jim Dowson, an anti-abortion campaigner. There are, reckons Matt Collins (a former National Front member who now works for Hope Not Hate), around 60-70 hardcore activists who are "willing to put on their badges and march on the street". But, Collins claims, their use of Facebook to increase their reach is "far beyond" anything he’s seen before. He also claims some of their Likes have probably been paid for. That’s the problem: it’s very hard to know.
NSA whistleblower Edward Snowden has complicated this story considerably. Since his revelations, there has been a significant growth in the availability and use of (usually free) software to guard freedom and keep internet users anonymous. There are hundreds of people working on ingenious ways of keeping online secrets or preventing censorship, designed for the mass market rather than the computer specialist: user-friendly, cheap and efficient. These tools are, and will continue to be, important and valuable tools for democratic freedoms around the world. Unfortunately, along with journalists, human rights activists and dissidents, groups like Isil and the far-Right will be the early adopters.
Censorship is not the answer. The Home Secretary has called for more action on tackling extremism – and I agree that it's necessary – but it's far easier to say than to do. Online, groups and organisations can be shut down and then relaunched quicker than the authorities can phone Facebook’s head office. And here’s the Gordian knot: the more we censor them, the smarter they get. When Isil was kicked off Twitter, some supporters went to Diaspora, one of several new decentralised social media platforms run by users on their own servers, meaning that, unlike on YouTube or Twitter, their content is hard to remove.
The answer is found in a riddle. Extremists are motivated, early adopters of technology – and their ideas and propaganda spread person to person, account to account. The battle for ideas used to be waged from on high. But today it’s more like hand-to-hand combat, played out across millions of social media accounts, 24 hours a day. Censorship doesn’t work in this distributed, dynamic ecosystem. But the same tools used by extremists are free to the rest of us too. That gives all of us both the opportunity and the responsibility to defend what it is we believe. Unthinkable three years ago: you can now argue with an Isil operative currently in Syria via Twitter, or with a Britain First activist on Facebook – all from your own home. The battle for ideas online can't be won, or even fought, by governments. It's down to us.
© The Telegraph
3/11/2014- A Kremlin-backed human rights body has assailed a Russian website as “Nazi” and “racist” for claiming that nearly one quarter of Russia’s billionaires are Jewish – but the response from one Jewish leader was more composed. Nikolai Svanidze of the Russian Human Rights Council – a Kremlin-affiliated body with no executive powers – condemned Lenta.ru, which covers the banking sector, for publishing a report that broke down by faith and ethnicity those Russian citizens appearing in Forbes Magazine’s 2014 list of the world’s wealthiest individuals. According to Lenta.ru, 48 of the top 200 wealthy Russians are Jews, with a combined net worth of $132.9 billion. Mikhail Fridman, with a net worth of $17.6 billion, tops the list and is Russia’s second richest man. “It’s a Nazi and racist approach,” Svanidze was quoted as saying by the Slon.ru news site.
But, as JTA reported, Yuri Kanner, president of the Russian Jewish Congress, defended the decision to publish the study. “If you cannot compare the proportion of representatives of various nationalities in the general ethnic composition of the country, it is impossible to understand who is really successful and who is not,” he told the currsorinfo.co.il news website on Oct. 29. He said, however, that he doubted the authenticity of the research. “The proportion of Jews in the population of the Russian Federation is calculated incorrectly. Besides, to compare the Jewish population, which is mainly concentrated in the major cities and has a university degree, with a total mass of Russian citizens, it is not accurate,” Kanner said. Of the Jews who made the list, 42 are of Ashkenazi origin, and together have a net worth of $122.3 billion.
Six Kavkazi Jews (a group also known as “Mountain Jews”) appear on the list, with a combined net worth of $10.6 billion. There are only 762 Russian citizens classified as Kavkazi Jews, according to the Russian Bureau of Statistics, and they represent just 0.00035% of the population. A leading Russian affairs analyst was skeptical of the Kremlin’s motivations in condemning the website, arguing that false claims of Ukrainian anti-Semitism had been advanced in partial justification of the Russian invasion of Crimea – claims that were both condemned and ridiculed by Jewish leaders in Ukraine. Michael Weiss, editor-in-chief of The Interpreter, a magazine covering Russian affairs, told The Algemeiner: “Russian ultra-nationalists and the far right seize on the theme of wealthy, bloodsucking Jewish oligarchs a great deal, but what nobody bothers to say is that the chief enabler of Russian nationalism is Vladimir Putin.”
Weiss pointed out that in spite of stringent laws against extremism, neo-Nazis marched openly in St. Petersburg earlier this year, while later this week, a full array of extremists is expected at the annual Russian March. “Putin is aligned with fascist parties in Europe like Jobbik in Hungary and Front National in France,” Weiss added. “He’s looking to create fifth columnists in Europe, drawn from racist and xenophobic parties with the occasional communist thrown in. So it’s a bit rich for the regime to be calling out antisemitism.”
© The Algemeiner
With 1.35 billion people checking into Facebook every month, there are bound to be some things that pop up on your news feed that you’d rather not see.
1/11/2014- The social media site has the difficult job of being a place where people can feel free to share their views, likes and dislikes, but also respect the myriad of cultures and values held by its global audience. What one person may find hilarious, others may find deeply offensive. An Australian mother opened a can of worms surrounding Facebook censorship after complaining that photographs of her giving birth had been removed from the site. Milli Hill, who is shown naked in the pictures, campaigns for positive depictions of childbirth and said Facebook had censored her “powerful female images”. This prompted news.com.au to ask its Facebook followers whether they thought the site responded to offensive material effectively. We received nearly 700 Facebook comments and emails that revealed users had mixed experiences. Some were satisfied with the site’s prompt removal of offensive material, while others were left confused when content that they thought was abhorrent was found not to breach Facebook standards.
Our readers provided examples of content that they had reported, which was investigated and deemed acceptable. They included:
A pornographic cartoon
An animal cruelty video
A video showing a sex act
An image of a man holding the decapitated head of someone else
Graphic photos of a dead baby
A photograph of a man pointing a gun at the head of a baby
A comment that Tony Abbott should be assassinated
A video of a teenager being beaten senseless.
While she was unable to comment on these specific cases, Facebook’s Australian spokeswoman said the site worked hard to create a safe and respectful place for sharing and connection. “This requires us to make difficult decisions and balance concerns about free expression and community respect,” she told news.com.au. “We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organising real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual, eg bullying. “Sometimes people encounter content on Facebook that they disagree with or find objectionable but that do not violate our community standards.” Many readers objected to videos or images of animal cruelty, but Facebook considers the context in which the video was posted before taking it down.
This type of content is often posted to condemn it or galvanise people into action in order to stop it. If so, that material is allowed. Similarly, the self-regulating nature of the Facebook community can be more effective than Facebook staffers because people can pressure their friends to remove content through their comments. “Facebook receives hundreds of thousands of reports every week and, as you might expect, occasionally we make a mistake and remove a piece of content we shouldn’t have or mistakenly fail to remove a piece of content that does violate our community standards,” the spokeswoman said. “When this happens, we work quickly to address this by apologising to the people affected and making any necessary changes to our processes to ensure the same type of mistakes do not continue to be made.”
While some news.com.au readers were disappointed with Facebook’s responses to complaints, many others said they were satisfied. Reader Michelle said she had reported content several times and each time the offensive page or material was promptly removed, including get-rich-quick spam, sexual content and racist jokes. Another reader, Cathy, helped to have a number of comments taken down that threatened violence towards Tony Abbott. Meanwhile, Karen said her experience had also been positive. “Not that I am a serial complainer either but I have reported material of graphic violence nature, primarily cruelty to animals, and on one occasion something was removed as a result of that feedback,” she told news.com.au.
How do I report something offensive?
Every update posted to Facebook carries with it a small arrow in the top right corner that allows users to hide the post or report it. If a complaint is made, it is then placed in a queue for assessment.
What does Facebook consider unacceptable?
Nudity: Photos of breastfeeding or Michelangelo’s David are likely to pass the test, however. Milli Hill’s childbirth photographs were most likely taken down because of the nudity depicted
Violence and threats
Self-harm: “We remove any promotion or encouragement of self-mutilation, eating disorders or hard drug abuse,” Facebook says
Bullying and harassment: Repeatedly targeting users with unwanted friend requests or messages is considered harassment
Hate speech: “While we encourage you to challenge ideas, institutions, events, and practices, we do not permit individuals or groups to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition,” Facebook says
Graphic content: Some graphic content is considered acceptable if it is shared for the purposes of condemning it, but it should carry a warning. “However, graphic images shared for sadistic effect or to celebrate or glorify violence have no place on our site,” Facebook says
Privacy violations: Claiming to be another person and creating multiple accounts is a no-no
Selling items illegally
Phishing and spam
Fraud or deception.
Who assesses complaints? Are there programs that do it automatically?
All complaints are reviewed by Facebook staffers, and not by any automatic programs. Complaints are assessed against Facebook’s community standards, which govern what material is acceptable on the site. There are dedicated teams based in the US, Ireland and India, so complaints can be processed around the clock. More serious material is prioritised, but most reports are reviewed within 72 hours. Reporting a post does not guarantee it will be removed. “Because of the diversity of our community, it’s possible that something could be disagreeable or disturbing to you without meeting the criteria for being removed or blocked,” the Facebook community standards page reads.
What can I do if something I find offensive is not taken down?
Facebook also offers personal controls so every user can hide or quietly block people, pages or applications they find offensive. Facebook has tools for controlling what you see in your news feed, and tools for controlling your Facebook experience generally.
© News.com.au
Laws not strong enough to police it, say experts
1/11/2014- Islamophobia has been an ongoing concern in the west since 9/11, but a number of recent incidents in Britain have given rise to a new wave of hatred that experts say is finding a breeding ground online. Part of the problem, researchers say, is that right-wing groups can post anti-Islamic comments online without fear of legal prosecution. “If they were to say, ‘Black people are evil, Jamaicans are evil,’ they could be prosecuted,” says Fiyaz Mughal, founder of Islamophobia reporting web site TellMamaUK.org. But because religious hatred isn't covered legally in the same way that racism is, Mughal says "the extreme right are frankly getting away with really toxic stuff.” Researchers believe the rise of the Islamic State in Iraq and Syria (ISIS) and incidents such as the murder of British soldier Lee Rigby and the recent sexual exploitation scandal in the town of Rotherham have contributed to a spike in online anti-Muslim sentiment in the UK.
Imran Awan, deputy director of the Centre for Applied Criminology at Birmingham City University, noticed the trend when he was working on a paper regarding Islamophobia and Twitter following Rigby's death. Rigby was killed in the street in southeast London in 2013 by two Islamic extremists who have since been convicted. Awan says the anonymity of social media platforms makes them a popular venue for hate speech, and that the results of his report were “shocking, to say the least.”
'A year-by-year increase'
Of the 500 tweets from 100 Twitter users Awan examined, 75 per cent were Islamophobic in nature. He cites posts such as “Let's go out and blow up a mosque” and “Let’s get together and kill the Muslims”, and says most of these were linked to far-right groups. Awan’s findings echo those of Tell MAMA UK, which has compiled data on anti-Muslim attacks for three years. (MAMA stands for "Measuring Anti-Muslim Attacks.") Tell MAMA's Mughal says anti-Muslim bigotry is "felt significantly," and adds that "in our figures, we have seen a year-by-year increase." Researchers believe far-right advocates are partly responsible for a spike in online hate speech. “There’s been a real increase in the far right, and in some of the material I looked at online, there were quite a lot of people with links to the English Defence League and another group called Britain First,” says Awan.
Both Mughal and Awan believe that right-wing groups such as Britain First and the EDL become mobilized each time there is an incident in the Muslim community. The Twitter profile of the EDL reads: “#WorkingClass movement who take to the streets against the spread of #islamism & #sharia #Nosurrender #GSTQ.” Their Facebook page has over 170,000 likes. Below that page, a caption reads, “Leading the Counter-Jihad fight. Peacefully protesting against militant Islam.” EDL spokesperson Simon North dismisses accusations that his group is spreading hate, emphasizing that Muslims are often the first victims of attacks carried out by Islamic extremists. “We address things that are in the news the same way newspapers do,” says North.
The spreading of hate
Experts on far-right groups, however, say their tendency to spread hateful messages around high-profile cases is well established. North allows that some Islamophobic messages might emanate from the group's regional divisions. But they do not reflect the group’s overall thinking, he says. “There are various nuances that get expressed by these organizations,” North says. “Our driving line is set out very clearly in our mission statement.” According to the EDL's website, its mission is to promote human rights while giving a balanced picture of Islam.
Awan argues online Islamophobia should be taken seriously and says police and legislators need to secure more successful prosecutions for this kind of hate speech and be more “techno-savvy when it comes to online abuse.” Prosecuting online Islamophobia, however, is rare in the UK, says Vidhya Ramalingam of the European Free Initiative, which researches far-right groups. That is because groups like Britain First, which has over 400,000 Facebook likes, have a fragmented membership and lack the traditional top-down leadership that groups have had in the past. Beyond that, UK law allows for the parody of religion, says Mughal, which can sometimes be used as a cover for race hate. “The bar for prosecution of race hate is much lower, because effectively the comedic lobby has lobbied so that religion effectively could be parodied.”
The case in Canada
Online Islamophobia is also flourishing in Canada. The National Council of Canadian Muslims (NCCM) is receiving a growing number of reports. But there are now fewer means for prosecuting online hate speech in Canada. Section 13 of the Canadian Human Rights Act protected against the wilful promotion of hate online, but it was repealed by Bill C-304 in 2012. “It’s kind of hard to say what the impact is, because even when it existed, there weren’t a lot of complaints brought under it,” says Cara Zwibel of the Canadian Civil Liberties Association. Though there is a criminal code provision that protects against online hate speech, it requires the attorney general’s approval in order to lay charges — and that rarely occurs, says Zwibel.
Section 319 of the Criminal Code of Canada forbids the incitement of hatred against “any section of the public distinguished by colour, race, religion, ethnic origin or sexual orientation." A judge can order online material removed from a public forum such as social media if it is severe enough, but if it is housed on a server outside of the country, this can be difficult. Ihsaan Gardee, executive director of NCCM, says without changes, anti-Muslim hate speech will continue to go unpunished online, which he says especially concerns moderate Muslims. “They worry about people perceiving them as sharing the same values these militants and these Islamic extremists are espousing.”
© CBC News
By Sam Volkering
28/10/2014- What’s the best way to start a riot? Let me help you out…suppress free speech. It’s possibly the number one reason people protest. And if the crowds face heavy-handed control measures, these protests sometimes turn into full-blown riots. Communities rally with greater force now than ever before thanks to social networks. Today, if your cause is engaging enough, it’s easy to rally the troops. A strong social media collective can be as powerful as a state army. In fact, using social networks is the best way to start a movement. Look at the Arab Spring or Euromaidan in Ukraine. Even the recent protests in Hong Kong…each was organised through social networks.
So much fear, so many reasons to protest
The world is in a very volatile state. And I’m not even talking about the markets. Ebola spread across western parts of Africa like wildfire, and now the whole world is panicked over it. Scandalous ‘news’ headlines don’t help. You can’t avoid it on social media either. In between ‘news’ about The Bachelor, all I see on my Facebook feed is horrible news: beheadings, ISIS and Ebola currently dominate. Thank the world for cat videos…oh blessed be the cat videos. At least there’s something to smile about day to day… But, along with Ebola, there’s plenty else wrong with the world. ISIS has created racial and religious tension not just in Islamic nations but also across the world. Earlier in the month, there were fatal protests in Turkey. Over the weekend, there was a violent riot in Cologne, Germany. The target of the protest was Islamic extremism. The Cologne protest was organised by a far-right, neo-Nazi group. It drew around 4,000 people, according to IBTimes, double the number expected by police. Most of the protesters were gathered through social media. And things got ugly. Riot police were called in, firing water cannon and pepper spray…
Just when you thought it couldn’t get worse
Of course, much of this violence is a direct result of the actions of global leaders. Whether related to ISIS or not, the violence and protests around the world stem from misguided government policies. The idea of the protest is nothing new, but social networks are. And the combination of the two has created greater influence on decision makers by the people. The voice of many is always more powerful than the voice of a few. For better or worse, connected networks allow people to share a voice and a view like never before. I highlight this because trouble is brewing in one particular eastern European country…one you probably wouldn’t expect. This country’s government is trying to implement one of the most regressive, oppressive policies of the modern era… The Hungarian government wants to implement an ‘internet tax’. The draft bill has a provision where a tax is paid to the government revenue collectors per gigabyte of data transfer. This would apply to consumers and businesses.
Hungary already has the highest VAT (GST) rate of any country in the world at 27%. You can see why another tax has angered the people of Hungary. But more than that, it’s widely viewed as the government taxing the freedom of information. The internet is perhaps the greatest tool of all time for creating and accessing information. It’s why we live in the ‘information age’. Anyone can use the internet to express opinions, ideas, ideals and views. It’s the ultimate tool for freedom of speech. On Sunday, approximately 100,000 Hungarians gathered in front of the Economic Ministry to protest these regressive laws. And the protest was organised through Facebook by a group with over 210,000 followers. Word spread through Facebook, Twitter and other social networks, and people came together to have their say. As part of the protest, attendees held up their phones as a sign to the government.
This protest was peaceful, but it proved a significant point: Governments should not try to enact policies that aim to restrict what has become an essential human right — that is, access to information. That’s really what the internet is, after all — the world’s biggest collection of information. And it should be free to access by anyone, anywhere as a basic human right. It’s an optimistic goal, but hopefully, one day, the entire world will have free access to the internet. The world should also strive for clean water, food and shelter for all. But perhaps the internet is equally as important. Perhaps the internet could provide the information to help communities achieve those other goals… Regardless, social networks are clearly crucial to connecting and empowering people. And the internet is the backbone of that power. When government tries to restrict our freedom of information, it will face a resolute and defiant community.
© Tech Insider
As 'Hitler' Twitter account gains more and more followers and Facebook page displays 'list of Jews,' Foreign Ministry and EU representatives discuss ways to combat anti-Semitism.
28/10/2014- "It's hard being openly Jewish in Europe today," Gideon Bachar, the Director of the Department for Combating Anti-Semitism and Holocaust Remembrance in the Foreign Ministry, said Monday. An experts' meeting on the topic of fighting anti-Semitism and racism conducted in Jerusalem today led to various estimates as to the future of the Jewish community in Europe and links between radical Islam and anti-Semitism.
The "Hitler" account has 370,000 followers
"Anti-Semitism is like Ebola," Bachar said. "It's a virus. It constantly accumulates mutations. It changes all the time, adapts itself to the situation, and is transnational. The rise in anti-Semitism is a danger to civilization and to democracy in general." The meeting was attended by Yad Vashem representatives, the State Attorney's Office, the Association of Israeli Students and European Union representatives. "We are witnessing a strong willingness and desire to take action against this phenomenon," Bachar said. "There is an understanding of the problems it poses. Europe is seeing a steady and substantial increase in anti-Semitism." Ido Daniel, Program Director at Israeli Students Combating Anti-Semitism, displayed during the meeting a photo of a French Facebook page with names, pictures and information about Jewish residents, including their place of prayer and the parks where they take their children. He also showed the attendees a faux Adolf Hitler Twitter account, with more than 370,000 followers, that has since been suspended. "The man tweeted a picture of Birkenau and wrote: 'It's a great day at work today,'" Daniel read out.
According to Bachar, various European initiatives which include prohibitions on circumcision and kosher slaughter "do not stem from anti-Semitic motives, but they do pose a real threat to the continued existence of Jewish life in Europe. Apart from them, hundreds of anti-Semitic demonstrations have taken place. We are diagnosing three phenomena: Leaving the country, assimilation or isolation." Bachar also spoke about recent occurrences in which people have removed mezuzahs from their doors, concerns about wearing a yarmulke in public on the way to synagogue, and the hiding of Jewish identity.
© Y-Net News
27/10/2014- Up to 10,000 people rallied in Budapest on Sunday (26 October) in protest against the Orban government’s plan to roll out the world’s first ‘internet tax’. Unveiled last week, the plan extends the scope of the telecom tax to Internet services and imposes a 150-forint (€0.50) tax on every gigabyte of data transferred. European Commission spokesperson Ryan Heath said that, under the tax hike, streaming a movie would cost an extra €15. Streaming an entire TV series would cost around €254. The levy, to be paid by internet service providers, is aimed at helping the indebted state fill its coffers. Hungary’s economy minister Mihaly Varga said the tax was needed because people were shifting away from phones towards the Internet. But unhappy demonstrators on Sunday threw LCD monitors and PC cases through the windows of the headquarters of Fidesz, Orban’s ruling party.
A Facebook page opposing the new tax attracted thousands of followers within hours of being set up after the plan was announced. The page called for a protest, and some 40,000 people had signed up by early Sunday evening. Hungary’s leading telecoms group Magyar Telekom told Reuters the planned tax “threatens to undermine Hungarian broadband developments and a state-of-the-art digital economy and society built on it”. The proposal has generated controversy in Brussels as well. The EU’s outgoing digital chief Neelie Kroes on Sunday told people to go out and demonstrate. “I urge you to join or support people outraged at #Hungary Internet tax plan who will protest 18h today,” she wrote in a tweet. The backlash prompted Orban’s government to roll back the plans and instead place a monthly cap of 700 forints (€2.30) for private users and 5,000 forints (€16) for businesses.
The concession did little to appease critics, who say the levy will still make it more difficult for small businesses and impoverished people to gain Internet access. Others say it would restrict opposition to the ruling elite. “This is a backward idea, when most countries are making it easier for people to access the Internet,” a demonstrator told AFP. "If the tax is not scrapped within 48 hours, we will be back again," one of the organisers of the protest told the crowds. Orban, who was elected for a second term in April, has come under a barrage of international criticism for other tax policies said to restrict media freedoms amid recent allegations of high-level corruption. Civil rights groups say the Fidesz-led government fully controls the public-service media and has transformed it into a government mouthpiece.
An advertising tax imposed in August risks undermining German-owned RTL, one of the few independent media organisations in Hungary, which does not promote a pro-Fidesz editorial line. Kroes has described the advertising tax as unfair and one that is intended “to wipe out democratic safeguards” and to rid Fidesz of “a perceived challenge to its power.”
© The EUobserver
26/10/2014- Pavee Point strongly condemns any actions to intimidate and promote violence against Roma in Waterford. This follows the publication of multiple Facebook pages which openly incite hatred against Roma, and reports of a public order incident on Saturday evening in which up to 100 people are reported to have gathered outside the home of Roma living in Waterford. The content on these Facebook pages to date has shown huge misinformation and racism towards Roma and has included inflammatory, dehumanising and violent language. There is a clear link between online hate speech and hate crime, and there is an urgent need to address the use of the internet to perpetuate anti-Roma hate speech and to organise violence.
European institutions and groups such as the European Roma Rights Centre have raised concerns about rising violence in Europe and the strengthening of extremist and openly racist groups which spread hate speech and organise anti-Roma marches. Attacks in other European countries have included several murders of Roma. We don’t want this to become a feature in Ireland. “Anti-Roma racism does not occur in a vacuum and we now need strong public and political leaders to be visible, vocal and openly condemn anti-Roma actions in Waterford,” said Siobhan Curran, Roma Project Coordinator at Pavee Point. “At a national level a progressive national strategy to support Roma inclusion in Ireland needs to be developed as a matter of urgency,” she continued.
Pavee Point calls on all elements of the media to take on board the recommendations from the Logan Report and avoid sensationalist and irresponsible reporting.
© Pavee Point
Our world is now more connected than ever. Technology – specifically social media – allows us to establish lines of communication hitherto unthinkable.
31/10/2014- As technology has developed and the use of social media proliferated, unfortunately so too has the echo chamber for racism expanded. As chairman of the All-Party Parliamentary Group Against Antisemitism and with the Inter-Parliamentary Coalition for Combating Antisemitism, I have been working together with the industry and MPs from across the world to tackle cyber hate. Predominantly, this has been through improved self-regulation by the companies in question. In September, I went to California to agree protocols on hate speech on the internet with Facebook, Twitter, Google and Microsoft. They were among others that endorsed a series of pledges to introduce better, user-friendly reporting systems and to respond more rapidly to allegations of abuse. It is easy to look at the big picture and work with companies to implement frameworks to tackle abuse.
It is, of course, a very different experience to be on the receiving end of anti-Semitic hate and death threats. Recently, an important precedent was established when a man who had sent an anti-Semitic tweet to my parliamentary colleague, Luciana Berger MP, was jailed. While civilised people the world over celebrated the news, it elicited quite the opposite response from Nazi sympathisers and far-right extremists. Taking inspiration from one lunatic, posting articles to an American server, a number of ‘activists’ took to Twitter in an attempt to orchestrate a campaign of hate and vitriol. I was not prepared to let Luciana fight this alone and so raised a point of order at Prime Minister’s Questions and queried whether Twitter might be brought to the Commons to answer for the hate that was being espoused through its platform. Subsequently I, too, became subjected to the ire of fascists and racists on Twitter. If you have ever suffered abuse through the medium of Twitter, you will know how difficult it is to report it and have action taken. Given the work I have already done with the company, it should not come as a surprise that I was able to make contact and request action.
While individuals have sought to be helpful, hateful accounts and messages targeting both Luciana and myself remain online, and my experience points to a significant structural failure to curb racist activity on social media. In November, I will visit the European HQs of Twitter and Facebook with parliamentary colleagues and will take my concerns to them. This week, I led a debate in the House of Commons about these matters and asked the government and the parliamentary authorities what action they would be taking. I set out a number of practical suggestions. What happens on social media has real-world consequences. I expect Twitter to make it easier for any victim of abuse to report hate so the threat of harm is reduced. I want the social media companies to invest extra resources in tackling cyber hate.
Protocols that companies have signed up to, such as the ICCA/ADL accord, should be honoured and there should be more transparency so it is easier to contact people working for these companies. I expect these companies to work proactively to develop algorithms that identify repeat offenders and key words which, when they appear together, are automatically removed. I want racist and anti-Semitic pictures to be taken off these platforms and I want police RIPA requests to be more speedily processed through the UK. Specifically, I want our police and courts to ensure they are at the forefront of the fight against cyber hate. Sex offenders can be barred from social media and from online activity.
I believe that if they show a considered and determined intention to exploit social media networks to harm others, individual perpetrators of harassment and racist abuse should also be subject to such a ban. If they can do it for child exploitation, then they can do it for racism and anti-Semitism. Technology has helped us to create new and important means of communication. I will not allow the racists and anti-Semites of the world to be the primary beneficiaries.
© Jewish News UK
30/10/2014- Local software development company, PDMS are delighted to announce that they have been working with PNLD (Police National Legal Database) in the UK to provide the technology for their latest project - an innovative new web service aimed at helping victims and witnesses of crime. The website, aptly named www.helpforvictims.co.uk was launched on Friday 24th October by Yorkshire’s Police and Crime Commissioner, Mark Burns-Williamson, with an event in Leeds where Baroness Newlove, the UK Government’s Victims’ Commissioner was present to support the launch. Funded by the Ministry of Justice, it is hoped that the website will be rolled out to other police forces across England and Wales.
With the introduction of Help for Victims, individuals in Yorkshire will be able to immediately access all the information contained within the Victims’ Code and the Witness Charter in a question and answer format. The website also includes individual pages dedicated to over 400 local supporting organisations, which can help with concerns such as cyber bullying or hate crime, with trained advisers on hand to give advice. Additionally, the website utilises a self-referral service to local organisations who can provide particular specialist victim and witness services beyond the website.
Chris Gledhill, Managing Director of PDMS, commented, “The new website is an integral part of Mr Burns-Williamson’s Police and Crime Plan to ensure victims and witnesses in Yorkshire receive high quality support exactly when they need it. It is the only website of its kind that facilitates all of their local resources, whilst providing one place for clear and concise advice with regards to the criminal justice process and rights from the Victims’ Code. As well as English, the site has been translated into the five most frequently spoken languages in West Yorkshire - Gujarati, Urdu, Punjabi, Arabic and Polish - and will shortly be launched in iOS and Android app format too”.
PDMS have been PNLD’s technology partner for over 10 years, helping them provide a range of services to the police and wider criminal justice sector in Yorkshire. Previous technology projects have included the Police National Statistics Database (PNSD), an internet-based solution allowing Police Forces to comparatively analyse and examine statistics at national and local levels, the ‘Ask the Police’ Portal (www.askthe.police.uk) for the Police Service in England and Wales, which is estimated to save forces over £25 million per year, and Apple and Android ‘Ask the Police’ apps, which reached over 30,000 downloads shortly after launch.
© Isle of Man
With political campaigns increasingly being fought on social media, The Telegraph investigates the rise of Britain First, a tiny group with more likes on Facebook than the three main parties
27/10/2014- Started in 2011 by former BNP members Paul Golding and Jim Dowson, Britain First describes itself as “a patriotic political party and street defence organisation”. The group has amassed almost 500,000 likes on Facebook compared to the Conservatives on 293,000, Labour with 190,000 and the Liberal Democrats’ 104,000. This popularity has led to questions about how the group has managed to gain so many likes when its offline activities seem to draw few supporters in comparison. I met the leader of Britain First, former BNP communications chief Paul Golding, and asked him about the kind of posts the group was using to attract likes. One tactic they employ is to post pictures of animal cruelty with text asking people to “Like and share if you demand far harsher penalties for those who mistreat animals”.
“All the top grossing charities in this country are animal charities and there’s a reason for that. We’re just tuning into the nation’s psyche (by) posting stuff like that,” explained Mr Golding. Creating posts which appear to have little to do with the aims of the group and which seem aimed simply at garnering the greatest number of likes is a tactic used by many far-right groups, according to Carl Miller, a social media researcher for the think tank Demos. “Far right groups have always wanted to appear more popular and influential than they are, this is one of the ways in which they think they can have influence on mainstream political decisions.” The people who respond to these messages online may not be aware of the kind of activities their likes are being used to support offline. Britain First has run a campaign of what they call ‘Mosque Invasions’. One of these took place at Crayford Mosque in Kent in July of this year. Filmed by Britain First, the ‘invasion’ consisted of a small group dressed in matching green jackets entering the mosque and demanding to see the Imam.
A gentleman inside the mosque points out that they are standing on the prayer mat with their shoes on, to which Mr Golding responds “Are you listening?” before demanding that the mosque remove signs denoting separate entrances for men and women outside. The man asks again for the group to leave and eventually convinces them to go after promising to remove the signs. Before leaving, Mr Golding warns him: “You’ve got one week to take those signs down otherwise we will.” When challenged about the validity of these tactics, Mr Golding said his organisation would not treat those who followed Islam with respect because, in his opinion, they treated women like second-class citizens. “We didn’t make a distinction in the second world war between moderate Nazis and extreme Nazis did we? We just went to war,” he said. Buoyed by the success of their Facebook page, Britain First plans to stand in the Rochester and Strood by-election. How they poll will reveal whether the likes they have accrued online translate into votes offline.
© The Telegraph
30/10/2014- A neo-Nazi website based in the US is behind a co-ordinated campaign of antisemitic abuse targeting Britain's youngest Jewish MP, the JC can reveal. The site provides a user guide to harassing Luciana Berger and has created offensive images to be shared by internet trolls and sent to her via social media sites. It carries a series of "dos and don'ts" for those who intend to abuse Ms Berger. The site advises trolls not to "call for violence, threaten the Jew b---h in any way. Seriously, don't do that". But it goes on to encourage calling her "a Jew, call her a Jew communist, call her a terrorist, call her a filthy Jew b---h. Call her a hook-nosed y-- and a ratfaced k---. "Tell her we do not want her in the UK, we do not want her or any other Jew anywhere in Europe. Tell her to go to Israel and call for her deportation to said Jew state."
Advice on the easiest ways to set up anonymous Twitter accounts and email addresses to limit traceability is also available on the website. It posts hundreds of racist articles targeting black people, Muslims and Jews. Ms Berger received around 400 abusive messages on Twitter last week. Many carried the hashtags and images created by the American site, which urged trolls to join "Operation: Filthy Jew Bitch". The campaign against the Liverpool Wavertree MP was set up last Monday, hours after Merseysider Garron Helm was jailed for sending her abusive messages. Helm's imprisonment was heralded as an "important precedent", but it is now clear that his abuse was merely the tip of an iceberg.
The JC understands the Labour shadow cabinet member has received death threats amid the series of "deeply threatening" messages. She has not commented on the abuse, but friends said she was feeling isolated after the "relentless" storm of offensive tweets. "Luciana is sickened by what's flashing up on her phone on a minute-by-minute basis," said one. "It's hard for her being the focus of something so sinister and global and relentless." A coalition of security groups, police and Twitter have been investigating the source of the messages and have shut down some accounts. The operation against her is being orchestrated by the racist, white nationalist website Daily Stormer. It is run by Andrew Anglin, who has previously been filmed at Berlin's Holocaust memorial mocking victims of the Shoah and questioning the number of Jews who were murdered. The site promotes use of the #HitlerWasRight hashtag.
It provides what is effectively a resource pack of racist images which it advises trolls to use to "flood" Ms Berger's Twitter account. Among the images are those of the MP next to Labour's Jewish leader Ed Miliband with a yellow star with the word "Jude" superimposed on their heads. The call to action concludes by urging abusers to use the hashtags #FilthyJewB---h and #FreeGarronHelm on every tweet targeting Ms Berger. "We will not bow to Jews. We will not be silenced by Jews. We will not allow Jews to destroy the nations that our ancestors spilled blood on to build on this sacred land." When Twitter began to block the tweets late last week, Daily Stormer users began posting Ms Berger's email address on internet forums. A website claiming to be Britain's "number one nationalist newspaper" also highlighted Helm's conviction.
The Daily Bale, run by "nationalist" Joshua Bonehill-Paine, said that as a former director of Labour Friends of Israel, Ms Berger was a supporter of "institutional state child murderers", a "money grabber" and a war criminal. The JC understands police are investigating the comments. Ms Berger's parliamentary colleagues and members of the Jewish community have responded by posting messages of support online. Lord Wood, a Labour peer and adviser to party leader Ed Miliband, wrote on Twitter: "The vile antisemitic abuse of Luciana Berger online only succeeds in uniting everyone in her support and in revulsion against those behind it." Baroness Royall, Labour's leader in the Lords, tweeted: "Luciana Berger is a terrific MP, friend and colleague - a very fine woman. The racist abuse against her must stop. It's abhorrent."
Board of Deputies vice president Jonathan Arkush tweeted: "Racist abuse of Luciana Berger is nauseating and disfigures our country. Perpetrators should expect to go to prison. We value and support her." The case was raised in Parliament on Wednesday, with Commons Speaker John Bercow condemning the abuse as "despicable and beneath contempt".
© The Jewish Chronicle
Internet trolls are among the worst specimens the human race can offer. But they are not a reason to nod through another restriction on personal freedom
By Nick Cohen
26/10/2014- No one has tested my commitment to liberalism so sorely as Edinburgh University’s Feminist Society. I know I should believe in freedom of speech and changing minds with arguments, not punishments, and all the rest of it. And, trust me, I do. Or rather I did, until the moment Edinburgh’s feminist students said they wanted to kick the Socialist Workers party out of their campus. The BNP of the left has had a malign influence on public life far beyond its numbers. In the universities, it has been at the forefront of thuggish demands that there must be “no platform” for fascists or supporters of Israel or, it seems, anyone else it disagrees with. The desire to censor has reached the absurd state where the academic left has banned women’s rights campaigners, who have upset transsexuals, and admirers of Friedrich Nietzsche, who have upset students who had not read him but know he was a bad person.
After this disgraceful record, it is worth enjoying the plight of the SWP for hours – maybe weeks. The censor faces censorship. The fanatics who have screamed down so many others could be screamed down themselves. No one can deny that Edinburgh’s women have good reason to go after the Trots. Like priests in the Catholic church and celebrities in light entertainment, the leaders of a Marxist-Leninist party are men at the top of a hierarchy that demands obedience. Last year, a succession of women alleged that senior figures in the party had demanded their sexual compliance. Rather than tell them to take their cases to the hated “capitalist” courts, the SWP set up its own tribunals. The alleged victims said it subjected them to leering questions worthy of the most misogynist judge about their sex lives and alcohol consumption, then duly “acquitted” the “accused”.
Eleanor Brayne-Whyatt of the Edinburgh Feminist Society has a point when she says that universities will show they do not tolerate “rape apologism and victim blaming” if they order the SWP to leave. Even if you want to differ, you may find the task of contradicting her beyond you. We have reached a state where arguing that a speaker has the right to free speech is the same as agreeing with his or her arguments. If you say that racist or sexist views should not be banned, you are a racist or rape apologist yourself. Your opponents then go further and accuse you of ignoring the “offence” and “pain” that victims of racism and sexism have suffered, and turn you into an abuser as well. With remarkable speed this double bind knots itself around its targets. Defend a repellent man’s right to speak and you become that repellent man, and his victims, real or imagined, become your victims too. Small wonder so many keep quiet when they should speak up.
Observer readers may not care, as most modern prohibitions on speech are – to put it crudely – instances of leftwing censorship of prejudiced views. If so, you should notice how easy the right finds it to march in step alongside you. Chris Grayling, a Tory bully boy, announced last week that he would quadruple the maximum jail sentence for internet trolls who spread “venom” on social media or, rather, he fed an old story from March to a naive and punitive media. Even though internet trolls are among the worst specimens the human race can offer up for inspection, there are many reasons not to nod through yet another hardline restriction of personal freedom. Interest groups like nothing better than exploiting the law. We’ve already seen supporters of the McCanns, who were understandably aggrieved by the abuse the family received online, turn into troll catchers. They collected a dossier and passed it to Sky News and the police. The hunters unmasked one of the McCanns’ tormentors as Brenda Leyland, who took her own life within hours of her exposure, a reminder that many trolls are mentally ill and need treatment rather than prison.
Meanwhile, as the free speech campaigners at English Pen reminded me, the white right and far right have learned from the left and can be as politically correct. Their most recent success was to demand that the police prosecute one Azhar Ahmed from Dewsbury. He admitted posting a Facebook message two days after the killing of six British servicemen in Afghanistan: “All soldiers should die and go to hell,” it read. A disgusting statement, no doubt, but put in different terms, the belief that British troops should not be in “Muslim lands” is a political sentiment, not a criminal act. The court nevertheless found him guilty of the criminal offence of making a “grossly offensive communication”. The prosecutors did not say that he was inciting violence against British troops, simply that he was offensive. Two can play at that game. The Islamist religious right can respond in kind and demand prosecutions for Islamophobia, and before you know it we will be off on a cycle of competitive grievance.
Only last week, the authorities recalled Tommy Robinson, the former leader of the extreme right English Defence League, to prison – apparently for tweeting that he planned to criticise the police. I carry no brief for the man, but his detention feels all wrong. It would be far better if social media sites and newspapers stopped inciting people’s ugliest instincts by allowing them to post anonymously. It would be better still if politicians reformed a law that is alarmingly vague. The state can charge citizens for words that are “grossly offensive,” as Azhar Ahmed found. No government should be allowed to get away with such a catch-all charge. Every sentiment beyond the blandest notions “offends” someone. “Offensive” is a subjective term, which is wide open to political manipulation by loud and vociferous interest groups and the government of the day.
The only respectable reason for banning organisations or punishing individuals is if they incite violence against others. Unless feminists can prove that the SWP promotes rape as a matter of party policy – and I don’t think they can – they remain free to despise it, harangue it and oppose and expose its many stinking hypocrisies, but they have no moral right to order it off campuses. I know I am going to regret writing that last sentence. Indeed, I am regretting it already. But it remains the case that a country where it’s a crime to be offensive is a country where everyone can try to ban everyone else.
© Comment is free - The Guardian
Editors' Note: This story includes references to hate speech and other language that readers may find offensive.
26/10/2014- In September, a group of black women penned an impassioned letter to the people who run Reddit entitled: "We have a racist user problem and reddit won't take action."
Posted by the username pro_creator, who serves as a moderator on the subreddit /r/blackladies, it was cosigned by the moderators of more than 60 other subreddits. "Since this community was created, individuals have been invading this space to post hateful, racist messages and links to racist content, which are visible until a moderator individually removes the content and manually bans the user account," the message said. "reddit admins have explained to us that as long as users are not breaking sitewide rules, they will take no action," the letter added. Therein lies the issue. Reddit has a hate speech problem, but more than that, Reddit has a Reddit problem.
A persistent, organized and particularly hateful strain of racism has emerged on the site. Enabled by Reddit's system and permitted thanks to its fervent stance against any censorship, it has proven capable of overwhelming the site's volunteer moderators and rendering entire subreddits unusable. Moderators have pled with Reddit for help, but little has come. As the letter from /r/blackladies mentions, the bulk of what racists perpetrate on the site is within Reddit's few rules. And the site's CEO has made clear, even through criticism surrounding high-profile events like the celebrity nude leak, that those rules are not going to change.
This has put the front page of the Internet in a tenuous position. Having just completed a funding round, the site is poised to begin monetizing. That will mean convincing advertisers to put ads next to its user-generated content. It is a situation in which an unstoppable force meets an immovable object. Hate speech on Reddit is proving uncontainable while Reddit refuses to change. The situation has left moderators — essential cogs in the site's operation — as the site's last line of defense against some of the darkest parts of the Internet. It is a battle they are losing.
Down with the upvotes
It's just not that hard to manipulate Reddit. Motivated racists have proven capable of affecting everyone from smaller groups like /r/blackladies to huge subreddits like /r/news, which has more than 3.9 million subscribers. Reddit relies on a democratic “upvote” and “downvote” system that surfaces or buries content and comments. It’s a system that can be gamed by motivated groups. Allied redditors can vote en masse to push content and comments to the top of subreddits, a move known as "brigading." This is frowned upon — but it’s not technically against the rules. The site also allows users to quickly create anonymous accounts. Bands of anonymous, racist users can completely overrun smaller subreddits, which is what happened to /r/blackgirls, a predecessor to /r/blackladies.
“Our sub was created after a previous sub we'd frequented was overrun [by] hate groups,” pro_creator said in an email to Mashable. The user requested anonymity out of fear of “doxxing,” or the public disclosure of personal information online. The abuse “would come in waves as they grew upset with being rejected and banned.” Racist redditors had previously congregated at /r/n*ggers, a subreddit that was eventually banned for its open attempts to brigade other subreddits, including /r/blackgirls. A year and a half later, /r/blackladies, which bills itself as "designed specifically to be a safe space for black ladies on Reddit," is dealing with the same problem. Moderators are growing weary.
In addition to the upvote and downvote system, moderators, known as “mods,” are also a key part of Reddit. These unpaid volunteers regulate each subreddit, some of which have millions of subscribers. They have the power to block comments and ban users from their particular parts of the site. In the face of the types of organized attacks that hate groups have mounted on subreddits large and small, those tools are woefully inadequate, moderators say. Tyler Lawrence, a moderator of a variety of subreddits including /r/news, said that consistent and coordinated attacks have caused him to consider drastic action. "This has become such a huge issue in /r/news alone that I've at multiple points considered outright closing comment sections to prevent hateful brigading from racist communities within Reddit," Lawrence told Mashable in an email.
Moderators’ pleas have almost entirely fallen on deaf ears. Reddit’s commitment to remaining as open as possible is well documented. Most recently, Reddit CEO Yishan Wong penned a defense of the site’s lack of action concerning its role in disseminating leaked celebrity photos. Moderators who spoke with Mashable are fatalistic about the site’s future. If Reddit was built in part by the darker corners of the site, why would it change now? “There's no desire to address the various -isms that have grown to dominate the site, so it doesn't seem like it will be resolved any time soon. Which is unfortunate, because the attitudes displayed by a good number of Reddit's target demographic are firmly on the wrong side of history,” pro_creator wrote. “The site is positioning itself as a playground for racists and misogynists. And if racism and sexism are paying the bills, why would they move against it?” Reddit declined to respond to questions on this topic.
Reddit at its core is a group of communities. The site's structure and format — relying on the voting system to elevate or bury content and comments — made it the ideal place for users with any number of interests to connect. Reddit now hosts thousands of sections, known as subreddits, and served more than 170 million unique users last month. Censorship is the site's mortal sin, even when being applied to the most odious content. This laissez-faire ideology is an ingrained part of the platform, lending it a certain legitimacy. All are welcome and governed by the same rules. This led to the site playing host to a certain amount of racism and hate speech. Racism on the Internet preceded Reddit, and it will exist if the site ever goes away. But there was a relative peace among the various groups, which operated under something of an unspoken detente. You stay in your corner, we stay in ours.
That is, until the 2012 shooting of Trayvon Martin by George Zimmerman. "The Zimmerman trial really stands out in my mind. It served as a rally point for racists everywhere," said Logan Hanks, a former Reddit programmer, in an email to Mashable. "This manifested on Reddit as a lot of new racist memes popping up here and there, drama around racists squatting on the 'TrayvonMartin' subreddit to mock the African-American community, and an uptick in bullying directed at minority subreddits," he said. "It became a prime opportunity to mock and harass minorities on Reddit." Since then, a battle has raged between Reddit's corps of volunteer moderators and racist activists. "After the Zimmerman trial, they were briefly dispersed, but never entirely gone, and this year they've returned as strong and bold as ever," Hanks said.
Racism is nothing new to the Internet, but rarely has it been so organized and on a platform that can quickly put it in front of millions of users. Numerous moderators who spoke with Mashable for this story say that hate groups are coordinating to disrupt large, mainstream sections of the site and occupy others. Moderators can delete posts and comments that violate subreddit rules. Some have taken screenshots of attacks in hopes of providing evidence to admins — Reddit employees that help run the site — and spurring them to take action. Examples can be found here, here and here. There's also evidence of plans to take these efforts to Twitter. Lawrence, the moderator, sent the following screenshot as an example of the type of action that he has had to deal with on a near-daily basis.
Many subreddits have their own rules, enforced by moderators. It is up to them to regulate content and comments with limited tools. They can block users and delete comments, but these efforts are sometimes not enough. While Reddit has 65 employees, it relies on thousands of unpaid mods. "The tools available to mods mainly offer limited reactive approaches, so they have to monitor submissions 24 hours a day to remove slurs and ban each new account created specifically to bully them," said Hanks, who was known to be a particularly active admin during his time at Reddit. "Whenever they were hit by a particularly hard deluge they would escalate to us, and sometimes we were able to stem the tide briefly," Hanks said. "If things get too bad, they have to close their subreddit until the bullies and trolls forget about them and move on."
It's also a strain on the mods. Ryan Perkins, a moderator of several subreddits, said in an email that he had lost count of the number of racist commenters he has had to ban. "This makes moderating any reasonably large subreddit with an eye towards being inclusive actually quite a lot of very emotionally and mentally taxing work," he said. Racism is only one type of hate speech on Reddit. The site has seen similar battles surrounding misogyny and more recently the GamerGate fiasco. Reddit has taken some action against organized hate groups. Banning the original hub for anti-black hate speech was a big step, but one that lacked much impact. Banning either a subreddit or a user is among the most aggressive moves that Reddit administrators can take. It also barely changes anything. New subreddits are easily formed, and new usernames created.
The moderators Mashable spoke with pointed to “the Chimpire,” a group of subreddits that had become the new hub for hate speech on Reddit. Two moderators associated with the Chimpire told Mashable through Reddit’s messaging system that brigading was forbidden in their subreddits and denied organized attempts at vote manipulation.
No help in sight
Successful platforms that began with a spirit of openness have learned to quickly change as they attempted to turn into successful businesses. Facebook and Twitter decided, whether as part of a moral or business decision, that freedom of expression on their platforms has limits. Tumblr cracked down on porn. Reddit recently announced a $50 million round of funding. It has been eight years since Condé Nast parent company Advance Publications bought Reddit, and it’s no secret that the site is trying to figure out how to monetize. The recent celebrity leak just about coincided with news of the fundraising round, putting the site in an awkward position. In this case, Reddit took action. A subreddit called /r/TheFappening that had been created to host the leaked pictures was eventually banned. That move drew no shortage of criticism within Reddit for a perceived double standard.
"The core in this case is the same as the core in the celebrity hacking scandal, except in that instance, they only removed the subreddits once they received significant media coverage and legal pressure," said Lawrence, the /r/news moderator. Reddit is walking a fine line. The site is trying to be tough on content that could harm its prospects while also catering to its users who demand Reddit retain its anything-goes foundation. In the calculus between Reddit’s ideals, its business, its users and its moderators, the site seems to have decided that it can most afford to lean on the moderators. This has left them frustrated and angry, but still redditors for now. “We are here, we do not want to be hidden,” the letter on /r/blackladies concluded, “and we do not want to be pushed away.”