- Removing Hatred from Steam leaves awkward questions for Valve
- Calls to Ban Australian Defence League Following Inflammatory Facebook Post
- Dutch privacy watchdog threatens Google with 15m fine
- USA: That Ferguson Comment You Made on Facebook Could Get You Fired
- Australia: OHPI has launched Fight Against Hate (press release)
- Report: Hate Music Still Being Sold Via iTunes, Amazon and Spotify
- Yahoo News and the Hate Site
- UK: Internet troll admits Facebook abuse
- UK: Crime warning on social media abuse
- UK: White Paper - The Role of Prevent in Countering Online Extremism
- Israel: How to fight anti-Semitism online?
- Romanian Court Rules Facebook Pages Not Private
- Germany: Berlin introduces 'anti-nazi' application
- Racists Getting Fired exposes weaknesses of Internet vigilantism
- Israeli teens see spike in online anti-Semitism
- BDS Group Spreads Photoshopped Image of Concentration Camp Inmates Holding Anti-Israel Posters
- UK: Twitter's 'defensive' on online anti-Semitism is criticised by MPs
- UK: Facebook hosted Lee Rigby death chat ahead of soldier's murder
- UK: New team targeting online crime
- OSCE Centre supports conf. in Kazakhstan on countering terrorist use of Internet
- Censoring the Web Isn't the Solution to Terrorism or Counterfeiting. It's the Problem. (opinion)
- USA: Supreme Court faces a new frontier: Threats on Facebook
- USA: Social media proves racism is far from gone (opinion)
- Homophobic 'Ass Hunters' Game Removed From Google Play After 1000s Of Downloads
- UK: You've got hate mail: how Islamophobia takes root online
- UK: Eastwood blogger Simon Tomlin guilty of harassment
- USA: Neo-Nazi gets 17 years for email threats
- USA: Gay Slur Removed From Google Maps
- Hate Crimes in Cyberspace, by Danielle Keats Citron (book review)
- Ireland: New legislation needed to stop online trolls
- Malta: Website helps report racism
- Gaza war caused explosion of online hate speech in Europe, report finds
- 'Facebook Murder' - Should Crimes Using Social Networks Get Their Own Category?
- Australia: Racist posts on Facebook - how should you respond?
- USA: Law Enforcement Increasingly Reliant on Social Media
- USA: Court Agrees to Reconsider Decision Over Benghazi-Linked Anti-Islam Video
- Racism in Canada finds fertile ground online
- Does Twitter have a secret weapon for silencing trolls?
- UK PM Cameron says Internet must not 'be an ungoverned space'
- UK: How can football tackle the social media hate merchants?
- Disturbing Trend: Pro-Palestinians Promoting Carintifada on Social Media
- Game hit Clash of Clans allows opportunity for antisemitism
- UK: Ed Miliband demands zero-tolerance approach to antisemitism
- Canada: B.C. minor hockey coach fired over pro-Nazi Facebook posts
- Why terrorists and far-Right extremists will always be early adopters
- Kremlin Attack on Russian Website for Nazi List of Wealthy Jews Meets Skeptical Response
- Australia: How Facebook decides what to take down
- Why online Islamophobia is difficult to stop
- Social Networks Bringing People Together like Never Before (opinion)
- 'It's hard being openly Jewish'
- Hungary's 'internet tax' sparks protests
- Mob Violence Has No Place in Ireland (press statement)
- UK: J. Mann MP: Berger abuse reveals failure to curb racism on Twitter (opinion)
- UK: PDMS technology powers innovative new website for UK police
- UK: Far Right on Facebook - The group with more likes than all three main parties
- UK: Neo-Nazi gave out internet abuse tips in campaign against MP Berger
- UK: Silencing extreme views, even if they are those of internet trolls, is wrong (opinion)
- Hate Speech Is Drowning Reddit and No One Can Stop It
- Facebook re-invents the 1990s chat room with Rooms iPhone app
- After Twitter ruling, tech firms increasingly toe Europe's line on hate speech
- Czech authorities alarmingly unwilling to prosecute online hate crimes
- Poland: Team behind Hatred lashes out in blog post, thanks press for attention
- Who Has the Right to Be Forgotten on the Internet?
- UK: Britain First 'tricks' Facebook users with Lynda Bellingham post
- British man gets jail time for sending lawmaker anti-Semitic tweet
- U.K. Seeks Help From Tech Firms in Combating Extremists Online
- United against Salafism, right-wing scene surges in Germany
- Italy: Online racial discrimination on the rise
- Web retailers accused of selling Nazi-related paraphernalia
- UK: Social media should not descend into a tool for far-right (opinion)
- UK: Jewish student union uses new media to fight oldest hatred
- Dutch government pressures ISPs to remove 'jihadic' web content
- Freedom of expression complicates EU law on 'right to be forgotten'
- Czech Republic: Neo-Nazis hack websites of human rights NGOs
- EU hosts anti-extremist tech meeting
- Just Because a Hate Crime Occurs on Internet Doesn't Mean It's Not a Hate Crime (opinion)
- Northern Ireland: Facebook: A Breeding Ground For Racism (opinion)
- Germany: Right-wing extremism on the internet (annual report 2013 Jugendschutz.net)
- USA: Supreme Court To Weigh Facebook Threats, Religious Freedom, Discrimination
- South Africa: Vicious tweets scare Jewish community
- It's time Facebook repents (opinion)
- The right to be forgotten; Drawing the line
- Germany: SoundCloud faces wave of jihadi postings
- USA: Brooklyn Coffee Shop Owner Posts Anti-Semitic Rants on Facebook, Instagram
- Google Chief Sees Bots as Weapon on Anti-Semitism
- Facebook agrees to drop real name policy which banned drag queens
- The burqa debate: lifting the veil on Islamophobia in Australia (comment)
- Austria: Fine and jail time for Nazi comments
- Goodbye Facebook, Hello Ello: Gay Users Are Leaving the Site En Masse
- ADL Releases Best Practices for Challenging Cyberhate
- Social Media Trace Australia Islamophobia
A gatekeeper such as Steam has responsibilities. But it must also be reliable and predictable, and this is anything but
16/12/2014- At first glance, the removal of mass-murder simulator Hatred from Valve’s Steam digital distribution platform seems like a rare example of corporate responsibility. While “mass-murder simulator” sounds like a tabloidism, the sort of description preachy moralists give to games like Grand Theft Auto, it’s an accurate description of Hatred. Produced by developers linked to Polish far-right groups, the game is explicitly and solely about setting out to shoot innocent people. With an aesthetic that emphasises the brutality of the player’s actions, it is a thoroughly nasty concept. So news that Valve had removed the game from Greenlight, the main entry point for indie games on to its Steam store, was greeted by many with relief. The company told Eurogamer that “based on what we’ve seen on Greenlight we would not publish Hatred on Steam. As such we’ll be taking it down.”
In a world where a harassed politician has to fight to get explicitly antisemitic abuse removed from Twitter, it’s refreshing to see a company act quickly to remove hate. But there’s something about the seeming capriciousness with which Valve made the decision to pull Hatred that makes me uncomfortable. The company has an undeniable level of power in the PC gaming space. Last year, it controlled an estimated three quarters of the global market for digital PC games, a market which is itself 92% of the overall market for PC games. That proportion seems likely to have gone up since, and with the launch of SteamOS, there are now a few customers who have no choice but to buy their PC games from the store.
For a company wielding that level of power over a creative medium, Valve owes more explanation of its process than a two-sentence statement. And for any developer wanting to stay on the right side of what the company would publish on Steam, its entire content guidelines are given in one sentence in its FAQ:
Your game must not contain offensive material or violate copyright or intellectual property rights.
Hatred isn’t the first game to have been pulled from Steam with scant explanation. In 2012, sex game Seduce Me was taken off Steam, and the company’s spokesperson told Kotaku that “Steam has never been a leading destination for erotic material. Greenlight doesn’t aim to change that.” Of course, if your erotic material is presented through the lens of a triple-A game - like GTA V’s strip clubs - you can be sure that Steam will happily accommodate your product. And if your mass-murder simulator is made in a slightly more parodic fashion, as with 2011’s Postal 3, it will still be welcome on Steam as well.
It’s not just Valve’s platform. Apple’s App Store has the same problems. On the one hand, it has a far more comprehensive set of guidelines, which at least allow developers a bit of help in working out what games will be accepted into the store; on the other hand, those guidelines are applied with wild inconsistency. The critically acclaimed Papers Please, for instance, was forced to self-censor after breaching a rule about “pornographic material”, which Apple defined as “explicit descriptions or displays of sexual organs or activities intended to stimulate erotic rather than aesthetic or emotional feelings”. The game’s depiction of full-body scanners in use at the border controls of an authoritarian state is distressing and evocative, but fairly far from stimulating erotic feelings. Nonetheless, Apple put its foot down.
The internet is increasingly controlled by a few powerful gatekeepers. Steam, Apple’s App Store, Google search and Facebook’s news feed all have a level of concentrated power over our cultural and social discourse that has rarely been seen in history. Against that background, old canards about companies having the right to stock what they want are increasingly worn out. Nation states evolved a justice system, and rule of law, precisely to exercise their power responsibly, fairly and predictably. But in far too many situations, the best response companies can provide artists wanting to know if they are going to be censored is “wait and see”. And if they fall foul of an unknown rule, there’s no jury, no appeals process, and rarely any explanation.
In the case of Hatred, it’s hard not to feel that Valve made the wrong decision. That’s not because there’s nothing objectionable in the game, but because a ban plays into the developers’ hands. Their game had already been cynically marketed to supporters of the Gamergate campaign as something that “social justice warriors” would hate, to the extent that fans were asking for downloadable content which would add women like Anita Sarkeesian into the game as murder targets. Removing it from the Steam store just plays into that image as “the game they tried to stop”. What’s more, the opacity with which Valve makes its decisions means that there is nothing to point to that counters that impression. Rather than letting a cynical game die in obscurity, it’s now poised to become the forerunner of the 21st century equivalent of video nasties.
© The Guardian
16/12/2014- The Australian Defence League (ADL) are threatening to ignite anti-Islamic prejudices in the wake of Monday’s siege in Sydney. The group posted the following message on Facebook:
@nswpolice please protect people from these pathetic monsters of the Australian Defence League. https://twitter.com/BoyCalledAnn/status/544341055319986177/photo/1 pic.twitter.com/iVG3u65Lo3
The Lakemba suburb is home to one of the largest mosques in Australia and is often portrayed in the media as being home to a predominantly Arab and Muslim population. However, people around the world have retaliated to this call by creating the hashtag #illridewithyou on Twitter, an expression of solidarity against Islamophobia. Dozens of people were held captive by the gunman, revealed to be self-styled, radical cleric Man Haron Monis, in a cafe in Sydney’s busy shopping district. Monis made some of the hostages hold up an Islamic flag, provoking an outcry against Islam from the far-right ADL - an offshoot of the violent English Defence League - who have taken to Facebook to express their views under the motto “ban Islam”. "Here it is folks,” reads a post on the group's Facebook page, “homegrown Islamic terrorism in our backyard, courtesy of successive Australian governments and their brainwashed voters.”
When labelled as racist by the Australian Channel 10 news station, the group replied on Facebook: “To Channel 10, you call us racists and get an Islamist to spew your left wing bigotry. Go to an Islamic country and see how you fair [sic].” Earlier today Ralph Cerminarra, the president of the ADL, had to be escorted away by police from near the scene of the siege after he began shouting abuse. “Half the reason we've got this problem today is because of left wing bigots,” he yelled angrily before being dragged away by police. “These people may be murdered because of your left wing bigotry... It's finally happened," he continued. The hashtag #illridewithyou started trending on Twitter shortly after the siege began, as people offered to meet Muslims at their local bus and train stations and ride with them on their journeys, as a safeguard against possible retaliatory attacks.
It’s thought to have been inspired by a young Sydney woman’s post in which she described encountering a Muslim woman on Monday:
"The (presumably) Muslim woman sitting next to me on the train silently removes her hijab. I ran after her at the train station. I said 'put it back on. I'll walk with u' [sic]. She started to cry and hugged me for about a minute - then walked off alone.”
According to Twitter Australia, over 40,000 people used the hashtag within the first two hours of it being created, and this number has now reached over 120,000 as people urged solidarity and support for Muslims, fearing a backlash due to the events on Monday.
Australia's top Muslim cleric Ibrahim Abu Mohamed also issued a statement condemning the hostage siege:
"The Grand Mufti and the Australian National Imam Council condemn this criminal act unequivocally and reiterate that such actions are denounced in part and in whole in Islam," Mohamed said.
Muslim leaders across Australia have also denounced Monis’ actions. "We reject any attempt to take the innocent life of any human being, or to instill fear and terror into their hearts," they said in a statement on behalf of almost 50 prominent organisations within the Muslim community. The 16-hour siege is now over after armed police stormed the building. There have been reports of loud bangs and gun fire from the scene. Paramedics were seen carrying some of the hostages out on stretchers, and there have been unconfirmed reports that two people have died, whilst three are badly injured.
The ADL have yet to update their Facebook page in light of the recent developments but there is now a Change.org petition calling for all ADL pages to be shut down. The English Defence League (EDL) have also been posting on their Facebook page following the events in Australia. The group have railed against The Guardian newspaper, which they label a “leftist rag”, and mocked the UK prime minister for defending Islam as a peaceful religion. They have yet to respond to Newsweek’s request for comment.
15/12/2014- Dutch privacy watchdog CBP is threatening internet giant Google with a fine of up to €15m for contravening Dutch privacy legislation. Since 2012, Google has been combining information about users from Gmail, Google Maps, YouTube and search results into a single profile. This, broadcaster Nos points out, allows the company to offer more targeted advertising. However, the CBP says Google is not informing users properly about its actions or asking their permission. This, the CBP says, contravenes Dutch law. Privacy regulators in Britain, Germany, Spain and Italy are also taking action, the CBP says. ‘Google continues to make great, innovative, happy products but don’t fool us by collecting our personal info behind our backs,’ Dutch watchdog chief Jacob Kohnstamm told Radio 1. Google said it is disappointed in the CBP’s reaction and that it has recently made a string of proposals to European privacy regulators. ‘We are looking forward to discussing them in the short term,’ a spokesman said.
© The Dutch News
The Ferguson battleground shifts to the virtual world, and people are losing their jobs
14/12/2014- Have you posted a status about the Ferguson riots and race relations in America? If you have, your comments, racist or otherwise, could cost you your job.
As unrest on the streets of Ferguson dissipates, a new battlefront is opening up on social networks. Social media users are taking comments they deem offensive and forwarding them to the employers of the offending Facebookers. Vocativ picked up on what might be a burgeoning trend when the administrator of “Ferguson/Saint Louis Riot Updates,” a 70,000-strong Facebook page created to keep the community updated on riots and civil unrest, posted a message he received. The unedited message from Jackie Williams: “Thanks to your racist fb page….I’ve gotten at least 2 ppl fired from their job by screen shotting their racist comments and emailing them to the companies they work for. One was a 10 year employee at Anheuser Busch.”
The page’s administrator claims he revealed the message to encourage the page’s followers to enhance the privacy settings on their profiles. Outrage ensued. Hundreds of the page’s followers suggested that the self-anointed whistle-blower was just as racist as those whose jobs she jeopardized, and they have started to turn her own technique against her. Several people identified Easter Seals Midwest, an NGO that helps people with developmental disabilities, as her place of work, and began posting comments on the page. Her address and work phone number were also shared on the page.
Some of the recent reviews posted to Easter Seals Midwest regarding Jackie Williams.
Megan McClintock Malloy posted: “It’s sad that a company such as Easter Seals employs such racist people. People who have no care at all for fellow citizens. Is this what you want your company known for? If so I’ll spread the word!”
Easter Seals Midwest responded to the complainants that Williams’ views “do not reflect the views of the organization and plan to investigate this situation.” Then the administrator of “Ferguson/Saint Louis Riot Updates” announced that he had scheduled the page to be deleted in an effort to prevent further exposure of the followers’ racist statuses. It’s not the only page involved, however. Another user said she was calling out similar slurs on the “Justice for Mike Brown” page, and had already forwarded the comments of one worker to his employer, FedEx.
9/12/2014- On the evening of December 9th, as International Human Rights Day grew close, The Hon Paul Fletcher MP pressed the big red button to launch 'Fight Against Hate', a new reporting tool created by Australia's Online Hate Prevention Institute (OHPI). Mr Fletcher, who has headed the Australian Government's push on online safety, praised the new tool and the change it will make to efforts to combat online hate and the harm it can cause, particularly to children.
The launch event also featured a panel discussion with representatives of some of the groups regularly subjected to attacks online. The panel included Talitha Stone, whose campaign against US rapper Tyler the Creator saw her subjected to thousands of death and rape threats. Also on the panel was Julie Nathan, the Executive Council of Australian Jewry's Research Officer and the author of the 2014 report into antisemitism in Australia. Representatives of the Indigenous Australian community, the Muslim community, the peak body for ethnic communities, and the peak body representing parents were also on the panel.
The software, which people can now register a free account with at http://fightagainsthate.com, allows members of the public to report online content that contains a wide variety of hate speech. The software currently handles reports of content on Facebook, YouTube and Twitter, and the hate can take the form of antisemitism, anti-Muslim hate, misogyny, racism against Indigenous Australians, homophobia, cyber-bullying and other forms of hate. People using the software are asked to first report the content to the social media companies directly, then to report it through the software so that the response of the social media companies can be reviewed and measured. So far 35 organisations have signed up as supporters of the software, and these supporters, along with more in the pipeline, including from Government, will be able to access the content the public reports through the system.
“This system will empower people and ensure the time they put into reporting online hate is not wasted. Even if a platform provider rejects their complaint initially, once the item is in Fight Against Hate, human rights organisations, government agencies, or the media may choose to follow up on that item. Rather than being forgotten, online hate that is not resolved may be seen as a failure of self-regulation by the social media companies. The longer the incident stays unresolved, the greater the failure. The new system will empower not only the public, but key stakeholders like governments as well,” Dr Andre Oboler, CEO of the Online Hate Prevention Institute, explained.
Jeremy Jones AM, a co-chair of the Global Forum to Combat Antisemitism explained that the software had the support of the Global Forum and the Israeli Government. He explained that a report into the antisemitic data gathered through the system will be released at the Global Forum to Combat Antisemitism in Jerusalem in May 2015. The launch event was attended by 90 people from a wide range of community organisations, human rights organisations, government agencies and members of the public. With Fight Against Hate now live, the next challenge is building up a sufficiently large user base of people reporting online hate.
© The Online Hate Prevention Institute
Racism is alive and on sale through online retailers who have yet to remove racist and offensive content from their sites.
11/12/2014- Racist hate music is more about influence than making money. The Intelligence Report, by the Southern Poverty Law Center, says that the racist music business was a multimillion-dollar industry in the 1990s. The genre also doubles as a recruiting tool. In 1999, the National Alliance, formerly the most prominent neo-Nazi organization in the U.S., bought Resistance Records and “were selling more than 70,000 CDs annually by the early 2000s.” Even though the sale of physical copies has slowed, the SPLC says that iTunes and other distributors have provided a “new and unprecedented tool to effectively distribute hate music.” An investigation into hate music by the SPLC revealed that as of September 2014, there were 54 “white power” bands with songs being sold on iTunes. After the report was released, Apple removed only 30 of the groups as of Wednesday, according to The Daily Beast.
According to the SPLC report, iTunes’ “Submission to the iTunes service” says that submitted materials “shall not infringe or violate the rights of any other party or violate any laws, contribute to or encourage infringing or otherwise unlawful conduct.” Despite the policy, songs like “Jigrun” by the Bully Boys were being sold on iTunes. Part of the song says, “We’re going on the town tonight / Hit and run / Let’s have some fun / We’ve got jigaboos on the run / And they fear the setting sun.” The mainstream media caught on to the influence of hate music after Wade Page, the white supremacist who killed six people at a Sikh temple in Wisconsin, was known to have played in a few “white power” bands, according to The Daily Beast.
At least Apple took some action. Amazon and Spotify still allow the hate music to be sold and purchased. Bands such as Skrewdriver, Max Resist and Brutal Attack are available for download on Amazon even though its policies claim that offensive products are prohibited from its site. Spotify bases its removal of content on the index maintained by Germany’s Federal Department for Media Harmful to Young Persons. “We take this very seriously…We’re a global company, so we use the BPjM [Bundesprüfstelle für jugendgefährdende Medien/Germany’s Federal Department for Media Harmful to Young Persons] index as a global standard for these issues,” Spotify said in an e-mail to The Daily Beast. Content not removed by the index is handled on a “case by case basis,” the company added. As of Monday, Spotify hadn’t removed any hate music.
© Atlanta Black Star
One of the World's Largest Internet Companies is Promoting Anti-Semitic Site Veterans News Now
5/12/2014- Yahoo is one of the most visited sites on the internet. How fortunate that is for Debbie Meron, an old-school anti-Semite whose hate site Veterans News Now has been promoted on Yahoo's front page several times in recent weeks. Let's jump straight to the substantiation for that last sentence, because it should go without saying that if its three key points are all true — in other words, if Yahoo considers Veterans News Now (VNN) a legitimate news source and prominently features it on its front page; if Veterans News Now is in fact an extremist site; and if VNN is run by a fanatical Jew-hater — then Yahoo has a serious problem, which it must quickly remedy. CAMERA has recently received several complaints from readers shocked to see VNN articles promoted on Yahoo and Yahoo News. The following image, a screen shot of the Yahoo homepage on Dec. 4, proves point number one: Yahoo does treat VNN as a legitimate news site and, at least for some readers, gives it one of the most coveted spots on the internet.
The second point, too, is easy enough to substantiate. Is Veterans News Now a site that peddles hate and conspiracy? If Holocaust denial and 9/11 trutherism fit the bill, it clearly is. One recent article on VNN, for example, rails indignantly at a commentator whose crime was to describe the Holocaust as "a horrific genocide":
Recently, Abby Martin, the host of "Breaking the Set" on the Russia Today network, released two segments on the subjects of the Nazis and the "holocaust," an event which she described as "a horrific genocide that forever changed the world." One wonders why Martin – like her compatriots in the Zionist-dominated Hollywood establishment — places exceptional status on the "holocaust" when in fact a far greater number of non-Jews — particularly Germans, Russians and Chinese — perished during the Second World War than even the highest exaggerations of the sacred Shoah.
It only gets worse from there. About Auschwitz, the author approvingly states that "some historians estimate less than 100,000 people died in that camp, primarily from disease and starvation caused by Allied bombing." He continues:
The camp's true purpose bares little resemblance to the picture painted in Hollywood movies and mainstream history books. It is an irrefutable fact that Auschwitz had facilities one would never expect to find in a bona fide "death factory," such as a swimming pool, a soccer pitch, a theater, a library, a post office, a hospital, dental facilities, kitchens and so on. Inmates were encouraged to participate in orchestras, theater productions, soccer matches and other cultural and leisure activities.
The takeaway: Auschwitz was a "labour camp," not a death camp. The gas chambers did not exist. Nazis used Zyklon B to save Jewish lives, not extinguish them. The claim of 6 million Jewish victims is a hoax. The Jews are largely responsible for communist Russia and its crimes. And of course, "the media's obsession with the holocaust is part and parcel of the Zionist campaign to cast a spell over the collective consciousness of the Western world in order to desensitize the public to the suffering of the Palestinians and shield Israel from criticism."
VNN also specializes in conspiracy theories about the 9/11 attacks. One piece explicitly backs "the rising alternative thesis within the community of truth seekers, which claim that the masterminds were a Zionist network close to the Israeli Likud." Another argues that Israel is responsible not only for 9/11, but also for the anthrax attacks that followed. Yet another piece, which documents what the author calls "a textbook Zionist mind-control twist," purports to prove Israeli responsibility for 9/11 in this way:
Actions trump lies. Evidence does not lie . . . so how has the American public been so brainwashed by lies, in light of so much evidence? Are Zionists that intelligent, or is the American public that unintelligent—and how did even that obvious question become a "third-rail issue"? Totally uncool, our tradition of being outsmarted by Zionists even to the point of "Rothschilding" our descendants' future. Is it possible for the American public to think their way out of Zionist enslavement . . . or is Gaza a preview of our future? … 9/11 was trademark Zionist false-flag testing of what they might get away with, a pushing of boundaries that, magically, stayed in bounds. Zionists third-rail magicians still brag about 9/11.
Got that? Good. Then on to the third point: Could it be that Yahoo links to VNN because, problematic as the site might be, it is run by a credible, ethical journalist? Is it possible that these unhinged articles (and the many other similar ones on the site) were posted without the knowledge of VNN's editor-in-chief Debbie Meron? It is certain that Meron knows about the Holocaust denial article mentioned above — she weighed in about the piece in its comments section:
Nor are the 9/11 conspiracy theories outliers. One of the pieces cited above is a currently featured item on the VNN home page, and the site's section dedicated to 9/11 "truth" is always one click away as one of the handful of topics on the menu bar at the top of every page. In fact, Meron seems to be a perfect fit for the hate site. The piece to which Yahoo linked on Dec. 4 was an article by Phil Weiss, which had originally appeared on his anti-Israel site Mondoweiss but was republished by Meron on both VNN and her other website, My Catbird Seat. In the comment section under the latter reposting, Meron fantasized about the Nazi Waffen SS "rip[ping] … into rubbish" the Israeli army, before calling for "Zionists" everywhere to be made personae non gratae. "If you want the best future for your people, never allow them entry into your country let alone any opportunities in your country," Meron wrote.
"Zionist," of course, is so often a euphemism for Jews, and in the case of Meron her feelings about a people who dangerously undermine their host countries are clearly directed at the Jews in general. Under an article posted on her site My Catbird Seat, she posted the text of a supposed interview with Harold Wallace Rosenthal in which the former Senate aide is quoted admitting to the Jewish conspiracy that secretly runs the United States. In her comment, Meron highlights what appears to be her favorite part: "We Jews have put issue upon issue to the American people. Then we promote both sides of the issue as confusion reigns. With their eye's fixed on the issues, they fail to see who is behind every scene. We Jews toy with the American public as a cat toys with a mouse." (Unsurprisingly, there is no credible source for the interview, which can be found sprinkled throughout the anti-Semitic dregs of the Internet.)
So Veterans News Now is indeed run by an anti-Semite. And it indeed publishes Holocaust denial. And most disturbingly, it is indeed promoted on Yahoo's news feed. CAMERA has contacted Yahoo News to ask why it legitimizes and propagates a hate site, but did not immediately receive a response. Yahoo owes its users an explanation about why it legitimizes and promotes an anti-Semitic hate site. But unfortunately, the company has been slow in the past to respond to anti-Semitism on its site. A question posted several years ago on Yahoo Answers asked why "Judaism glorifies genocide"; the answer explained that "Orthodox jews consider mass-murder to be very honorable, and any reader of the Old Testament is acutely aware of the creepy preoccupation with killing babies…." The page was repeatedly flagged as a violation of Yahoo's guidelines, and emails were sent to the company asking why the hateful page was not removed. These were ignored, and the hateful "question" remained online for weeks, until CAMERA finally went public with the issue.
Now Yahoo has another chance to show that it takes seriously concerns about hate speech on its site. Will it forthrightly respond to those concerns and assure readers that Veterans News Now and other extremist sites will no longer be featured on its news feed? Or will it continue to mainstream anti-Jewish bigotry and 9/11 conspiracy theories?
A man has admitted posting offensive comments on Facebook about an Edinburgh boy beaten to death by his mother.
4/12/2014- Shaun Moth posted abuse about Mikaeel Kular on the social networking site the day before the three-year-old boy's body was found in a wood in Kirkcaldy. The 45-year-old, who lives in Aberdeenshire, posted the comments on an anti-racism page as a police search was underway for the boy in January. Rosdeep Adekoya, 34, was jailed for 11 years in August for her son's death. Adekoya had originally been charged with murder, but admitted the reduced charge of culpable homicide. Moth, from Whitehills, pleaded guilty to conducting himself in a disorderly manner, posting grossly offensive comments on Facebook and breaching the peace, aggravated by religious prejudice, when he appeared at Aberdeen Sheriff Court on Thursday. He is due to be sentenced at a later date.
Fiscal depute David Bernard said: "A post was put on the page for a group entitled Scotland United Against the racist SDL. During the evening of 16 January, one of the administrators for that Facebook page noticed a comment about the missing child which was made at 17:45 that day by a user named Shaun Moth." Other racist comments were also posted by Moth, one of them ending: "My work is done here. wpww 14/88." Mr Bernard said the acronym wpww was understood to stand for White Power World Wide and 14/88 was a neo-Nazi code for "Heil Hitler". Describing himself as a National Socialist, Moth told officers that he often went on to the Facebook page for debate and classed it as a left-wing Marxist page for "communist types". Moth was asked if he was racist and said he was an intelligent man and "not a mindless yob". Moth was remanded in custody.
© BBC News
People who use social media to "peddle hate or abuse" will not escape justice by hiding behind their computers or phones, Scotland's top law chief has warned amid new guidelines on whether messages posted online constitute a crime.
4/12/2014- The Crown Office and Procurator Fiscal Service (COPFS) said it wants to reassure the public that it takes such offences as seriously as crimes committed in person. It has set out four categories of behaviour, including "grossly offensive, indecent or obscene" comments. However, it said there is no danger to freedom of speech, and stressed that people will not be prosecuted for satirical comments, offensive humour or provocative statements. Lord Advocate Frank Mulholland QC said: "The rule of thumb is simple - if it would be illegal to say it on the street, it is illegal to say it online. "Those who use the internet to peddle hate or abuse, to harass, to blackmail, or any other number of crimes, need to know that they cannot evade justice simply by hiding behind their computers or mobile phones. "I hope this serves as a wake-up call to them. "As prosecutors we will continue to do all in our power to bring those who commit these crimes to justice, and I would encourage anyone who thinks they have been victim of such a crime to report it to the police."
The Crown Office said it has chosen to publish its guidance to ensure there is absolute clarity both in terms of its approach and the difference between criminal and non-criminal communications. It said it will take a "robust approach" to communications posted via social media if they are criminal in content, in the same way as such communications would be handled if they were said or published in the non-virtual world. The four categories of communication which prosecutors will consider are those which:
@ Specifically target an individual or group of individuals, in particular communications which are considered to be hate crime, domestic abuse, or stalking;
@ May constitute credible threats of violence to the person, damage to property or incite public disorder;
@ May amount to a breach of a court order or contravene legislation making it a criminal offence to release or publish information relating to proceedings;
@ Do not fall into categories 1,2 or 3 but are nonetheless considered to be grossly offensive, indecent or obscene or involve the communication of false information about an individual or group of individuals which results in adverse consequences for that individual or group of individuals.
In an interview on BBC Radio Scotland, the Lord Advocate was asked how "grossly offensive" could be defined when it could be seen as relative. He replied: "The guidance sets out that it would not include, for example, humour, satirical comment, which is part of the democratic debate, so there's guidance to prosecutors as to what's not included. "It doesn't include offensive comment because we recognise that, in a democratic society, with use of social media you can have offensive comment which wouldn't be criminal but it's really the category above the high bar grossly offensive which has a significant effect on the recipient of the comment. "We've all seen on the media reports of what you described, internet trolls, where this kind of comment, grossly offensive comment, is sent out to directly wound and has quite a significant effect." He added: "There's very detailed guidance of all the factors that prosecutors will take into account when they assess whether or not to raise criminal proceedings in relation to grossly offensive comments posted on social media."
© The Herald Scotland
2/12/2014- The following White Paper addresses the role of the UK government, social media companies and Internet service providers (ISPs) in monitoring and policing the Internet for extremist and/or terrorism-related content. The paper seeks to analyse the effectiveness of the UK government’s Prevent strategy and provide recommendations for its improvement in line with the current nature of the threat. Currently, the two biggest challenges for UK counter-terrorism are the radicalisation and recruitment of individuals by the jihadist organisation Islamic State (IS) and the use of the Internet by IS and other extremist organisations to spread unwanted and potentially dangerous ideologies and narratives internationally. This subject is of great importance, especially as the government debates how best to tackle extremism and adequately implement counter-extremism measures both offline and online. Sections 2 and 3 discuss the framework of the government’s Prevent strategy, while sections 4 through 9 detail the challenges posed by extremist and terrorism-related content online. Section 10 addresses the role of Prevent in countering online extremism in the UK.
The Role of Prevent in Countering Online Extremism (full report - pdf)
© Quilliam Foundation
What do you do when you see hate speech on your Facebook or Twitter feed? Do you calm yourself down, swallow the bitter pill and move on, or do you comment bravely and report the image/page/user/group?
3/12/2014- For Israeli student Shay Amiran-Pugachov, fighting anti-Semitism and hate speech online has become a full-time job. Amiran-Pugachov is the Program Coordinator of the national program ISCA - “Israeli Students Combating Antisemitism.” Each year, 30-40 top students from Israel’s various higher-education institutions are selected to take part in this special program, where they monitor anti-Semitic behavior and discourse online, mainly on social networks like Facebook, Twitter and Youtube. Every day, they take time out from studying in order to make our world a little better. This year alone, the group of students took down more than 5,000 anti-Semitic Facebook pages, users and groups, and helped expose and bring to the public’s eye the French comedian who invented the reverse Nazi salute (the Quenelle), who has been publicly condemned and had his show cancelled. Days before the program kicks off its fourth year, Amiran-Pugachov sat down with “Israelife” to talk about the world of online anti-Semitism, and the very special and influential program, which is dedicated to making a change in Facebook and Twitter’s Community Standards as well as in people’s own personal standards.
Why gather students to fight anti-Semitism? Where did that idea come from?
"The idea to gather students to fight anti-Semitism came from the need to leverage the students’ academic experience and varied fields of education and talent in countering anti-Semitism. In our program, there are students of Computer Studies, Languages, History et cetera, who can contribute to our battle against hate speech and anti-Semitism. By becoming acquainted with the program, they grow more educated about the various ways and forms in which anti-Semitism appears online, and are thus able to detect it and react."
When did you join the program and what drove you to do that?
"I find anti-Semitism very disturbing. From my point of view, anti-Semitism is ignorance. It is blind hatred, regardless of what actually happens in reality. I’m talking about people who follow ancient blood libels and honestly believe Jews drink Palestinians’ blood, control the world (from politics to the media) with their money, and other stories you wouldn’t believe people actually stick to. As a Political Science and Communications Studies student, I find the new form of anti-Semitism very interesting: Since the end of WWII, anti-Semitic behavior and discourse were considered out of line, and taboo. Haters attempted to hide their personal opinions and hide in the shadows. Now, things are different. The fast-paced growth and development of social media helps haters spread anti-Semitic discourse and reach younger audiences, who later use this false information in school assignments. But it’s not only the young folks. The unaware public is easily affected by the information online. We must always be present on social media to provide them with the correct information."
Do you think anti-Semitism is as big of a threat to Jews now as it was 80 years ago?
"First, let me just say that although anti-Semitism targets Jews, it does not affect the Jewish people only. Anti-Semitism is also an indicator of xenophobia and minority persecution: Whenever anti-Semitism is on the rise, we can see others being affected by it, such as Gypsies, Armenians, and LGBT people. We witnessed it recently with Jobbik and Golden Dawn - anti-Semitic political parties in Hungary and Greece that persecuted various minorities, not only Jews. In recent years, as social media has become a meaningful part of our lives, allowing people to express themselves while hiding behind a keyboard, anti-Semitic discourse has become more and more popular, especially amongst younger audiences. Before the age of the internet and social media, it could be contained more easily, as anti-Semitic books were banned from stores, for example. Nowadays, anti-Semitism is becoming more and more common, and you don’t even need to make an effort to find it on Twitter, Facebook and Youtube."
Lately, complaints have been heard about Facebook's "permissive" policy when it comes to anti-Semitic content. Do you agree?
"This year, we have witnessed some improvement, but unfortunately, it’s far from enough. Many of our reports to Facebook of Community Standards violations concern content which is bluntly anti-Semitic, but Facebook still refuses to remove it. I believe it’s because they only examine parts of the content in question and don’t see the full picture, literally. For instance, you can post a photo of sweet little cats - nothing anti-Semitic there - but add a description saying “those cats are against Zionist rats.” I also believe there are some words in Facebook’s algorithm that assist them with flagging inappropriate or hateful content. Sadly, this is not enough. Therefore, Facebook must hire more people of various nationalities who speak various languages to truly enforce those Community Standards."
What can we do to help fight anti- Semitism online?
"First, follow ISCA’s channels on Facebook and Twitter. From time to time we flag hateful content and ask our followers to help remove it. Second, do not be afraid to report inappropriate or hateful content, by using the “report” option on Youtube videos, Tweets and Facebook posts/pages/groups/users. By reporting, you flag the content as harmful or hurtful and tell Youtube/Facebook/Twitter that you don’t like it. The more people report, the clearer the message will be, and the greater the chance of removal. Third, and most important - be yourself. If you see injustice, correct it, and don’t be afraid to deal with anti-Semitism online. The worst that can happen is being blocked or ignored. That is far less traumatic than encountering a neo-Nazi group in the real world, and can help prevent that from happening. Know that we are here for you; you can ask us for help and let us know if you encounter anti-Semitism online."
How would you respond to the claim that Israelis jump on every criticism of Israel's policy and scream "anti- Semitism!"?
"There are people with legitimate criticism of Israel’s policy. While I mostly disagree, I can accept it. Not all criticism is anti-Semitism. The problem is that anti-Semites try to disguise their true selves by hiding behind supposedly legitimate criticism. If you dig deeper into their claims, you’ll find that their criticism is anything but legitimate. When someone opposes human rights violations and decides to boycott several countries, including Israel - I can disagree, I can explain why he is wrong, but I can’t call him an anti-Semite. But when Israel is singled out as the only country targeted by a “human rights” type of claim, and the Holocaust is claimed to be second only to the so-called “Palestinian Holocaust” - that is anti-Semitism. When media coverage disproportionately focuses on Israel’s actions in Gaza while ignoring places like Syria, Iraq or Qatar, I can only assume that there are considerations involved other than pure journalism.
Let’s use an example: Americans are probably familiar with the discussion revolving around US aid and financial/military support to other countries. Some are for it; some are against it, claiming the taxpayers’ money should be spent on domestic matters to help the local economy. There’s another group of people, though. They claim that the US should stop aiding Israel specifically, with the same explanation about using the money to help the local community. Since I am not an American, I can’t really make a judgment call on that, but as an Israeli, I can’t help but wonder what stands behind the second type of claim. It is one thing to oppose the idea of foreign aid, and another to single out Israel alone. The US supports other countries as well, including Egypt, Saudi Arabia and Qatar, so why haven’t those countries been mentioned by this group of people, who claim to oppose foreign military aid altogether? Israel is a young country, with a lot to learn and many places to develop. It isn’t perfect and there are plenty of things that need to be fixed. Pointing out problems is okay, but criticism must be balanced and fair in order to be legitimate criticism intended to improve and not to hurt."
Is there an online experience you remember in particular from your time in the program?
"I remember encountering a Facebook user who accused Israel of leading a global scam in order to create a new world order. I joked and replied: “Yeah, right. Israel was established by an alliance that seeks to control the world.” To that he replied: “Yes. All Israeli inventions are part of it, and they want to use them to make experiments on the people in Gaza.” Another fellow I remember kept posting on Twitter quotes by Iranian presidents about Israel’s “war crimes” and human rights violations. I asked him to tell me how the Iranian government respects freedom of speech, freedom of the press and freedom of sexual orientation. I reminded him that they execute homosexuals and opposition leaders. These examples express the new form of anti-Semitism, which can be harder to spot than the classical form we all know from decades ago. It pretends to be criticism."
Why is it so important to fight anti- Semitism online as well, and not settle with battles "off line?"
"The battle against anti-Semitism should combine the “offline” with the “online.” We have to remember that behind every online user there is an “offline” extremist who believes that Jews should be wiped out. In today’s world, people’s “likes” and “shares” are a sort of social acknowledgement of their thoughts and beliefs. We cannot allow anti-Semitic discourse to gain popularity through growing numbers of online social acknowledgements. Moreover, words often grow into actions. We all witness it on an almost daily basis with the bullying phenomenon. Just as we won’t tolerate bullying, we must make a clear statement that we will not allow anti-Semitism either. We must help prevent it from spreading online, and thus help prevent attacks against Jews in the “real world,” similar to the 2012 murders outside a Jewish school in Toulouse. We can only hope that the proper measures are being taken by the authorities against anti-Semitism in the “real world,” but online we can actually take action. We have to make sure that the various social media channels constantly enforce their Community Standards, protecting minorities and private individuals from persecution."
Where can we find anti- Semitism online?
"You can find anti-Semitism online in various forms. The most common is the comments (“talkbacks”) on articles and op-eds regarding Israel on news websites: mostly they appear on the website itself, below the article, but some people comment by sharing the story on social networks. There are also Facebook pages and groups with a clear anti-Semitic message. Others are dedicated to anti-Israel propaganda with hidden anti-Semitic motives. They do that by presenting quotes out of context, inventing non-existent quotes by Israeli and world leaders, sharing photos of bleeding children taken in Syria and presenting them as the actions of the Israeli army in Gaza, et cetera. We can also find anti-Semitism on “outcast” websites, Youtube channels or Facebook pages, run by extremists who use their hatred as an engine to gain popularity. There are also politicians and public figures like British MP George Galloway, filmmaker Alain Soral, and French comedian Dieudonné M’bala M’bala, who find no shame in denying the Holocaust and spreading hatred.
But the most dangerous form of anti-Semitism online, in my opinion, is on “Yahoo! Answers,” which is used mostly by youth for school assignments instead of an encyclopedia. Some pupils ask an innocent question, like “What caused WWII?”, and haters use the platform to rewrite history, posting answers like “Because the Jews wanted to take over the world and make all countries fight against each other.” In this specific case, I stepped in, wrote to the person who asked the question and gave him a proper answer, but there are countless twisted answers there, which we try to replace. I recently heard of a student at an American college who got an A+ on an essay denying the Holocaust. Everything about the essay’s structure was right: the right font, the right references and the right structure, but the content was far from accurate. Therefore, we must always maintain a presence on all online platforms that may contain ignorance and inaccuracies, and shed some light there with the truth."
© The Jewish Journal
In the first case of its kind in Romania, the country's highest court ruled that an ‘offensive’ message that a man wrote on his Facebook page was not private.
4/12/2014- The High Court of Cassation and Justice ruled that Mircea Munteanu, a clerk in the Transylvanian city of Tirgu-Mures, who wrote a message on his Facebook page quoting the Nazi slogan ‘Arbeit Macht Frei’, must pay a fine imposed for publishing offensive material. Munteanu wrote the note two years ago criticizing anti-government protesters. He was quoted by a local newspaper, and soon afterwards, Romania's anti-discrimination council, CNCD, fined him 1,000 lei (some 225 euro) for “nationalist propaganda that offends human dignity and is an offence against a group of people”. Munteanu decided to dispute the decision in the supreme court, saying his message was just a personal note on his Facebook page.
But the High Court of Cassation and Justice, the country’s highest court, decided that Facebook pages are public space, so Munteanu has to pay the fine. “The Facebook social network can’t be equated with a mailbox, in terms of controlling the posted message. A person’s personal profile on Facebook, although accessible only to a small number of people, is still public space, as any of the ‘friends’ can distribute the information posted by the page owner,” the court said in its decision. The decision, the first of its kind in the country, could set a precedent for future cases, although in the Romanian legal system, which is based on the French system, each case is judged separately, and not based on precedent. Facebook is extremely popular in Romania, with around 7.5 million accounts currently active.
© Balkan Insight
2/12/2014- With the support of the city-state of Berlin, a German association has launched a new mobile phone application which aims to update users on neo-Nazi activity in the nation's capital and how to combat it. "Every app user will receive, if they wish, automatic notifications about all neo-Nazi group actions in Berlin," Bianca Klose, the director of the Berlin Association for Democratic Culture (VdK), told AFP today. "In that way the user will be able to decide how to combat those extremist movements, be it through participating in counter-demonstrations, which are also notified in the app, or, for example, putting a flag in their window," she added. The "Against the Nazis" app can be downloaded for free on Android and iPhone mobiles and is available in three languages: German, English and Turkish. While the far right in Berlin is a minor electoral force, small groups are active in certain neighbourhoods of the German capital, prompting VdK and other organisations to form the "Berlin against Nazis" movement in March 2014 in an effort to stamp out extremist activity.
By Soraya Nadia McDonald
2/12/2014- At first glance, it seems like a clever bit of Internet Darwinism: If you don’t possess the savvy to privatize your obviously racist social-media activity, then your employer receives a call — or 10 or 20 — about your inappropriate online brain-drippings. That’s the mission of the new Tumblr blog “Racists Getting Fired.” It seems like a natural progression in a world that’s already home to the “Yes, You’re Racist” Twitter account, which publicly shames Twitter users expressing racist sentiments by retweeting them, especially if their tweets are qualified by “I’m not racist, but …”
For example: “Man I ain’t racist but a Mexican family is so annoying.” And: “I’m not a racist but when I saw a black African coon stormtrooper I was taken aback. Stormtroopers arent colored.” Racists Getting Fired doesn’t just publicly shame — it adds consequences by rounding up those willing to call a business to say they don’t want to patronize a place with an employee who says things like “#Ferguson one less n—– on #foodstamps.” The Tumblr quickly took off. Racists Getting Fired gained nearly 40,000 followers in a matter of days, with 15,000 submissions in the first eight hours of the blog’s existence, according to its moderator.
But there was a hitch that revealed a problem with Internet bloodlust: Sometimes the torch-wielding throngs get it wrong. Such was the case with Brianna Rivera, a woman who certainly appeared to have posted racially charged hate speech to her Facebook account. Later, Racists Getting Fired was alerted that Rivera was a victim herself; the account was a hoax created by an ex-boyfriend and submitted to the Tumblr with the aim of not just getting Rivera fired from her job at a movie theater, but smearing her as well. This led to new submission guidelines: among them, that users submit only authentic public profiles, with links to corroborate screen shots, and only posts that are obviously, explicitly racist. The moderator vowed to vet future posts before publishing them.
Because Racists Getting Fired grew so popular so quickly, the moderator, an anonymous, queer-identified woman, found herself staring down threats from 4chan, she said. Early Tuesday morning, she published a request seeking new moderators to take over Racists Getting Fired, or even create a new blog with the same goal if hers was shuttered. After soliciting legal advice in an earlier post, she shared this message:
I began this blog much to publicly, with an excess of personal information bc i could not have forseen the skyrocketting of attention this blog has received. i was not prepared for the legal threats, nor for being hunted by 4chan doxxers and other anti-social justice websites. so it is too late for me. moving forward with the knowledge of my mistakes, i am looking for new moderators to completely take over this blog. for your safety and for the longevity of this movement, you need to be damn well versed in the language of online security, you need to be able to mask your presence and cover your tracks. i will only consider serious inquiries.
Despite this misstep, Racists Getting Fired has already proved to be highly effective. The moderator created a section called “gotten” documenting those fired once their employers were made aware of their workers’ racist online ramblings. It’s full of sullen apologies from the terminated offenders — and victory e-mails like this one from Brown’s Car Stores in Virginia:
Earlier today, Brown’s Car Stores was made aware of racist and other inappropriate posts made by an employee. Brown’s does not condone nor does it tolerate racism, bigotry or any other expression of prejudice or discrimination against anyone of any race, gender or religion. We have taken immediate action and the individual is no longer a part of the Brown’s family.
In recent days, there’s been much discussion of how to get people to keep paying attention to the problems highlighted by Ferguson once protesters are no longer blocking freeways and marching through city streets. And one way to do it has been to force businesses to take a hit. Protesters in Missouri shut down malls and occupied major stores on the biggest shopping day of the year. Most media outlets, including The Washington Post, did not attribute the 11 percent drop in Black Friday sales to the efforts of #BlackoutBlackFriday and #BoycottBlackFriday, however.
There’s an argument that it should be financially untenable to support racism. To that end, Mother Jones recently compiled a list of Fortune 500 companies it said were “funding the political resegregation of America” by donating to the Republican State Leadership Committee, an organization the magazine charges with bankrolling GOP gerrymandering efforts that created majority-minority voting districts. That list, or the thinking behind it, could easily be traced back to the same sentiment that fueled the creation of the now-defunct BuyBlue.org. James Watson, the “father of DNA,” has certainly found that there was a steep price for the racist comments he made in 2007: He’s now selling his Nobel prize as a way of making up for his lost income. That argument probably has something to do with why Racists Getting Fired has been successful: It’s not just that employers are horrified by workers’ use of the n-word or clearly racist stereotypes, it’s that they can’t afford to be painted as racist, either. Though this tactic doesn’t do much to target structural, institutionalized racism, Racists Getting Fired provides a clear means of attacking individual racism, and it probably provides a gleeful jolt of dopamine when there are actual results.
So there are questions: What are the goals of Racists Getting Fired, beyond, well, getting racists fired? Does it want to teach a lesson? Does it want to ensure there are meaningful consequences to publicly spouting racist venom? Is the idea to eradicate this sort of thinking or simply put it out of view of polite society? One would imagine that a man who loses his job after calling the president a “p—y a– n—–” and threatening to kill undocumented immigrants isn’t going to execute an about-face when it comes to his racial politics. In fact, there’s a good chance he digs in his heels and simply learns how to better conceal his prejudices. Is that victory? Whatever the fate of Racists Getting Fired, its moderator has let her thoughts be known. “I will retire, but this will not die,” she wrote. “YOU CAN’T KILL US ALL … WHITE SUPREMACY MUST PAY.”
Soraya Nadia McDonald covers arts, entertainment and culture for the Washington Post with a focus on race and gender issues.
© The Washington Post
Anti-Defamation League poll finds Israelis witness steep increase in ‘anti-Israel expression’ in 2014
2/12/2014- Jewish-Israeli teenagers faced more anti-Semitism and “anti-Israel expression” on the Internet in 2014 than they did last year, according to an Anti-Defamation League poll. The survey, which was announced Tuesday, polled 500 Jewish Israelis aged 15 to 18 in November. It found that 51 percent of the participants reported encountering “attacks” on the Internet because of their nationality, compared to 36% last year. Eighty-three percent of the teens reported seeing anti-Semitism online in some form through “hate symbols, websites, and messages found on social media and in videos and music,” compared to 69% last year. The respondents noted that online anti-Semitism increased significantly during Israel’s war in Gaza this summer. “The more teenagers in Israel are using the Internet to connect with friends and share social updates, the more they are coming into contact with haters and bigots who want to expose them to an anti-Israel or anti-Semitic message,” Abraham Foxman, the ADL’s national director, said in a news release issued by the organization.
The survey also found that the teens encountered more anti-Semitism on social media websites such as Facebook and Twitter. Eighty-four percent reported seeing anti-Semitism in Facebook posts or tweets, compared to 70% last year. Sixty-five percent of the teenagers noted that they took action in response to the posting of anti-Semitic content by contacting website administrators or responding with comments of their own. The poll was conducted in Hebrew by the Israeli polling company Geocartography. It has a margin of error of plus or minus 4.4%.
© JTA News.
A Facebook page supporting the anti-Israel Boycott, Divestment and Sanctions (BDS) movement on Wednesday uploaded a Photoshopped image of Nazi concentration camp prisoners holding anti-Israel signs.
28/11/2014- The picture, posted by a page named “I Acknowledge Apartheid Exists”, shows skeletal survivors holding up signs that read “Israel Assassins,” “Break the Silence on Gaza,” “Stop the Holocaust in Gaza” and “Stop US Aid to Israel.” A sign in the far back of the image says Gaza is “the world’s biggest concentration camp,” while another poster shows a Palestinian flag along with the words “Free Palestine.” A slogan at the bottom of the offensive image reads, “Whatever happened to ‘never again?’”
The Facebook page, which boasts over 91,000 members, captioned the post “Viva Palestine.” At the time of publication, the picture had been “liked” by 307 users and “shared” on the social media site by 110 users, including the Central NY Committee for Justice in Palestine. Many Facebook users expressed disgust over the image, calling it “inappropriate” and “shameful” and asking for the picture to be taken down. One user said, “I find this really disturbing. It’s not a case of ‘not getting it’. How can exploiting an image of other people’s suffering be an acceptable thing to do? Is that not what we’re supposed to be against??”
Another commenter said the picture is not just distasteful but “outright anti-Semitic, incredibly unpleasant, inappropriate and sullies the name of everyone who is trying to oppose Israel’s actions on Palestine.” Responding to the criticism, the Facebook page claimed the image is intended to teach a lesson. The page’s administrator said, “I am not going to stop posting something because some people do not get it. We have to teach them at some point. If people think we should not post because some people do not get it, we may as well not post anything at all.” The page was created in March 2013. It claims its mission is to “promote the narrative that Palestinians deserve the same rights and liberties that Israeli’s enjoy.”
© The Algemeiner
MPs this week criticised Twitter’s “defensive” response to concerns about rising online anti-Semitism after a meeting with the social media giant, in the wake of vile abuse aimed at Jewish Labour MP Luciana Berger.
27/11/2014- John Mann, chair of the All-Party Parliamentary Group Against Anti-Semitism, joined Hendon MP Matthew Offord and others in raising the issue during talks in Dublin with both Twitter and Facebook on Monday. While Facebook was praised for its approach, the parliamentarians were less impressed with Twitter, which defended its response to online anti-Semitism by saying: “There’s so much out there,” according to those present. “They likened the tweets to hearing an offensive conversation in the street, meaning that it’s soon gone as you pass by,” said Offord of the micro-blogging site’s argument. “Needless to say, we don’t see it like that.”
The parliamentary group said Twitter refused to comment on the details of individual cases, although the issue of Ms Berger was brought up. “Facebook was amenable, open and willing to engage with our concerns,” said Offord. “We did not feel the same about Twitter. They were very defensive and not as proactive.” The criticism comes in the same week that social media giants were forced to defend their response to online threats made by Fusilier Lee Rigby’s killers, just days before he was murdered in Woolwich in May last year. Both Mann and Berger have suffered online abuse in recent weeks, with neo-Nazi group members having been arrested and jailed.
The All-Party group will now press their case for changes with the Ministry of Justice and the Home Office. Among the ideas being discussed is a so-called ‘Internet ASBO’, first proposed by Mann in the House of Commons. Currently, a court order can ban sex offenders from using the internet, and some MPs want this to be extended to those determined to perpetrate race hate. “When someone is banned from one social media site, they just move to another platform, and we need to prevent this,” said Offord. “We’re also pressing for better identification, although this can be difficult because 80 percent of all posts are made from hand-held devices.”
Last year, Twitter was eventually forced to give French authorities data that identified users responsible for a spate of vile anti-Semitic tweets, but only after a long-running court battle launched by Jewish students.
© Jewish News UK
Facebook was the firm that hosted a conversation by one of Fusilier Lee Rigby's killers five months ahead of the attack, the BBC has learned.
25/11/2014- Michael Adebowale said he wanted to kill a soldier and discussed his plans in "the most graphic and emotive manner", according to the UK's Intelligence and Security Committee. The ISC said the social network did not appear to believe it had an obligation to identify such exchanges. Facebook said it does tackle extremism. "Like everyone else, we were horrified by the vicious murder of Fusilier Lee Rigby," said a spokeswoman. "We don't comment on individual cases but Facebook's policies are clear, we do not allow terrorist content on the site and take steps to prevent people from using our service for these purposes." The ISC's report said, however, that the company should do more. "Had MI5 had access to this exchange, their investigation into Adebowale would have become a top priority," it stated. "It is difficult to speculate on the outcome but there is a significant possibility that MI5 would then have been able to prevent the attack."
The ISC does not identify Facebook as the host service in the edition of its report released to the public, but the BBC understands it does do so in the complete version given to the Prime Minister. In it, the committee states that the company's failure to notify the authorities about such conversations risked making it a "safe haven for terrorists to communicate within". It highlights that the UK's security agencies say they face "considerable difficulty" accessing content from Facebook and five other US tech firms: Apple, Google, Microsoft, Twitter and Yahoo. The companies in question have said in the past that they have a duty to protect their members' privacy. "If the government believes that it needs additional powers to be able to access communication data it must be clear about exactly what those powers are and consult widely on them before putting proposals before Parliament," said Antony Walker, deputy chief executive at TechUK, a lobbying body that works with Facebook.
The ISC's report identifies a "substantial" online exchange during December 2012 between Adebowale and a foreign-based extremist - referred to as Foxtrot - who had links to the Yemen-based terror group AQAP, but was not known to UK agencies at the time. Foxtrot is reported to have suggested several possible ways of killing a soldier, including the use of a knife. After the murder of Lee Rigby an unidentified third party provided a transcript of the conversation to GCHQ. The information was also said to have revealed that Facebook had disabled seven of Adebowale's accounts ahead of the killing, five of which had been flagged for links with terrorism. This had been the result of an automated process, according to GCHQ, and no person at the company ever manually reviewed the contents of the accounts or passed on the material for the authorities to check.
GCHQ notes that the account that contained the phrase "Let's kill a soldier" was not one of those closed by Facebook's software. The agency added that the social network had not provided a detailed explanation of how its safety system worked. The ISC said that among the information Facebook did disclose was the fact it enabled users to report "offensive or threatening content" and that it prioritised the "most serious reports". However, the committee reflected that such checks were unlikely to help uncover communications between terrorists. It acknowledged that in some other cases, Facebook had indeed passed on information to the authorities about accounts closed because of links to terrorism. However, it said the failure to do so after deactivating Adebowale's account had been a missed opportunity to prevent Lee Rigby's death.
"Companies should accept they have a responsibility to notify the relevant authorities when an automatic trigger indicating terrorism is activated and allow the authorities, whether US or UK, to take the next step," its report concluded. "We further note that several of the companies attributed the lack of monitoring to the need to protect their users' privacy. However, where there is a possibility that a terrorist atrocity is being planned, that argument should not be allowed to prevail." But one digital rights campaign group has taken issue with these recommendations. "The government should not use the appalling murder of Fusilier Rigby as an excuse to justify the further surveillance and monitoring of the entire UK population," said Jim Killock, executive director of the Open Rights Group.
"The committee is particularly misleading when it implies that US companies do not co-operate, and it is quite extraordinary to demand that companies pro-actively monitor email content for suspicious material. "Internet companies cannot and must not become an arm of the surveillance state."
© BBC News
Police have responded to the growing threat of cybercrime by setting up a new specialist unit.
22/11/2014- Cybercrime can include a whole range of illicit online activities from hacking, fraud and scamming to stalking, hate crime and even human trafficking. Hertfordshire Constabulary’s Cyber and Financial Investigation Unit will be focussing on serious and complex cyber-enabled crime, supporting colleagues dealing with cyber-related investigations in other units and investigating and preventing fraud. The new team is launching a section on the Herts Police website dedicated to dealing with cybercrime, at: www.herts.police.uk/advice/cybercrime.aspx, which contains information about current issues, emerging threats and advice on how to be safe online.
© Borehamwood & Elstree Times
An OSCE-supported conference on countering the use of the Internet for terrorist purposes took place today in Astana, Kazakhstan.
25/11/2014- The event was co-organized by the OSCE Centre in Astana, the Committee on Religious Affairs of Kazakhstan’s Culture and Sport Ministry and the Institute for Strategic Studies under the President. It brought together some 100 government officials, parliamentarians, information technology and information security specialists, academics, theologians and journalists, including international experts and scholars from Austria, Azerbaijan, Germany, Kazakhstan, Italy, Moldova, the Russian Federation, Turkey, the UAE, the UK, the US and Uzbekistan, as well as representatives from United Nations agencies and the CIS Antiterrorist Centre for Central Asia. Advisers from the Office of the OSCE Representative on Freedom of the Media and the OSCE Transnational Threat Department/Action against Terrorism Unit shared OSCE best practices.
The conference provided a platform to discuss issues related to terrorist organizations receiving support via Internet technology and assessed the merits of developing practical guidelines on preventing the use of the Internet for terrorist purposes, setting a legal framework and enhancing international co-operation to counter the dissemination of violent extremist ideology and illegal content on the Internet and social networks. “The success in our fight against terrorism mainly depends on the effectiveness of national policies, practices and flexibility in reacting to emerging challenges. By preventing cybercrime in its different manifestations we also avert serious terrorist actions and ensure security for the people and the nation as a whole”, said Ambassador Natalia Zarudna, Head of the OSCE Centre in Astana. “Since 2005, the OSCE has actively and consistently promoted and facilitated the elaboration and implementation of targeted measures in order to thwart the use of the Internet for terrorist purposes with a focus on respecting human rights and fundamental freedoms.”
Baglan Asaubayuly Mailybayev, Deputy Head of the Presidential Administration of the Republic of Kazakhstan, said: “By now we have learned that all countries need to co-operate closely to effectively counter terrorism. New efforts are necessary for conceptual and practical work at the international level in combating terrorism. Only by joining efforts, exchanging ideas, opinions and experience can we create a real barrier to propaganda of the cult of violence, terrorism and extremism.” The event is part of the OSCE’s comprehensive contribution to global efforts against terrorism.
© The OSCE
By Jeremy Malcolm, Senior Global Policy Analyst
25/11/2014- In politics, as with Internet memes, ideas don't spread because they are good—they spread because they are good at spreading. One of the most virulent ideas in Internet regulation in recent years has been the notion that if a social problem manifests on the Web, the best thing you can do to address it is to censor the Web. It's an attractive idea because if you don't think too hard, it appears to be a political no-brainer. It allows governments to avoid addressing the underlying social problem—a long and costly process—and instead simply pass the buck to Internet providers, who can quickly make whatever content has raised hackles “go away.” Problem solved! Except, of course, that it isn't. Amongst the difficult social problems that Web censorship is often expected to solve are terrorism, child abuse, and copyright and trade mark infringement. In recent weeks some further cases of this tactic being vainly employed against such problems have emerged from the United Kingdom, France and Australia.
UK Court Orders ISPs to Block Websites for Trade Mark Infringement
In a victory for luxury brands and a loss for Internet users, the British High Court last month ordered five of the country's largest ISPs to block websites selling counterfeit goods. Whilst alarming enough, this was merely a test case, leading the way for a reported 290,000 websites to be potentially targeted in future legal proceedings. Do we imagine for a moment that, out of a quarter-million websites, none of them are false positives that actually sell non-infringing products? (If websites blocked for copyright infringement or pornography are any example, we know the answer.) Do we consider it a wise investment to tie up the justice system in blocking websites that could very easily be moved to a different domain within minutes? The reason this ruling concerns us is not that we support counterfeiting of manufactured goods. It concerns us because it further normalizes the band-aid solution of content blocking, and deemphasises more permanent and effective solutions that would target those who actually produce the counterfeit or illegal products being promoted on the Web.
Britain and France Call on ISPs to Censor Extremist Content
Not content with enlisting major British ISPs as copyright and trade mark police, the UK government has also recently called upon them to block extremist content on the Web, and to provide a button that users can use to report supposed extremist material. Usual suspects Google, Facebook and Twitter have also been roped in by the government to carry out blocking of their own. Yet to date no details have been released about how these extrajudicial blocking procedures would work, or under what safeguards of transparency and accountability, if any, they would operate. This fixation on solving terrorism by blocking websites is not limited to the United Kingdom. Across the channel in France, a new “anti-terrorism” law that EFF reported on earlier was finally passed this month. The law allows websites to be blocked if they “condone terrorism.” “Terrorism” is as slippery a concept in France as anywhere else. Indeed France's broad definition of a terrorist act has drawn criticism from Human Rights Watch for its legal imprecision.
Australian Plans to Block Copyright Infringing Sites
Finally—though, sadly, probably not finally—reports last week suggest that Australia will be next to follow the example of the UK and Spain in blocking websites that host or link to allegedly copyright-infringing material, following on from a July discussion paper that mooted this as a possible measure to combat copyright infringement. How did this become the new normal? When did politicians around the world lose the will to tackle social problems head-on, and instead decide to sweep them under the rug by blocking evidence of them from the Web? It certainly isn't due to any evidence that these policies actually work. Anyone who wants to access blocked content can trivially do so, using software like Tor.
Rather, it seems to be that it's politically better for governments to be seen as doing something to address such problems, no matter how token and ineffectual, than to do nothing—and website blocking is the easiest “something” they can do. But not only is blocking not effective, it is actively harmful—both at its point of application due to the risk of over-blocking, but also for the Internet as a whole, in the legitimization that it offers to repressive regimes to censor and control content online. Like an overused Internet meme that deserves to fade away, so too it is time that courts and regulators moved on from website blocking as a cure for society's ills. If we wish to reduce political extremism, cut off the production of counterfeits, or prevent children from being abused, then we should be addressing those problems directly—rather than by merely covering up the evidence and pretending they have gone away.
© Electronic Frontier Foundation
26/11/2014- When the Supreme Court comes face to face Monday with a free speech case involving threats made on Facebook, Paulette Sullivan Moore and Francis Schmidt will have decidedly different reactions. Sullivan hears regularly from women who are harassed and threatened online. A licensed professional had to change her name and take a lower-paying job. An Arizona woman moved nine times in 18 months and changed jobs four times. An Illinois woman confronted Facebook images of herself, her house and children with the caption, "You think you can hide from me?" "What we know about abusers is that when they can't get physical access to the person they were abusing, they start using other methods," says Sullivan, vice president of public policy for the National Network to End Domestic Violence.
Schmidt was suspended from his job as an art and animation professor at a New Jersey college after posting on Google+ a photo of his 7-year-old daughter with a T-shirt that read, "I will take what is mine with fire and blood." The phrase, well-known to Game of Thrones fans, was interpreted by school officials as a threatened school shooting. "Our school is the laughingstock of academia because of this," Schmidt says. "If you look up my name on the Internet, I think the third hit is something about school shootings." Those are the two sides of the debate in the case of Anthony Elonis, whose threats were more intense than Schmidt's alleged threats, though perhaps no more intentional.
Upset at the breakup of his marriage, the 27-year-old Pennsylvania man repeatedly posted threatening remarks not only about his wife, but also about his former workplace, a kindergarten class, local police and FBI agents. Eventually, he was convicted on four federal counts of transmitting threats across state lines and sentenced to 44 months in prison. The question for the justices: Is it enough that Elonis' targets felt threatened, as two lower federal courts ruled? Or must a jury decide that he intended to instill fear or inflict physical harm? Elonis' attorneys say his dark posts — such as "I'm not gonna rest until your body is a mess" and "Hell hath no fury like a crazy man in a kindergarten class" — were a form of therapy, an imitation of rap lyrics and an expression of his First Amendment rights. On the Internet, they say, context is lost and words can be misinterpreted.
The federal government says the standard used by lower courts — that Elonis' words on Facebook could be viewed as threats by a reasonable person reading them — is sufficient, and his intent does not have to be proved. "Juries are fully capable of distinguishing between metaphorical expression of strong emotions and statements that have the clear sinister meaning of a threat," its brief says. Rap music has thrived under the "reasonable person" standard, it notes, without ensnaring popular rappers such as Eminem.
'It's Definitely Terrifying'
While context can be lost on the Internet, the government contends that what's important in Elonis' case is a different kind of context — what was going on in his life. His wife left with their two children. Despondent, Elonis' work suffered, and he lost his job at an Allentown, Pa., amusement park. Using a Facebook pen name, he lashed out at the employer, the ex-wife and many others — but with occasional references to his free speech rights. "It's illegal for me to say I want to kill my wife," he said in a typical post. "I'm not actually saying it. I'm just letting you know that it's illegal for me to say that." The same post included this addendum: "Art is about pushing limits. I'm willing to go to jail for my constitutional rights. Are you?"
Those in the business of helping victims of domestic violence and hate crimes aren't swayed by the arguments about artistic expression or free speech. Electronic communication provides ever more ways to threaten victims, they say, while other technological advances enable stalkers to track their targets' movements. "With new media communications, the message instantly finds its target, regardless of time, distance, or location," says a brief submitted by the Anti-Defamation League. "And with social media, such as Facebook, an individual can threaten a target privately, or in full view of his or her peers. In these ways, the Internet has lowered the barriers to issuing a true threat." In a survey of 759 victims' service agencies, the National Network to End Domestic Violence found that nearly 90% of them had cases of threats delivered via technology. Text messages were the most prevalent form, followed by social media and e-mail. Women between the ages of 18 and 24 were the most frequent targets.
"These threats are not artistic expression. They are not performance art or fantasy violence," says the brief submitted by the National Network to End Domestic Violence. "They are a key part of the in-person abuse to which the victims have been subjected, sometimes for years, and for which they have tried desperately to escape." Carissa Daniels, who goes by a pseudonym, can attest to that. The 58-year-old Washington state resident spent eight years in an abusive relationship and the next 16 "playing cat and mouse and hiding" because her ex-husband hasn't stopped harassing her online. "What happens on social media needs to be seriously looked at," she says. "It's a lot more psychologically damaging, and it's definitely terrifying."
Another victim, Tammy M., was married with four children when it was discovered that her husband had been secretly taking voyeuristic photos of the family and others. They split up, and after being turned in to police and charged with a misdemeanor, he set up a fake Facebook page using her name and pretended to be soliciting sex from strangers. "I've had them show up at the door. It was really scary," she says of her would-be suitors. "And I'm blind on top of it. It's hard to fight something that you can't see."
The flip side of that, others say, can be innocent people being penalized — perhaps even winding up in prison. In Texas, 19-year-old Justin Carter was thrown in jail for comments he made on Facebook while arguing with friends about an online video game. "I think I'ma shoot up a kindergarten and watch the blood of the innocent rain down," he wrote, later adding, "and eat the beating heart of one of them." He was jailed for several months on $500,000 bond and is awaiting trial. "Law enforcement is completely out of touch with the way our citizens are communicating with each other," Carter's attorney, Don Flanary, says. "They are operating based on fear and not on common sense."
In Kentucky, James Evans, 31, was arrested for posting lyrics to a song by the band Exodus about the 2007 Virginia Tech shootings that resulted in 33 deaths. Evans was charged with a felony that carries a five-year mandatory-minimum sentence. He spent eight days in jail before the charges were dropped. Even middle-aged college professors like Schmidt can run into technological trouble. For posting the Game of Thrones photo of his daughter, Schmidt was banished from campus, told to see a psychiatrist and forced to promise he would not wear clothes with "questionable statements." A brief submitted to the Supreme Court by the Student Press Law Center and other groups warns that under the standards used in Elonis' case, online speakers could face "life-ruining consequences." The result, they say, would "chill constitutionally protected speech."
© USA Today
By James Bright
25/11/2014- Controversial cases have always yielded controversial verdicts. In recent years the highly publicized trials of Casey Anthony, George Zimmerman and now Darren Wilson have unleashed a bevy of "legal scholars" on the world of social media. We as a society have developed a narcissism that coincides with tweeting and status updating. We love to pretend that we actually know what we are talking about. I'm guilty of it too - I mean I do use this space every week to prattle on and on about topics I deem of interest, after all. Under this narcissism there are other truths that bubble to the surface in the wake of criminal proceedings. What I learned last night is that we are not nearly as evolved racially as we like to think we are. A very obvious divide still exists. Tweets utilizing racially insensitive vernacular for Caucasian and African Americans filled Twitter.
There's no denying such racism has become taboo in the world, but apparently only when it's offline. Online, many Americans regress into a society of fear-filled bigots ready to point the finger of blame at anyone of a different color. The media makes exhaustive efforts to be politically correct in an attempt to showcase our evolved sensibilities. All this does is sweep reality under the rug. Cyberspace has shown who many people truly are. These people may not use such vile terms in public, but social media has created an unrealistic sense of security for many. There are tweeters, bloggers and posters who feel impermeable on the web, and they use this avenue to vent and create animosity amongst people who feel equally impermeable.
© Chickasha News
24/11/2014- An anti-gay hunting game which made it onto the Google Play store has been removed, but not before it was downloaded tens of thousands of times. Called 'Ass Hunter', the sick game asks players to "kill gays as much as you can or escape between them to the next level". The game was noticed by a reader of Gay Star News who then spoke to the paper after complaining to Google. According to the Mirror, the game received a wave of negative reviews before being removed. Chad Hollinghead said: "This is sickening. I have zero tolerance for hate. We the people should always promote tolerance, love and understanding. "Vicious games, exterminating of minorities should be banned." The fact the app actually appeared on Google's app store has led to concerns that the company may need to reassess the way it processes app submissions, possibly leading to stricter checks. This isn't the first time an Android game has hit the headlines over its content, with the 'Bomb Gaza' app causing similar outrage at being allowed through the net onto Google's app store.
© The Huffington Post - UK
By Imran Awan, Senior Lecturer and Deputy Director of the Centre for Applied Criminology at Birmingham City University
21/11/2014- In late 2013 I was invited to present evidence, as part of my submission regarding online anti-Muslim hate, at the House of Commons. I attempted to show how hate groups on the internet were using this space to intimidate, cause fear and make direct threats against Muslim communities – particularly after the murder of Drummer Lee Rigby in Woolwich last year. The majority of incidents of Muslim hate crime (74%) reported to the organisation Tell MAMA (Measuring Anti-Muslim Attacks) are online. In London alone, hate crimes against Muslims rose by 65% over the past 12 months, according to the Metropolitan Police and anti-Islam hate crimes have also increased from 344 to 570 in the past year. Before the Woolwich incident there was an average of 28 anti-Muslim hate crimes per month (in April 2013, there were 22 anti-Muslim hate crimes in London alone) but in May, when Rigby was murdered, that number soared to 109. Between May 2013 and February 2014, there were 734 reported cases of anti-Islamic abuse – and of these, 599 were incidents of online abuse and threats, while the others were “offline” attacks such as violence, threats and assaults.
A breakdown of the statistics shows these tend to be mainly from male perpetrators and are marginally more likely to be directed at women. After I made my presentation I, too, became a target in numerous online forums and anti-Muslim hate blogs which attempted to demonise what I had to say and, in some cases, threaten me with violence. Most of those forums were taken down as soon as I reported them.
It’s become easy to indulge in racist hate-crimes online and many people take advantage of the anonymity to do so. I examined anti-Muslim hate on social media sites such as Twitter and found that the demonisation and dehumanisation of Muslim communities is becoming increasingly commonplace. My study involved the use of three separate hashtags, namely #Muslim, #Islam and #Woolwich – which allowed me to examine how Muslims were being viewed before and after Woolwich. The most common reappearing words were: “Muslim pigs” (in 9% of posts), “Muzrats” (14%), “Muslim Paedos” (30%), “Muslim terrorists” (22%), “Muslim scum” (15%) and “Pisslam” (10%). These messages are then taken up by virtual communities who are quick to amplify their actions by creating webpages, blogs and forums of hate. Online anti-Muslim hate therefore intensifies, as has been shown after the Rotherham abuse scandal in the UK, the beheading of journalists James Foley, Steven Sotloff and the humanitarian workers David Haines and Alan Henning by the Islamic State and the Woolwich attacks in 2013.
The organisation Faith Matters has also conducted research, following the Rotherham abuse scandal, analysing Facebook conversations from Britain First posts on August 26 2014 using the Facebook Graph API. They found some common reappearing words which included: Scum (207 times); Asian (97); deport (48); Paki (58); gangs (27) and paedo/pedo (25). A number of the comments and posts were from people with direct links to organisations such as Britain First, the English Brotherhood and the English Defence League.
Abuse is not a human right
Clearly, hate on the internet can have direct and indirect effects on the victims and communities being targeted. On the one hand, it can be used to harass and intimidate victims; on the other, it can be used for opportunistic crimes. Few of us will forget the moment when Salma Yaqoob appeared on BBC Question Time and tweeted the following comments to her followers: “Apart from this threat to cut my throat by #EDL supporter (!) overwhelmed by warm response to what I said on #bbcqt.” The internet is a powerful tool by which people can be influenced to act in a certain way and manner. This is particularly strong when considering hate speech that aims to threaten and incite violence. This also links into the convergence of emotional distress caused by hate online, the nature of intimidation and harassment and the prejudice that seeks to defame groups through speech intending to injure and intimidate. Sites that have been relatively successful here include BareNakedIslam and IslamExposed, which host daily forums and chatrooms about issues to do with Muslims and Islam in a strongly anti-Muslim tone; discussion begins with a particular issue – such as banning Halal meat – and then turns into strong and provocative language.
Most of this anti-Muslim hate speech hides behind a fake banner of English patriotism, but is instead used to demonise and dehumanise Muslim communities. It goes without saying that the internet is just a digital realisation of the world itself – all shades of opinion are represented, including those Muslims whose hatred of the West prompts them to preach jihad and contempt for “dirty kuffar”. Clearly, freedom of speech is a fundamental right that everyone should enjoy, but when that converges with incitement, harassment, threats of violence and cyber-bullying then we as a society must act before it’s too late. There is an urgent need to provide advice for those who are suffering online abuse. It is also important to keep monitoring sites where this sort of thing regularly crops up; this can help inform not only policy but also help us get a better understanding of the relationships forming online. This would require a detailed examination of the various websites, blogs and social networking sites, monitoring the URLs of those sites regarded as having links to anti-Muslim hate.
It is also important that we begin a process of consultation with victims of online anti-Muslim abuse – and reformed offenders – who could work together to highlight the issues they think are important when examining online Islamophobia. The internet offers an easy and accessible way of reporting online abuse, but an often difficult relationship between the police and Muslim communities in some areas means much more could be done. Addressing that could have a positive impact on the overall reporting of online abuse, and the improved rate of prosecutions that might follow could also help identify the issues around online anti-Muslim abuse.
© The Conversation
An Eastwood man who sent death threats to a former friend and harassed a police officer has been found guilty of five charges relating to malicious communications.
20/11/2014- Simon Tomlin, 46, of Lawrence Avenue, was convicted at Nottingham Magistrates Court today (Thursday) in his absence, after failing to attend a two-day trial. Tomlin was found guilty of criminal harassment of former friend Melony McElroy and PC Richard Reynolds, of sending a series of tweets containing grossly offensive material which referenced Ms McElroy on October 9, 2014, and of repeatedly referring to her as a ‘neo-Nazi’ on his blog, The Daily Agenda. When explaining his decisions, the magistrate said that despite Tomlin’s denial of harassment in police interviews, it was clear he deliberately caused alarm and distress and was aware that it would constitute harassment. He said of Melony McElroy: “He caused fear that she was at risk of murderous reprisals and sent matter that was grossly offensive and menacing. It is clear from Ms McElroy’s evidence that he caused her fear and distress and I am quite sure that is what the defendant intended.” Tomlin was also convicted of sending by public communications network pictures of police officers’ private vehicles on October 5, 2014, which the magistrate described as ‘hate material’ against the police. He added: “In view of the number of followers and the nature of the website that the Facebook page was associated with, the officers whose cars were identified had every reason to fear damaging consequences.” A warrant for Tomlin’s arrest was issued and he will be sentenced at a later date.
© The Eastwood & Kimberley Advertiser
21/11/2014- A Virginia man arrested during a 2012 raid on a central Florida white supremacist compound has been sentenced to 17½ years in prison for threatening Florida officials. The U.S. Attorney's Office reports that a federal judge in Orlando sentenced 38-year-old William White on Friday. He was convicted in September of sending interstate threats with the intent to extort and using personal information without lawful authority in furtherance of a crime of violence. The new sentence will run consecutively with a nearly eight-year sentence he's already serving for a separate federal case out of Virginia. Authorities say the self-professed neo-Nazi sent a number of email threats to former State Attorney Lawson Lamar, Circuit Judge Walter Komanski and an FBI task force agent in May 2012. The emails included threats to recipients' family members, including children and grandchildren.
© The Associated Press
A mountain couple makes an unsettling discovery on Google Maps.
20/11/2014- Jennifer Mann and Jodi McDaniel say they've never had any problems living at their home in Canton. But, on Google Maps, instead of a street name, their driveway was labeled with a gay slur. Mann and McDaniel say the gay slur was hurtful and amounts to a hate crime. They’d like to find out who did it and take legal action. “And if I can I'm going to get legal advice about it,” Mann said. “I really don't know what to say to him other than grow up,” McDaniel said. “I have no problems with them....none. They're good neighbors,” Fay Capps said. There's an option on Google Maps to report problems like inappropriate content. Google Maps says its policy considers discrimination based on sexual orientation as a hate crime. As a result of News 13’s attention, Google removed the slur. A spokeswoman says there is a mapmaker tool where people can edit maps. She says they don't know who did it or when. But she says the gay slur slipped through their check systems, perhaps because it was such a small road-driveway. She says they'll continue investigating. McDaniel has a message for whoever did it. “Live your life and leave us alone you know. We don't bother anybody.”
A compelling argument for strong-arm tactics against those who perpetrate abuse on the net.
By Helen Fenwick
20/11/2014- This book sets forth a compelling argument that the internet should not be allowed to maintain its “Wild West” anarchic status, because its ability to facilitate cyber-bullying outweighs the virtues of maintaining that status. It argues that the virtues of the web – in particular, anonymity, which fosters truth-telling and self-expression – also translate into vices: people become de-individuated in anonymous postings, and the lack of identification fosters the refusal to conform to social norms. The result is online harassment and bullying that can take extreme forms.
Hate Crimes in Cyberspace’s main strength lies in its sustained and detailed exploration of the bizarrely convoluted, sustained and extremely hurtful nature of online abuse of individuals. Danielle Keats Citron, a legal scholar, pertinently compares the social response to online bullying (which informs the legal one) to the response to domestic violence and workplace sexual harassment in the 1970s. At that time it was thought that both could be relegated to the sphere of the private choices of women – that the responsibility lay with the woman to deal with the problem, by growing a thicker skin or by simply packing her bags and leaving. Feminist campaigns from the 1970s onwards changed that perception and triggered legal change. Citron argues that the tendency to trivialise online abuse (as frat-boy banter) and to blame the victim for failing to shrug it off is highly prevalent and is retarding the development of stronger laws and law enforcement. She makes her case successfully for changing social perceptions and creating a far more effective legal response, particularly by utilising civil rights laws.
Nevertheless, her book is somewhat selective in its approach. Its very broad title is misleading – it might easily have been titled Cyber-Based Sexual Harassment and Proposals for US Legal Reform. Clearly, that title would have been less snappy and less appealing. But it would have been more accurate. The book focuses very strongly on the harassment and denigration of women via online abuse, and this is the right approach to take, rather than focusing on the harassment of white heterosexual males, who suffer significantly less online abuse. Its pioneering research could and should be used to support the case for introducing a criminal offence of gender-based hate speech in various countries, including the UK.
However, the book only touches on abuse suffered by lesbian, gay, bisexual and transgender persons and on racial grounds, largely disregards the abuse of persons due to other characteristics, and also largely disregards group-based online hate crimes (or hate speech as hate crime). So, for example, it does not discuss Salafi/Wahhabi or Christian fundamentalist online hate speech that is aimed at gay people, which can clearly have an impact on individuals. Many such groups – as has been brutally illustrated in recent months by the actions of Islamic State – understand the impact and utility of social media all too well.
Citron’s proposals for law reform are practical but also selective. They are very US-centric – which is understandable up to a point, but also ironic, given the book’s message about the nature of cyberspace and the difficulties of prosecuting in a borderless space. International initiatives aimed at cyber-bullying could have been considered, as could examples from other countries, since for obvious reasons this is an international problem. A strong, compelling, readable exploration of this problem is proffered here, but the call for action that it represents requires a wider focus.
Hate Crimes in Cyberspace
By Danielle Keats Citron
Harvard University Press, 352pp, £22.95
Published 20 October 2014
© Times Higher Education
The report published today looks at the need for regulation specific to online harassment.
19/11/2014- The Law Reform Commission has today published a paper that aims to tackle the issue of online harassment by “trolls”. In the paper, a number of issues relating to online bullying and anonymous posting are raised – and the adequacy of the current legislation is questioned. At the moment, the law requires sustained harassment for an offence to be committed online, while one-off incidents are not treated in the same way. The paper questions whether the current legislation is sufficient to deal with the new challenges posed by online abuse, particularly in relation to hate speech.
It asks whether there should be new legislation for instances where:
- There is a serious interference with privacy.
- Content that goes online has the potential to cause serious harm due to its international reach and permanence.
- There is no sufficient public interest in publication online.
- The accused intentionally or recklessly caused harm.
As the law stands
At the moment the law deals with online offences through legislation designed for general circumstances. Online bullying is considered under the Non-Fatal Offences Against the Person Act 1997, which makes harassment – also commonly referred to as ‘stalking’ – an offence. The issues paper suggests that the Prohibition of Incitement to Hatred Act 1989 could be updated in line with suggestions from the EU Commission. Such a change would bring Ireland into line with the 2008 EU Framework Decision on combating xenophobia and racism.
The ‘Issues Paper on Cyber-crime Affecting Personal Safety, Privacy and Reputation, including Cyber-bullying’ is being published as part of the Commission’s Fourth Programme of Law Reform. It also considers how civil law remedies problems arising from websites located outside the State.
Speaking on Newstalk’s Pat Kenny show, Raymond Byrne, the Director of Research at the Law Reform Commission, said: “We happen to have a lot of the big social media companies here in Dublin. We have the opportunity to do something here that is a good guide for other countries as well.” Byrne went on to point out that penalties for offences in Ireland were more severe than elsewhere. “The harassment offence carries up to seven years’ imprisonment, so that is pretty tough in terms of sentencing. Most of the sentences we’ve had here in terms of malicious telephone calls are already higher than the comparison in England, where the maximum is six months at the moment. They are putting that up to two years. We are already way beyond that here in Ireland,” said Byrne.
© The Journal Ireland
Victims and witnesses of racism will, as from today, be able to report abuse through a website created to address its low reporting rate and offer support.
16/11/2014- The site – reportracism-malta.org – is intended to increase the reporting of such incidents, inform individuals about the remedies available and support them through the process. It was launched by human rights think tank The People for Change Foundation and also aims to gather data to understand the reality of racism in Malta and provide evidence to inform legal and policy development in the area. Anyone who witnesses or experiences racism can fill in an online form – available in Maltese, English and French – asking questions such as where and when the incident occurred, what it consisted of and whether a police report was filed. People can also send in evidence, such as photos or footage, to back up their claims. If the person filing the report agrees to be contacted, the foundation will offer its support. This will include information, as well as help with filing official reports and following them up.
85% - the percentage of racism victims who keep quiet
“The need for such a system is clear from the high levels of incidence and low levels of reporting of racist incidents,” the foundation said in a statement. Maltese authorities receive very low numbers of racism reports. A National Commission for the Promotion of Equality report showed that 85 per cent of victims of racism keep quiet. In contrast, a report published by the European Union Agency for Fundamental Rights found that 63 per cent of Africans in Malta experienced high levels of discrimination, the second highest incidence in the EU. In addition, 29 per cent fell victim to racially motivated crime. Taken together, these figures highlight a gap between reports and incidents. This could be due to the lack of access to information and a reporting system, the foundation said, as it pointed out that a Fundamental Rights Agency report found that only 11 per cent of African immigrants in Malta knew of the existence of the National Commission for the Promotion of Equality.
“We hope that this website will promote a culture of reporting racist incidents, while developing a better understanding of the state of play of racism in Malta through the compilation of information about such incidents,” the foundation said.
© The Times of Malta
The summer war between Israel and Hamas generated an explosion of online anti-Semitic hate speech in several European countries, an international watchdog reported.
14/11/2014- The assertion came in a report on 10 European countries released Wednesday by the International Network Against Cyber Hate and the Paris-based International League Against Racism and Anti-Semitism — or INACH and LICRA respectively. In the Netherlands, the Complaints Bureau Discrimination Internet, or MDI, recorded more instances of online hate speech against Jews during the two-month conflict than during the entire six months that preceded it, revealed the report, which the groups presented in Berlin at a meeting on anti-Semitism organized by the Organization for Security and Co-operation in Europe, or OSCE.
More than half of the 143 expressions of anti-Semitism documented by MDI in July and August, when Israel was fighting Hamas in Gaza, contained incitements to violence against Jews, the report stated. Roughly three quarters of the complaints documented in that period occurred on social media. In Britain, the Community Security Trust recorded 140 anti-Semitic incidents on social media from January to August, with more than half occurring in July alone. And in Austria, the Forum against Antisemitism recorded 59 anti-Semitic incidents online during the conflagration of violence between Israel and Palestinians — of which 21 included incitements to violence — compared to only 14 incidents in the six months that preceded it.
The data on online anti-Semitic incidents corresponded with an increase in real-life assaults, LICRA and INACH wrote. The report’s recommendations included a submission by the Belgian League Against Anti-Semitism, which called for OSCE member states to adopt the “Working Definition of Anti-Semitism” that the European Union’s agency for combating xenophobia enacted in 2005 but later dropped. The definition includes references to the demonization of Israel.
© JTA News.
14/11/2014- Is there such a thing as a Facebook murder? Is it different from any other murder? Legally, it can be. From a common-sense point of view, there is no 'hate crime' status that should make a murder worse if a white person kills a Latino person or a Catholic instead of a white person or a Protestant, but legally such crimes can be considered more heinous and given the special label of hate crime. But social media is ubiquitous, and criminal justice academics are always on the prowl for new categories to create and write about, so a 'Facebook Murder', representing crimes that may somehow involve social networking sites and thus form a distinct category for sentencing, has been postulated.
Common sense should prevail, says Dr. Elizabeth Yardley, co-author of a paper on the subject in the Howard Journal of Criminal Justice. Yes, perpetrators had used social networking sites in the homicides they had committed but the cases in which those were identified were not collectively unique or unusual when compared with general trends and characteristics - certainly not to a degree that would necessitate the introduction of a new category of homicide or a broad label like 'Facebook Murder'.
"Victims knew their killers in most cases, and the crimes echoed what we already know about this type of crime," said Yardley. "Social networking sites like Facebook have become part and parcel of our everyday lives and it's important to stress that there is nothing inherently bad about them. Facebook is no more to blame for these homicides than a knife is to blame for a stabbing--it's the intentions of the people using these tools that we need to focus upon." So banning guns or Facebook would not prevent murders any more than banning spoons would prevent people from getting fat. The justice system will be happy not to have another set of arcane guidelines to follow.
By Monica Dux
14/11/2014- Everyone has a right to be a bigot, or so Oberleutnant Brandis insists. But does that mean we're also obliged to put up with bigots on Facebook? We hear a lot about trolls and online bullying, but what if the problem is not an anonymous hater but someone you know? Perhaps even a member of your own family? My friend Claudia recently wrestled with this question after she reconnected with a distant cousin via Facebook. Friendly messages were exchanged - reminiscences about eccentric relatives and long-ago family Christmases. There was even reckless talk, as there so often is on Facebook, of meeting up in person. Then the racist posts started appearing in Claudia's feed: rants about refugees rorting the welfare system, people who come to this country but don't bother to learn English, and burqa-wearing housewives plotting to take over Parliament.
Feeling that she could not let this pass unchallenged, Claudia commented on one of the posts, calling it out as offensive rubbish. In a sense this had the desired effect in that the racist posts stopped appearing in her feed. But they had not disappeared because Claudia's cousin had seen the error of her ways. Claudia had simply been unfriended. One of the things I like about Facebook, when I like it at all, is its plurality. In the real world most of us socialise with a relatively small cohort of like-minded people. By contrast, on Facebook, we typically rub digital shoulders with a far more diverse collection of "friends", from life-long pals to some guy you met briefly at a party and have never seen again, although you are regularly updated on what he's having for breakfast.
With such a varied collection of people, your Facebook feed will inevitably contain many posts that you don't agree with. When this happens, you might choose to engage in friendly online debate, or you can just let it pass, huffing and puffing in the silence of the real world. But things get trickier when the opinions being expressed don't just offend your sensibilities or your political leanings but challenge your concept of basic human decency. If we choose to ignore repugnant, racist views, don't we become complicit? We're told that the only thing necessary for evil to thrive is for good people to remain silent. But if we are morally obliged to speak up in the face of bigotry, are we not under an equal obligation to post?
After all, challenging racism is far easier on Facebook than in the real world. When you're at your family Christmas and Uncle Bob starts to sound off about how Australia ought to be reserved for Australians, calling him out as a disgraceful racist will probably mean that Christmas is ruined, everyone goes home angry and you'll all have to drink even more next year to get through the ordeal. On the other hand, at least Bob's racism will have been publicly debunked. Or will it? Perhaps the real reason so many of us hesitate to slam the Uncle Bobs of this world is not a cowardly desire to avoid conflict but an understanding that doing so will achieve nothing, aside from making you feel good about your own moral righteousness. For whatever you might say, Bob's mind will probably not be changed.
Obviously it is important to speak up to institutional racism, such as that evidenced in our government's draconian treatment of asylum seekers. Similarly, calling out and critiquing the drivel expressed by people with an influential public voice, such as shock jocks, is vital. But what about the unanalysed racism of people like Claudia's cousin, which is so often born of ignorance and disempowerment? People with little education, or radically different life experiences, who have been encouraged by a dog-whistling government to focus their fears and frustrations on vulnerable groups within our society? This kind of bigotry has many and varied roots, and it'll take a lot more than a withering comment on their Facebook page to dig them out.
The social media is often criticised for creating a false sense of intimacy, while actually distancing people from genuine, meaningful interaction. But perhaps this distance is sometimes a positive. Because stepping back and being a mere bystander, a witness, can provide you with a valuable opportunity to see how others think, acting as a reminder that the world is filled with people who hold views radically different from your own. And that tackling those ideas will require far more than simply clicking the unfriend button.
© The Sydney Morning Herald
Law enforcement professionals throughout the US are increasingly leveraging social media to assist in crime prevention and investigative activities, according to a new study released by LexisNexis Risk Solutions.
13/11/2014- The LexisNexis 2014 Social Media Use in Law Enforcement report solicited feedback from 496 participants at every level of law enforcement—from rural localities to major metropolitan cities and federal agencies—to examine the law enforcement community’s proclivity to use social media for crime investigation and prevention. The study, a follow-up to an initial study conducted in 2012, found that eight out of 10 law enforcement professionals are actively using social media for investigations, with 25 percent using social media on a daily basis. “The benefits of social media from an information-gathering and community outreach perspective became very evident during the subsequent investigations of the Boston Marathon bombings and the Washington Navy Yard tragedy,” said Rick Graham, Law Enforcement Specialist, LexisNexis Risk Solutions and former Chief of Detectives for the Jacksonville (Fla.) Sheriff’s Office. “It is imperative that agencies invest in formal social media investigative tools, provide formal training, develop or amend current policies to ensure investigators and analysts are fully armed to more effectively take advantage of the power social media provides.”
Use of social media by law enforcement grew in 2014 and the upward trend is likely to continue. Over three-quarters of respondents indicate plans to use social media even more in the next year. Moreover, the value of social media in helping solve crimes more quickly and assisting in crime anticipation is increasing. 67 percent of respondents believe that social media is a valuable tool for anticipating a crime. Law enforcement officials cited a number of real-world examples in which social media helped thwart impending crime, from stopping an active shooter to tracking gang behavior. Although social media use among law enforcement personnel is high and is likely to continue to grow, few agencies have adopted formal training in the use of social media to boost law enforcement efforts. In fact, there has been a decrease in formal training since the 2012 study, with most law enforcement personnel indicating that they are self-taught. “Lack of access to social media channels is the single biggest driver for non-use and has increased from 2012. Whereas, lack of knowledge has decreased significantly as a reason for not using social media,” states the study.
Fortunately, although agency support of social media training for law enforcement officials remains low, three quarters of law enforcement professionals are very comfortable using social media, showing a seven percent increase over 2012 despite a decrease in availability of formal training. As law enforcement personnel become more comfortable and familiar with social media tools, they are increasingly discovering new and effective ways to utilize it in criminal investigations. For instance, one law enforcement respondent used Facebook to discover criminal activity and obtain probable cause for a search warrant. “I authored a search warrant on multiple juveniles’ Facebook accounts and located evidence showing them in the location in commission of a hate crime burglary. Facebook photos showed the suspects inside the residence committing the crime. It led to a total of six suspects arrested for multiple felonies along with four outstanding burglaries and six unreported burglaries,” said one respondent. Another law enforcement official achieved success in using social media to identify networks of criminals, by using Facebook to identify suspects that were friends or associates of other suspects in a crime. “My biggest use for social media has been to locate and identify criminals,” the respondent stated. “I have started to utilize it to piece together local drug networks.”
Law enforcement officials have also used social media to collect evidence, identify witnesses, conduct geographic searches, identify criminals and their locations, and raise public safety awareness by posting public service announcements and crime warnings to Facebook. “As personnel become even more familiar and comfortable using it, they will continue to find robust and comprehensive ways to incorporate emerging social media platforms into their daily routines, thus yielding additional success in interrupting criminal activity, closing cases and ultimately solving crimes,” the report concluded.
Editor's note: Also read the report, The Rise of Predictive Policing Using Social Media, in the Oct./Nov. issue of Homeland Security Today.
© Homeland Security Today
A copyright claim on the "Innocence of Muslims" will be reviewed by the full 9th Circuit Court of Appeals.
12/11/2014- A federal Appeals Court on Wednesday agreed to reconsider its decision to order Google to take down an anti-Islam propaganda film that was linked to the 2012 Benghazi attack. Earlier this year, a three-judge panel sided with Cindy Lee Garcia, who sued Google for infringing on her copyright by hosting the video—titled Innocence of Muslims—on YouTube. The actress argued that she was fooled into appearing in the video after following up on an ad posting purporting to be for another movie. The video was taken down following the decision. Now, the full U.S. Court of Appeals for the 9th Circuit will review that decision, and the three-judge panel's ruling will not hold precedent in the full Court's review. Garcia originally had her case dismissed by a trial judge.
The case presents a thicket of thorny issues, including a debate over the balance between copyright protections and free speech in the Internet age. Open-Internet activists and several tech companies argued that the February ruling facilitates overly burdensome copyright limits. Facebook, Twitter, Yahoo, eBay, and Netflix have all supported Google's position. "This is very welcome decision," said Corynne McSherry, intellectual-property director at the Electronic Frontier Foundation. "The court's ruling was mistaken as a matter of law and a terrible precedent for online free speech. What happened to Cindy Garcia was truly shameful, but the 9th Circuit took a bad situation and made it worse." And the tensions over the case are ratcheted up by the video's controversial nature—as well as its connection to the September 2012 attack on the U.S. consulate in Benghazi.
According to an extensive New York Times piece published last December, the video partially contributed to the violence, in which four Americans were killed. "Contrary to claims by some members of Congress, [the violence] was fueled in large part by anger at an American-made video denigrating Islam," according to The Times. The role of the video is hotly debated, and many conservatives accuse the Obama administration of overstating its impact to deflect attention from a terrorist attack in the run-up to the 2012 presidential election. Earlier this year, a second actor in the film, Gaylor Flynn, filed a separate lawsuit also arguing that Google had reproduced his performance without consent.
© The National Journal
Joanne St. Lewis case is just one that shows how internet easily spreads racist message.
12/11/2014- When Joanne St. Lewis wrote a critical evaluation of a student racism project, she could not have known the grief it would cause. And certainly not the years it would take to finally erase the racial slur that accompanied her name in every online search. It began six years ago, and continues today in spite of an Ontario Superior Court decision in June. The decision found an Ottawa blogger had defamed St. Lewis by attaching to her name a racial epithet meaning "sell out", stemming from the black slave experience. St. Lewis, a University of Ottawa law professor, has taken steps most would find daunting: going to court, winning a decision and now fighting an appeal. "It's extremely expensive. It’s difficult. It’s imperfect. It’s painful. And it may not always even remotely be an opportunity or a remedy for someone," she said. But for St. Lewis, standing up against the slur, written in a blog and repeated by others, was a matter of duty and dignity. "If it is my fate to be the first black Canadian so publicly defiled, then it is my hope to be the last. It was essential that no other suffer as I have," she wrote after a jury found the words used against her were defamatory. In accordance with the court’s decision, the blog post has been removed from the internet, but the term can still be found in Google searches of her name. St. Lewis was also awarded $350,000 in damages. "I think there’s a recklessness, a casual cruelty, a complete indifference and egotism that the internet permits," she said in an interview with CBC News. "What it seems to do is allow people to be bullies and behave like feral pack animals on the internet to target activists."
Researcher tries to quantify online racism
There is little research to quantify the extent of online racism in Canada. Irfan Chaudhry is trying to change that. A PhD candidate at the University of Alberta, Chaudhry is tracking Twitter for terms that would be considered racist and offensive. With Twitter, Chaudhry is able to look at racist terms and references, and which cities they originate from. Specifically, he looked at Edmonton, Winnipeg, Calgary, Vancouver, Montreal and Toronto. He chose those cities because in 2010, they reported some of the highest rates of hate crime in the country. His three-month study found about 750 instances of what he considered overt racism. "People were tweeting about things that you’d probably want to have left in your mind," he explains. He cites examples such as people boarding a bus or plane and tweeting: "About to board, stuck beside a --- and a --- #thanks." Other cases were far more direct. "It was someone saying ‘I hate’ and then insert racialized group here." He found those sorts of statements were more likely directed at aboriginal populations in Winnipeg and Edmonton, while in Toronto and Montreal, racist comments were largely aimed at people of colour. "When you break down the amount of tweets... it kind of reflected different demographic patterns," he notes.
In Thompson, Manitoba, a community with a large aboriginal population, a local newspaper was forced to shut down its Facebook page in response to a large number of racist comments. Lynn Taylor, general manager of the Thompson Citizen, said racist sentiments have long simmered in the community, but recently surfaced online. The tipping point came when someone posted a photoshopped picture showing the front of the newspaper’s building with racist comments painted over it. She hopes to reopen the site next year, with better monitoring of comments before they are posted. Other media outlets, including the CBC, closely monitor or disable comments to minimize the risk of racist material being posted. St. Lewis said part of the problem is the medium itself. "It allows people to behave in a way that if they did it in the bricks and mortar universe amongst flesh and blood people, we know it’s not acceptable. We know there’s legal consequence. But somehow, that piece of being virtual, that piece of being on the internet seems to give this incredible permission," she said.
© CBC News
A British lawmaker complained of abuse. Suddenly, the abuse stopped.
12/11/2014- Luciana Berger, a member of British Parliament, has been receiving a stream of anti-Semitic abuse on Twitter. It only escalated after a man was jailed for tweeting her a picture with a Star of David superimposed on her forehead and the text "Hitler was Right." But over the last few weeks, the abuse began to disappear. Her harassers hadn’t gone away, and Twitter wasn't removing abusive tweets after the fact, as it sometimes does, or suspending accounts as reports came in. Instead, the abuse was being blocked by what seems to be an entirely new anti-abuse filter.
For a while, at least, Berger didn’t receive any tweets containing anti-Semitic slurs, including relatively innocuous words like "rat." If an account attempted to @-mention her in a tweet containing certain slurs, it would receive an error message, and the tweet would not be allowed to send. Frustrated by their inability to tweet at Berger, the harassers began to find novel ways to defeat the filter, like using dashes between the letters of slurs, or pictures to evade the text filters. One white supremacist site documented various ways to evade Twitter’s censorship, urging others to "keep this rolling, no matter what."
In recent months, Twitter has come under fire for the proliferation of harassment on its platform—in particular, gendered harassment. (According to the Pew Center, women online are more at risk from extreme forms of harassment like "physical threats, stalking, and sexual abuse.") Twitter first implemented the ability to report abuse in 2013, in response to the flood of harassment received by feminist activist Caroline Criado-Perez. The recent surge in harassment has again resulted in calls for Twitter to "fix" its harassment problem, whether by reducing anonymity, or by creating better blocking tools that could mass-block harassing accounts or pre-emptively block recently created accounts that tweet at you. (The Blockbot, Block Together, and GG Autoblocker are all instances of third party attempts to achieve the latter.) Last week, the nonprofit Women, Action, & the Media announced a partnership with Twitter to specifically track and address gendered harassment.
While some may welcome the mechanism deployed against Berger’s trolls as a step in the right direction, the move is troubling to free speech advocates. Many of the proposals to deal with online abuse clash with Twitter’s once-vaunted stance as "the free speech wing of the free speech party," but this particular instance seems less like an attempt to navigate between free speech and user safety, and more like a case of exceptionalism for a politician whose abuse has made headlines in the United Kingdom. The filter, which Twitter has not discussed publicly, does not appear to be intended as a universal fix for the harassment experienced by less prominent users on the platform, such as the women targeted by Gamergate. Prior to the filter being activated, Luciana Berger and her fellow MP, John Mann, had announced plans to visit Twitter’s European headquarters to talk to higher-ups about the abuse. Parliament is currently discussing more punitive laws against online trolling, including a demand from Mann for a way to ban miscreants from "specific parts of social media or, if necessary, to the Internet as a whole."
In a letter to Berger that is quoted in part here, Twitter’s head of global safety outreach framed efforts over the past year as including architectural solutions to harassment. "Our strategy has been to create multiple layers of defense, involving both technical infrastructure and human review, because abusive users often are highly motivated and creative about subverting anti-abuse mechanisms." The letter goes on to describe known mechanisms, like the use of "signals and reports from Twitter users to prioritize the review of abusive content," and hitherto unknown mechanisms like "mandatory phone number verification for accounts that indicate engagement in abusive activity." However, the letter says nothing about a selective filter for specific words. To achieve that result, the company appears to have used an entirely new tool outside of its usual arsenal. A source familiar with the incident told us, "Things were used that were definitely abnormal."
A former engineer at Twitter, speaking on the condition of anonymity, agreed, saying, "There’s no system expressly designed to censor communication between individuals. … It’s not normal, what they’re doing." He and another former Twitter employee speculated that the censorship might have been repurposed from anti-spam tools—in particular, BotMaker, which is described here in an engineering blog post by Twitter. BotMaker can, according to Twitter "deny any Tweets" that match certain conditions. A tweet that runs afoul of BotMaker will simply be prevented from being sent out—an error message will pop up instead. The system is, according to a source, "really open-ended" and is frequently edited by contractors under wide-ranging conditions in order to effectively fight spam.
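Twitter has not published BotMaker's rule format, so the following is only a hedged sketch of what a send-time "deny" rule of the kind described might look like: reject a tweet before it is sent if it mentions a protected account and contains a blacklisted term, with a crude attempt to collapse the dash-between-letters evasion the harassers used. All names, terms, and the rule structure here are hypothetical, not Twitter's actual implementation.

```python
# Hypothetical sketch of a BotMaker-style deny rule. A tweet matching
# the condition is blocked at send time (the sender would see an error).
import re

BLACKLIST = {"slur1", "slur2"}           # placeholder terms
PROTECTED_MENTIONS = {"@targetaccount"}  # accounts the rule protects

def normalize(text: str) -> str:
    # Collapse simple evasions such as d-a-s-h-e-d or d.o.t.t.e.d letters
    # before matching against the blacklist.
    return re.sub(r"(?<=\w)[-._](?=\w)", "", text.lower())

def deny(tweet: str) -> bool:
    """Return True if the tweet should be blocked before sending."""
    text = normalize(tweet)
    mentions_target = any(m in text for m in PROTECTED_MENTIONS)
    contains_slur = any(term in text for term in BLACKLIST)
    return mentions_target and contains_slur

print(deny("@targetaccount you s-l-u-r-1"))  # blocked: evasion collapsed
print(deny("hello @targetaccount"))          # allowed: no blacklisted term
```

Note that, as the article describes, such text normalization is easily defeated (for instance by putting the slur in an image), which is exactly how the harassers responded.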
When asked whether a new tool had been used, or BotMaker repurposed, a Twitter spokesperson replied: "We regularly refine and review our spam tools to identify serial accounts and reduce targeted abuse. Individual users and coordinated campaigns sometimes report abusive content as spam and accounts may be flagged mistakenly in those situations." It’s not clear whether this filter is still in place. (I attempted to test it with "rat," the only word that I was willing to try to tweet, and my tweet did go through. The filter may have been removed, the word "rat" may have been removed from the blacklist, or the filter may have only been applied to recently created accounts.) It’s hard to shed a tear for a few missing slurs, but the way they were censored is deeply alarming to free speech activists like Eva Galperin of the Electronic Frontier Foundation. "Even white supremacists are entitled to free speech when it’s not in violation of the terms of service. Just deciding you’re going to censor someone’s speech because you don’t like the potential political ramifications for your company is deeply unethical. The big point here is that someone on the abuse team was worried about the ramifications for Twitter. That’s the part that’s particularly gross."
What’s worrisome to free speech advocacy groups like the EFF about this incident is how quietly it happened. Others may see the bigger problem being the fact that it appears to have been done for the benefit of a single, high-profile user, rather than to fix Twitter’s larger harassment issues. The selective censorship doesn’t seem to reflect a change in Twitter abuse policies or how they handle abuse directed at the average user; aside from a vague public statement by Twitter that elides the specific details of the unprecedented move, and a few, mostly-unread complaints by white supremacists, the entire thing could have gone unnoticed. Eva Galperin thinks incidents like these could be put in check by transparency reports documenting the application of the terms of services, similar to how Twitter already puts out transparency reports for government requests and DMCA notices. But while a transparency report might offer users better information as to how and why their tweets are removed, some still worry about the free-speech ramifications of what transpired. One source familiar with the matter said that the tools Twitter is testing "are extremely aggressive and could be preventing political speech down the road." He added, "are these systems going to be used whenever politicians are upset about something?"
© The Verge
UK prime minister David Cameron has called for “extremist material” to be taken offline by governments, with help from network operators.
14/11/2014- Speaking in Australia's Parliament on a trip that will also see him attend the G20 leaders' summit, Cameron spoke of Australia and Britain's long shared history, common belief in freedom and openness and current shared determination to fight terrorism and extremism. Cameron said [PDF] poverty and foreign policy are not the source of terror. “The root cause of the challenge we face is the extremist narrative,” he said, before suggesting bans on extremist preachers, an effort to “root out” extremism from institutions and continuing to “celebrate Islam as a great world religion of peace.”
He then offered the following comment:
“A new and pressing challenge is getting extremist material taken down from the internet. There is a role for government in that. We must not allow the internet to be an ungoverned space. But there is a role for companies too. In the UK, we are pushing them to do more, including strengthening filters, improving reporting mechanisms and being more proactive in taking down this harmful material. We are making progress, but there is further to go. This is their social responsibility, and we expect them to live up to it.” Cameron's remarks have a strong whiff of a desire to extend state oversight of the internet. The UK already prohibits “Dissemination of terrorist publications” under Part 1, Section 2 of the Terrorism Act 2006. The country also operates a plan to reduce hate crime, in part by removing hate material found online.
A May 2014 report [PDF] on that plan's progress notes difficulties securing co-operation from ISPs and social networks, especially those outside the UK. Security is not on the G20 agenda, but what the leaders choose to discuss around the table is fluid. Might the sentence “We must not allow the internet to be an ungoverned space” therefore be an attempt to steer talks in the direction of international co-operation around internet regulation? The summit runs over the weekend and Vulture South has accreditation to the event, meaning we can get our hands on any communiqués the leaders emit. Most of the output of such events is negotiated in advance, but we'll keep an eye on things in case Cameron's thought bubble expands and also because a major initiative to combat multinational tax avoidance is expected to be one of the event's highlights.
© The Register
Many observers were encouraged to see Manchester City midfielder Yaya Toure speak out via the BBC last week against those who had racially abused him over Twitter just hours after he had reactivated his account.
11/11/2014- As one of the sport's most high-profile figures, it felt as if the Ivory Coast international had made a stand on behalf of an ever-growing number of similar victims in the game - because Toure is far from alone in being subject to such treatment. Already this season, Liverpool striker Mario Balotelli has been racially abused over the internet after he made fun of Manchester United following their defeat to Leicester City. Last year I interviewed former footballer and Professional Footballers' Association (PFA) chairman Clarke Carlisle at his house. He showed me his laptop and the torrent of vile racial abuse he had received via Twitter, abuse he did not want his wife or children to see, and which had left him feeling numb. And all because he had been commentating on a match that week on TV. Last season, 50% of all complaints about football-related hate crime submitted to anti-discrimination organisation Kick It Out (KIO) related to social media abuse. So severe is the problem, KIO now employs a full-time reporting officer whose job is to act on such incidents and refer them to the relevant authorities. Greater Manchester Police are investigating the Toure case, but don't be too surprised if no-one is ever punished. The anonymity users can gain on social media can make it very difficult to track down offenders.
But Kick It Out is also frustrated by what it feels is a lack of co-ordination between the police and Twitter, and by the need for better communication between the two. It feels there needs to be more education for local police forces on the misuse of social media and how complaints are dealt with. In some cases, KIO says, it has made a report but has not heard back from the police, something one source there described as "very disheartening". In addition, KIO wants more clubs to be proactive in coming out to publicly support their players when they are the victims of discrimination online, by calling upon the authorities to work closely with the relevant platform to investigate and track down the offenders. Such concerns are nothing new. Accounts with false identities often mean the police need Twitter to provide them with an IP address for the account if they hope to find them. The Association of Chief Police Officers (ACPO) has said that Twitter only provides this information with a US court order, something which is difficult to get because of the value and protection afforded there to freedom of speech.
Elsewhere - like in the UK - it is optional, although Twitter insists it is co-operating with law enforcement here more than ever before. During the first half of 2014, Twitter received 78 account information requests, 46% of which resulted in some information being produced, the highest proportion to date. It says it has made it easier for users to report malicious posts, claims it has become more vigilant in blocking offensive tweeters, and is developing technology that prevents barred trolls from simply opening up a new account.
Progress being made
The police insist important progress is being made, and that platforms are now beginning to appreciate the responsibility they have for what is posted on their networks. Last year, following a long legal battle in France when prosecutors argued Twitter had a duty to expose wrong-doers, the site agreed to hand over details of people who allegedly posted racist and anti-Semitic abuse. Although that set an important precedent, Twitter admits it could do better. Earlier this year it promised to change its policies after Robin Williams's daughter Zelda was targeted by trolls following his suicide. But there are signs that such abuse will not be tolerated.
In 2012 Liam Stacey, from Swansea, received a prison sentence after racially abusing former Bolton player Fabrice Muamba on Twitter. Last year, a man who admitted sending racist tweets to two footballers was ordered to pay £500 compensation to each of them. And police were heartened last month when a Nazi sympathiser was jailed for four weeks for sending anti-Semitic tweets to Jewish MP Luciana Berger. But these cases, of course, while dissuading some, will not prevent further incidents from occurring. Twitter admits it is impossible to monitor all of the 500 million or so postings going through its networks each and every day. One expert I spoke to told me that some of the cases the UK media has picked up on would simply not register in the US, where such abuse is often disregarded and denied the publicity some trolls crave. Others will insist that it is absolutely right that such vitriol is exposed and condemned.
Paul Giannasi, hate crime lead officer at ACPO, said the challenge was huge, but efforts to combat the problem were constantly evolving. ACPO sits on an international cyber-hate working group led by the US-based Anti-Defamation League. This group brings parliamentarians, professionals and community groups together with industry leaders to help find solutions that balance protection from offensive comments with the right to free speech. "The police will draw on the guidelines issued by the Director of Public Prosecutions and The College of Policing to assess whether the threshold for communications which are grossly offensive, indecent, obscene or false is met. "The CPS guidance is very clear that a high threshold applies in these cases. We encourage officers to work with the CPS at an early stage of an investigation to determine whether proceeding with a prosecution is in the public interest." Certainly, with its tradition of rivalry and tribal passions, football seems particularly vulnerable to the dark side of social media.
Twitter and other platforms have enabled fans and the players they idolise to get closer than ever. Amid the anodyne world of bland footballer interviews, it is refreshing that players' true emotions and opinions can often be glimpsed online even if sometimes it results in them being fined. But it also enables a sad and cowardly minority to abuse and insult in a way that would never be tolerated - and that they would never dare to - in a public, physical place. Amid unprecedented interest and media exposure, footballers can be followed by millions of supporters. This makes them an attractive target for the trolls who crave attention through a retweet, and seek maximum impact from their messages of hate. The question is how to tackle them without endangering the freedom that makes social media such a special place to so many.
How Twitter tackles abuse
Over the past year it has expanded the number of people working on abuse reports, providing 24/7 cover. It has invested in technology to make it harder for serial abusers to create accounts and perpetuate abusive behaviour. It has worked with the Safer Internet Centre and charities that specialise in developing strategies to counter hate speech.
© BBC News
11/11/2014- In the wake of a series of terrorist “run over” attacks, where Israeli pedestrians have been mowed down by Palestinian terrorists, more than 90 Facebook pages glorifying the attacks and urging more violence against Israeli civilians have been identified. The social media campaign, which uses the Arabic term “Daes” (Run-over), which is a play on the word “Daesh” (ISIS), praises the attacks as a form of resistance, according to the Anti-Defamation League. Some of the posts on these pages describe the “run-overs” as part of a new revolution, a form of “car Intifada.” Many of the pages also enable users to give vent to expressions of violent anti-Semitism. “This campaign is the latest example of how social media is being used to promote and glorify terrorism and anti-Semitism,” said Abraham H. Foxman, ADL National Director. “Social media platforms were not created to spread anti-Semitism and terrorism to the masses.” The campaign is also starting to spread on Twitter, according to ADL. The “Daes” hashtag has attracted numerous terrorist sympathizers. Several pages include anti-Semitic posts depicting religious Jews with hooked noses running away from vehicles attempting to run over them. ADL is in the process of notifying those social media companies about those accounts promoting the campaign.
The huge gaming hit Clash of Clans leaves its players free to engage in antisemitism. Among the millions of players are groups that call themselves 'holocaust', for example. Players also come up with provocative antisemitic captions.
5/11/2014- In Clash of Clans various clans do battle. Clans are made up of at most 50 players, who combat players from other clans. A search by BNR Nieuwsradio found at least 45 clans calling themselves 'holocaust'. Other names used include 'jew raiders' and 'we kill jews'. Some captions used are 'we burn jews for fun' and 'Anne Frank was easy to find'. Many games, such as World of Warcraft, try to prevent this kind of behaviour. They employ moderators who police players’ illegal or offensive practices. It is not clear if Supercell, the Finnish game development company behind Clash of Clans, does this as well. Clash of Clans was launched in 2012. Supercell responded by email, saying that it is not possible to prevent antisemitic expressions, given the millions of people who play its games: “We will close down clans that use abusive language when we see it happening.”
© BNR (dutch)
Labour leader condemns spike in antisemitic attacks, and calls on social media sites to do more to identify online trolls.
4/11/2014- A recent spike in antisemitic attacks should serve as a “wake-up call” for anyone who thinks the “scourge of antisemitism” has been defeated in Britain, Ed Miliband warned on Tuesday. In a post on his Facebook page, the Labour leader called for a “zero-tolerance approach” to antisemitism and said that some Jewish families had told him they felt scared for their children. Miliband intervened after the Community Security Trust, which provides training for the protection of British Jews, recorded a 400% increase in antisemitic incidents in July this year compared with the same month in 2013. The Labour leader highlighted what he described as “shocking attacks” on Luciana Berger, the shadow public health minister, and Louise Ellman, the chair of the Commons transport select committee. The two senior Labour MPs, who are Jewish, were targeted by antisemitic trolls after a man was jailed for four weeks after he admitted sending what Miliband described as a “vile” tweet.
The Jewish Chronicle reported that Garron Helm was jailed after tweeting a photograph of Berger superimposed with a yellow star - as used by the Nazis to identify Jews during the war. Miliband called on social media sites to do more to identify the perpetrators. He wrote: “There have been violent assaults, the desecration and damage of Jewish property, antisemitic graffiti, hate-mail and online abuse. The shocking attacks on my colleagues Luciana Berger and Louise Ellman have also highlighted the new channels by which antisemites spread their vile views. That is why it is vital that Twitter, Facebook and other social media sites do all they can to protect users and crack down on the perpetrators of this sickening abuse.” He said that the rise in attacks took place during the recent conflict between Israel and Hamas in Gaza and that it was important to be temperate in discussing Israel.
“More than half of the anti-Semitic incidents recorded by the CST in July involved direct reference to the conflict and the previous highest number of monthly incidents recorded by CST (January 2009) also coincided with a period of fighting between Israel and Hamas. We need to tackle this head on because I am clear that this can never excuse antisemitism, just as conflicts elsewhere in the Middle East can never justify Islamophobia. All of us need to use calm and responsible language in the way we discuss Israel, especially when we disagree with the actions of its government. A zero-tolerance approach to anti-Semitism and prejudice in all its forms here in Britain will go hand-in-hand with the pursuit of peace in the Middle East as a key focus of the next Labour government’s foreign policy.” Miliband, who is Jewish, was recently criticised by the actor Maureen Lipman after he voted in favour of recognising Palestinian statehood.
In an article in Standpoint, Lipman wrote: “Just ... when our cemeteries and synagogues and shops are once again under threat. Just when the virulence against a country defending itself, against 4,000 rockets and 32 tunnels inside its borders, as it has every right to do under the Geneva convention, had been swept aside by the real pestilence of IS, in steps Mr Miliband to demand that the government recognise the state of Palestine alongside the state of Israel.” The New York Times recently reported on Miliband’s vote in favour of Palestinian statehood under the headline: British Labour Chief, a Jew Who Criticizes Israel, Walks a Fine Line. Its London correspondent Stephen Castle wrote: “Britain’s center-left Labour Party often sympathizes instinctively with the Palestinian cause, and Mr Miliband is not the first party leader to criticize Israel. Yet his willingness to speak about his family’s story and connections to Israel – showcased in a high-profile visit there this year – has brought a personal dimension to a loaded issue.”
© The Guardian
A 33-year-old minor hockey coach from Langley, B.C. has been fired after posting a series of shocking Nazi propaganda images to his Facebook page.
5/11/2014- Christopher Maximilian Sandau coached players in North Delta before league officials were alerted to his posts, some of which question the Holocaust death toll and suggest prisoners at the Auschwitz concentration camps were well-cared for. Another post features a swastika and reads, “If this flag offends you, you need a history lesson.” The North Delta Minor Hockey Association issued a statement confirming Sandau was let go over the weekend and condemning the material he shared online. “The posts contained extreme and objectionable material believed to be incompatible with an important purpose of our Minor Hockey Association: To promote and encourage good citizenship,” president Anita Cairney said in a statement. “The NDMHA requires that our coaches present themselves as positive role models for our children athletes.” The association said it won’t be commenting further on the advice of its legal counsel, but that alternative coaching arrangements have already been made. On Wednesday, Sandau told CTV News he’s been treated unfairly. The former coach said he was passionate about his job, and gave his players extra practice time every week free of charge.
“I was doing a good job and I wasn’t trying to impose my political beliefs or anything on anyone,” he said. “From the time I stepped onto the parking lot of the arena to the time I left, I was all about hockey and trying to help the kids get better.” Sandau acknowledged his opinions are likely to offend people, but insisted he’s not a neo-Nazi, merely a “history buff” who believes German atrocities during World War II have been misconstrued, or fabricated altogether. Apparent hostility toward Jewish people is a recurring theme in his posts, however. One features the image of a World War II soldier, claiming he was killed “so the Jews could control your banks,” and “so foreigners could run your civil and public services.” Asked about the post, Sandau conceded that “it does generalize a little too much, obviously,” and said he might consider taking that one down. Sandau said he was given a chance to keep his job by changing his Facebook settings and making his posts private, but turned it down on principle. Parents with children on either of the two North Delta minor hockey teams Sandau coached have been informed of his dismissal.
© CTV News
Jamie Bartlett explains why the battle for hearts and minds has moved online
4/11/2014- The head of GCHQ has warned that firms such as Facebook and Twitter are "in denial" about the use of their sites by terrorists and criminals. And he's right: extremists of all kinds have indeed "embraced the web". This is only natural. The battle for hearts and minds is a vital part of any conflict. To be seen as on the side of right; to create a groundswell of popular support; to reach new supporters. Whether it’s Isil or the extreme Right, the aim is to convince people to take your side. If not on the battlefield itself, then emotionally, morally, vocally, financially – and now, digitally. This battle used to be waged from on high: propaganda air dropped from governments and media broadcasters. Now it’s on Facebook and Twitter.
It barely needs saying that social media has been a boon to society – allowing anyone with a message or campaign to reach out to millions of people at almost zero cost. That includes charities, campaigning groups, political dissidents, and the rest. But for angry or violent groups social media is the perfect vehicle to spread a message and win new fans: a free and open way to share and disseminate propaganda to millions of people. What’s more, the cost of producing high-quality videos and multimedia content is now practically nothing. This means that small groups can exaggerate their influence and extend their reach more easily than ever before. And that’s exactly what they are doing.
Let’s start with Isil. So far, they have organised hashtag campaigns on Twitter to generate internet traffic. They then get those hashtags trending, which generates even more traffic. They hijack other Twitter hashtags – such as those about the World Cup, and more recently the iPhone 6, which they use to start tweeting Islamist propaganda – to increase their reach further still. They have posted real time footage from the battlefield, and directed it against their enemies. They use social media "bots" to automatically spam platforms with their content. In short, they are very active indeed: social media is an important part of their modus operandi. Although we’re constantly told that Isil are marketing geniuses, this is all pretty standard for any second-rate advertising company. And why wouldn’t it be? Many Isil supporters are young, Western men for whom social media is second nature. What they have done, crucially, is to create the impression of a much larger groundswell of popular support than they have – and generate enormous amounts of free publicity from the world’s media. (They do this quite deliberately too – directing tweets at the BBC and CNN in an effort to get coverage).
It goes something like this: this media mujahideen – most of whom aren’t even in Syria – post lots of tweets, attaching a hashtag to their tweets to ensure it reaches more people (such as #iphone6). People notice, and start using the same hashtag to criticise the group. Journalists write about how much support and traffic Isil is generating on Twitter, which then gets them mainstream media coverage. Isil will often include the Twitter accounts of major media outlets when they post. @BBCWorld and @BBCTrending were important Twitter accounts through which word spread about the threats Isil made to America. Between 3 and 9 July a BBC article, Americans scoff at Isil Twitter threats was the most shared article in tweets containing the tag #CalamityWillBefallUS. We’re doing their work for them.
According to Ali Fisher, a specialist who has been monitoring how Islamists use social media for the last two years, these Jihadist propaganda networks are stronger than ever. "They disseminate content through a network that is constantly reconfiguring, akin to the way a swarm of bees or flock of birds constantly reorganises in flight. This approach thrives in the chaos of account suspensions and page deletions." Fisher calls this a "user-curated swarmcast". The UK’s far-Right is possibly even more impressive than Isil. Although it might be politically convenient to draw moral equivalences, they are quite different to Isil in their values, radicalism, brutality and threat to national security. Nevertheless, in September the BBC suggested that the far-Right is on the rise in the UK, as a result of Islamic State and sex abuse stories involving men of Pakistani descent. According to a senior Home Office official, the UK government underestimates the threat. He claimed that, since last year, at least five new far-Right groups have formed.
I’m not sure exactly what "far-Right group" means anymore, because the far-Right are also very gifted at using the net to give the impression they are bigger than they really are. For the most part the UK’s far-Right is relatively small and disjointed. Online, though, it's different. Just like Isil, much of the far-Right has moved its modus operandi online: Facebook, Twitter, YouTube, forums and blogs. There are hundreds of pages and forums dedicated to every shade of extreme nationalism. New groups pop up and disappear every day, and it’s very hard to work out whether they are legitimate. Just as with Isil, it’s often a handful of people making a lot of noise, without necessarily becoming a significant force in the real world. The latest far-Right movement is called Britain First. They've been around for a while – and are perhaps the most cunning users of Facebook of any political movement. They have half a million Facebook "Likes" – far more than the Tories or the Labour Party. They produce and share very good content online: campaigns about the armed forces, about animal cruelty, about child sex abuse. Things that people with little interest in politics would share.
But according to Hope Not Hate, an anti-fascist campaign group, these general campaigns mask a more sinister motive. They argue that Britain First have been involved in intimidating British Muslims, including invading mosques, and call them "confrontational, uncompromising and dangerous". According to Hope Not Hate, Britain First has a core membership of only around 1,500 people – most of whom were followers of former leader Jim Dowson, an anti-abortion campaigner. There are, reckons Matt Collins (a former National Front member who now works for Hope Not Hate), around 60-70 hardcore activists who are "willing to put on their badges and march on the street". But, Collins claims, their use of Facebook to increase their reach is "far beyond" anything he’s seen before. He also claims some of their Likes have probably been paid for. That’s the problem: it’s very hard to know.
NSA whistleblower Edward Snowden has complicated this story considerably. Since his revelations, there has been a significant growth in the availability and use of (usually free) software to guard privacy and keep internet users anonymous. There are hundreds of people working on ingenious ways of keeping online secrets or preventing censorship, designed for the mass market rather than the computer specialist: user-friendly, cheap and efficient. These are, and will continue to be, important and valuable tools for democratic freedoms around the world. Unfortunately, along with journalists, human rights activists and dissidents, groups like Isil and the far-Right will be among the early adopters.
Censorship is not the answer. The Home Secretary has called for more action on tackling extremism – and I agree that it's necessary – but it's far easier said than done. Online, groups and organisations can be shut down and then relaunched quicker than the authorities can phone Facebook’s head office. And here’s the Gordian knot: the more we censor them, the smarter they get. When Isil was kicked off Twitter, some supporters went to Diaspora, one of several new decentralised social media platforms run by users on their own servers – meaning that, unlike on YouTube or Twitter, their content is hard to remove.
The answer is found in a riddle. Extremists are motivated, early adopters of technology – and their ideas and propaganda spread person to person, account to account. The battle for ideas used to be waged from on high. But today it’s more like hand-to-hand combat, played out across millions of social media accounts, 24 hours a day. Censorship doesn’t work in this distributed, dynamic ecosystem. But the same tools used by extremists are free to the rest of us too. That gives all of us both the opportunity and the responsibility to defend what we believe. Unthinkable three years ago: you can now argue with an Isil operative currently in Syria via Twitter, or a Britain First activist on Facebook – all from your own home. The battle for ideas online can't be won, or even fought, by governments. It's down to us.
© The Telegraph
3/11/2014- A Kremlin-backed human rights body has assailed a Russian website as “Nazi” and “racist” for claiming that nearly one quarter of Russia’s billionaires are Jewish – but the response from one Jewish leader was more composed. Nikolai Svanidze of the Russian Human Rights Council – a Kremlin-affiliated body with no executive powers – condemned Lenta.ru, which covers the banking sector, for publishing a report that broke down by faith and ethnicity those Russian citizens appearing in Forbes Magazine’s 2014 list of the world’s wealthiest individuals. According to lenta.ru, 48 of the top 200 wealthy Russians are Jews, with a combined net worth of $132.9 billion. Mikhail Fridman, with a net worth of $17.6 billion, tops the list and is Russia’s second richest man. “It’s a Nazi and racist approach,” Svanidze was quoted as saying by the Slon.ru news site.
But, as JTA reported, Yuri Kanner, president of the Russian Jewish Congress, defended the decision to publish the study. “If you cannot compare the proportion of representatives of various nationalities in the general ethnic composition of the country, it is impossible to understand who is really successful and who is not,” he told the currsorinfo.co.il news website on Oct. 29. He said, however, that he doubted the authenticity of the research. “The proportion of Jews in the population of the Russian Federation is calculated incorrectly. Besides, to compare the Jewish population, which is mainly concentrated in the major cities and has a university degree, with a total mass of Russian citizens, it is not accurate,” Kanner said. Of the Jews who made the list, 42 are of Ashkenazi origin, and together have a net worth of $122.3 billion.
Six Kavkazi Jews (a group also known as “Mountain Jews”) appear on the list, with a combined net worth of $10.6 billion. There are only 762 Russian citizens classified as Kavkazi Jews, according to the Russian Bureau of Statistics, and they represent just 0.00035 percent of the population. A leading Russian affairs analyst was skeptical of the Kremlin’s motivations in condemning the website, arguing that false claims of Ukrainian anti-Semitism had been advanced in partial justification of the Russian invasion of Crimea – claims that were both condemned and ridiculed by Jewish leaders in Ukraine. Michael Weiss, editor-in-chief of The Interpreter, a magazine covering Russian affairs, told The Algemeiner: “Russian ultra-nationalists and the far right seize on the theme of wealthy, bloodsucking Jewish oligarchs a great deal, but what nobody bothers to say is that the chief enabler of Russian nationalism is Vladimir Putin.”
Weiss pointed out that in spite of stringent laws against extremism, neo-Nazis marched openly in St. Petersburg earlier this year, while later this week, a full array of extremists is expected at the annual Russian March. “Putin is aligned with fascist parties in Europe like Jobbik in Hungary and Front National in France,” Weiss added. “He’s looking to create fifth columnists in Europe, drawn from racist and xenophobic parties with the occasional communist thrown in. So it’s a bit rich for the regime to be calling out antisemitism.”
© The Algemeiner
With 1.35 billion people checking into Facebook every month, there are bound to be some things that pop up on your news feed that you’d rather not see.
1/11/2014- The social media site has the difficult job of being a place where people can feel free to share their views, likes and dislikes, but also respect the myriad of cultures and values held by its global audience. What one person may find hilarious, others may find deeply offensive. An Australian mother opened a can of worms surrounding Facebook censorship after complaining that photographs of her giving birth had been removed from the site. Milli Hill, who is shown naked in the pictures, campaigns for positive depictions of childbirth and said Facebook had censored her “powerful female images”. This prompted news.com.au to ask its Facebook followers whether they thought the site responded to offensive material effectively. We received nearly 700 Facebook comments and emails that revealed users had mixed experiences. Some were satisfied with the site’s prompt removal of offensive material, while others were left confused when content that they thought was abhorrent was found not to breach Facebook standards.
Our readers provided examples of content that they had reported, that was investigated and deemed acceptable. They included:
A pornographic cartoon
An animal cruelty video
A video showing a sex act
An image of a man holding the decapitated head of someone else
Graphic photos of a dead baby
A photograph of a man pointing a gun at the head of a baby
A comment that Tony Abbott should be assassinated
A video of a teenager being beaten senseless.
While she was unable to comment on these specific cases, Facebook’s Australian spokeswoman said the site worked hard to create a safe and respectful place for sharing and connection. “This requires us to make difficult decisions and balance concerns about free expression and community respect,” she told news.com.au. “We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organising real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual, eg bullying. “Sometimes people encounter content on Facebook that they disagree with or find objectionable but that do not violate our community standards.” Many readers objected to videos or images of animal cruelty, but Facebook considers the context in which the video was posted before taking it down.
This type of content is often posted to condemn it or galvanise people into action in order to stop it. If so, that material is allowed. Similarly, the self-regulating nature of the Facebook community can be more effective than Facebook staffers because people can pressure their friends to remove content through their comments. “Facebook receives hundreds of thousands of reports every week and, as you might expect, occasionally we make a mistake and remove a piece of content we shouldn’t have or mistakenly fail to remove a piece of content that does violate our community standards,” the spokeswoman said. “When this happens, we work quickly to address this by apologising to the people affected and making any necessary changes to our processes to ensure the same type of mistakes do not continue to be made.”
While some news.com.au readers were disappointed with Facebook’s responses to complaints, many others said they were satisfied. Reader Michelle said she had reported content several times and each time the offensive page or material was promptly removed, including get-rich-quick spam, sexual content and racist jokes. Another reader, Cathy, helped to have a number of comments taken down that threatened violence towards Tony Abbott. Meanwhile, Karen said her experience had also been positive. “Not that I am a serial complainer either but I have reported material of graphic violence nature, primarily cruelty to animals, and on one occasion something was removed as a result of that feedback,” she told news.com.au.
How do I report something offensive?
Every update posted to Facebook carries with it a small arrow in the top right corner that allows users to hide the post or report it. If a complaint is made, it is then placed in a queue for assessment.
What does Facebook consider unacceptable?
Nudity: Photos of breastfeeding or Michelangelo’s David are likely to pass the test, however. Milli Hill’s childbirth photographs were most likely taken down because of the nudity depicted
Violence and threats
Self-harm: “We remove any promotion or encouragement of self-mutilation, eating disorders or hard drug abuse,” Facebook says
Bullying and harassment: Repeatedly targeting users with unwanted friend requests or messages is considered harassment
Hate speech: “While we encourage you to challenge ideas, institutions, events, and practices, we do not permit individuals or groups to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition,” Facebook says
Graphic content: Some graphic content is considered acceptable if it is shared for the purposes of condemning it, but it should carry a warning. “However, graphic images shared for sadistic effect or to celebrate or glorify violence have no place on our site,” Facebook says
Privacy violations: Claiming to be another person and creating multiple accounts is a no-no
Selling items illegally
Phishing and spam
Fraud or deception.
Who assesses complaints? Are there programs that do it automatically?
All complaints are reviewed by Facebook staffers, and not by any automatic programs. Complaints are assessed against Facebook’s community standards, which govern what material is acceptable on the site. There are dedicated teams based in the US, Ireland and India, so complaints can be processed around the clock. More serious material is prioritised, but most reports are reviewed within 72 hours. Reporting a post does not guarantee it will be removed. “Because of the diversity of our community, it’s possible that something could be disagreeable or disturbing to you without meeting the criteria for being removed or blocked,” the Facebook community standards page reads.
What can I do if something I find offensive is not taken down?
Facebook also offers personal controls so every user can hide or quietly block people, pages or applications they find offensive. Facebook has tools for controlling what you see in your news feed, and tools for controlling your Facebook experience generally.
© News Australia
Laws not strong enough to police it, say experts
1/11/2014- Islamophobia has been an ongoing concern in the west since 9/11, but a number of recent incidents in Britain have given rise to a new wave of hatred that experts say is finding a breeding ground online. Part of the problem, researchers say, is that right-wing groups can post anti-Islamic comments online without fear of legal prosecution. “If they were to say, ‘Black people are evil, Jamaicans are evil,’ they could be prosecuted,” says Fiyaz Mughal, founder of Islamophobia reporting web site TellMamaUK.org. But because religious hatred isn't covered legally in the same way that racism is, Mughal says "the extreme right are frankly getting away with really toxic stuff.” Researchers believe the rise of the Islamic State in Iraq and Syria (ISIS) and incidents such as the murder of British soldier Lee Rigby and the recent sexual exploitation scandal in the town of Rotherham have contributed to a spike in online anti-Muslim sentiment in the UK.
Imran Awan, deputy director of the Centre for Applied Criminology at Birmingham City University, noticed the trend when he was working on a paper regarding Islamophobia and Twitter following Rigby's death. Rigby was killed in the street in southeast London in 2013 by two Islamic extremists who have since been convicted. Awan says the anonymity of social media platforms makes them a popular venue for hate speech, and that the results of his report were “shocking, to say the least.”
'A year-by-year increase'
Of the 500 tweets from 100 Twitter users Awan examined, 75 per cent were Islamophobic in nature. He cites posts such as "Let's go out and blow up a mosque" and "Let’s get together and kill the Muslims," and says most of these were linked to far-right groups. Awan’s findings echo those of Tell MAMA UK, which has compiled data on anti-Muslim attacks for three years. (MAMA stands for "Measuring Anti-Muslim Attacks.") Tell MAMA's Mughal says anti-Muslim bigotry is "felt significantly," and adds that "in our figures, we have seen a year-by-year increase." Researchers believe far-right advocates are partly responsible for a spike in online hate speech. “There’s been a real increase in the far right, and in some of the material I looked at online, there were quite a lot of people with links to the English Defence League and another group called Britain First,” says Awan.
Both Mughal and Awan believe that right-wing groups such as Britain First and the EDL become mobilized each time there is an incident in the Muslim community. The Twitter profile of the EDL reads: “#WorkingClass movement who take to the streets against the spread of #islamism & #sharia #Nosurrender #GSTQ.” Their Facebook page has over 170,000 likes. Below that page, a caption reads, “Leading the Counter-Jihad fight. Peacefully protesting against militant Islam.” EDL spokesperson Simon North dismisses accusations that his group is spreading hate, emphasizing that Muslims are often the first victims of attacks carried out by Islamic extremists. “We address things that are in the news the same way newspapers do,” says North.
The spreading of hate
Experts on far-right groups, however, say their tendency to spread hateful messages around high-profile cases is well established. North allows that some Islamophobic messages might emanate from the group's regional divisions. But they do not reflect the group’s overall thinking, he says. “There are various nuances that get expressed by these organizations,” North says. “Our driving line is set out very clearly in our mission statement.” According to EDL's web site, their mission statement is to promote human rights while giving a balanced picture of Islam.
Awan argues online Islamophobia should be taken seriously, and says police and legislators need to secure more successful prosecutions of this kind of hate speech and be more “techno-savvy when it comes to online abuse.” Prosecuting online Islamophobia, however, is rare in the UK, says Vidhya Ramalingam of the European Free Initiative, which researches far-right groups. That's because groups like Britain First, which have over 400,000 Facebook likes, have a fragmented membership and do not have the traditional top-down leadership that groups have had in the past. Beyond that, UK law allows for the parody of religion, says Mughal, which can sometimes be used as a cover for race hate. “The bar for prosecution of race hate is much lower, because effectively the comedic lobby has lobbied so that religion effectively could be parodied.”
The case in Canada
Online Islamophobia is also flourishing in Canada. The National Council of Canadian Muslims (NCCM) is receiving a growing number of reports. But there are now fewer means for prosecuting online hate speech in Canada. Section 13 of the Canadian Human Rights Act protected against the wilful promotion of hate online, but it was repealed by Bill C-304 in 2012. “It’s kind of hard to say what the impact is, because even when it existed, there weren’t a lot of complaints brought under it,” says Cara Zwibel of the Canadian Civil Liberties Association. Though there is a criminal code provision that protects against online hate speech, it requires the attorney general’s approval in order to lay charges — and that rarely occurs, says Zwibel.
Section 319 of the Criminal Code of Canada forbids the incitement of hatred against “any section of the public distinguished by colour, race, religion, ethnic origin or sexual orientation." A judge can order online material removed from a public forum such as social media if it is severe enough, but if it is housed on a server outside of the country, this can be difficult. Ihsaan Gardee, executive director of NCCM, says without changes, anti-Muslim hate speech will continue to go unpunished online, which he says especially concerns moderate Muslims. “They worry about people perceiving them as sharing the same values these militants and these Islamic extremists are espousing.”
© CBC News
By Sam Volkering
28/10/2014- What’s the best way to start a riot? Let me help you out…suppress free speech. It’s possibly the number one reason people protest. And if the crowds face heavy-handed control measures, these protests sometimes turn into full-blown riots. Communities rally with greater force now than ever before thanks to social networks. Today, if your cause is engaging enough, it’s easy to rally the troops. A strong social media collective can be as powerful as a state army. In fact, using social networks is the best way to start a movement. Look at the Arab Spring or Euromaidan in Ukraine. Even the recent protests in Hong Kong…each was organised through social networks.
So much fear, so many reasons to protest
The world is in a very volatile state. And I’m not even talking about the markets. Ebola spread across western parts of Africa like wildfire, and now the whole world is panicked over it. Scandalous ‘news’ headlines don’t help. You can’t avoid it on social media either. In between ‘news’ about The Bachelor, all I see on my Facebook feed is horrible news: beheadings, ISIS and Ebola currently dominate. Thank the world for cat videos…oh blessed be the cat videos. At least there’s something to smile about day to day… But, along with Ebola, there’s plenty else wrong with the world. ISIS has created racial and religious tension not just in Islamic nations but also across the world. Earlier in the month, there were fatal protests in Turkey. Over the weekend, there was a violent riot in Cologne, Germany. The target of the protest — Islamic extremism. The Cologne protest was organised by a far-right, neo-Nazi group. The protest had around 4,000 people, according to IBTimes. This was double the number expected by police. Most of the protesters were gathered through social media. And things got ugly. Riot police were called in. Water cannons and pepper spray were deployed…
Just when you thought it couldn’t get worse
Of course, much of this violence is a direct result of the actions of global leaders. Whether related to ISIS or not, the violence and protests around the world stem from misguided government policies. The idea of the protest is nothing new, but social networks are. And the combination of the two has given the people greater influence over decision makers. The voice of many is always more powerful than the voice of a few. For better or worse, connected networks allow people to share a voice and a view like never before. I highlight this because trouble is brewing in one particular eastern European country…one you probably wouldn’t expect. This country’s government is trying to implement one of the most regressive, oppressive policies of the modern era… The Hungarian government wants to implement an ‘internet tax’. The draft bill has a provision where a tax is paid to the government revenue collectors per gigabyte of data transfer. This would apply to consumers and businesses.
Hungary already has the highest VAT (GST) rate of any country in the world at 27%. You can see why another tax has angered the people of Hungary. But more than that, it’s widely viewed as the government taxing the freedom of information. The internet is perhaps the greatest tool of all time for creating and accessing information. It’s why we live in the ‘information age’. Anyone can use the internet to express opinions, ideas, ideals and views. It’s the ultimate tool for freedom of speech. On Sunday, approximately 100,000 Hungarians gathered in front of the Economic Ministry to protest these regressive laws. And the protest was organised through Facebook by a group with over 210,000 followers. Words broke out through Facebook, Twitter and other social networks and the people came together to have their say. As part of the protest, attendees held up their phones as a sign to the government.
This protest was peaceful, but it proved a significant point: Governments should not try to enact policies that aim to restrict what has become an essential human right — that is, access to information. That’s really what the internet is, after all — the world’s biggest collection of information. And it should be free to access by anyone, anywhere as a basic human right. It’s an optimistic goal, but hopefully, one day, the entire world will have free access to the internet. The world should also strive for clean water, food and shelter for all. But perhaps the internet is equally as important. Perhaps the internet could provide the information to help communities achieve those other goals… Regardless, social networks are clearly crucial to connecting and empowering people. And the internet is the backbone of that power. When government tries to restrict our freedom of information, it will face a resolute and defiant community.
© Tech Insider
As 'Hitler' Twitter account gains more and more followers and Facebook page displays 'list of Jews,' Foreign Ministry and EU representatives discuss ways to combat anti-Semitism.
28/10/2014- "It's hard being openly Jewish in Europe today," Gideon Bachar, the Director of the Department for Combating Anti-Semitism and Holocaust Remembrance in the Foreign Ministry, said Monday. An experts' meeting on the topic of fighting anti-Semitism and racism conducted in Jerusalem today led to various estimates as to the future of the Jewish community in Europe and links between radical Islam and anti-Semitism.
The "Hitler" account has 370,000 followers
"Anti-Semitism is like Ebola," Bachar said. "It's a virus. It constantly accumulates mutations. It changes all the time, adapts itself to the situation, and is transnational. The rise in anti-Semitism is a danger to civilization and to democracy in general." The meeting was attended by Yad Vashem representatives, the State Attorney's Office, the Association of Israeli Students and European Union representatives. "We are witnessing a strong willingness and desire to take action against this phenomenon," Bachar said. "There is an understanding of the problems it poses. Europe is seeing a steady and substantial increase in anti-Semitism." Ido Daniel, Program Director at Israeli Students Combating Anti-Semitism, displayed during the meeting a photo of a French Facebook page with names, pictures and information about Jewish residents, including their place of prayer and the parks where they take their children. He also showed the attendees a faux Adolf Hitler Twitter account, with more than 370,000 followers, that has since been suspended. "The man tweeted a picture of Birkenau and wrote: 'It's a great day at work today,'" Daniel said, reading out the tweet.
According to Bachar, various European initiatives which include prohibitions on circumcision and kosher slaughter "do not stem from anti-Semitic motives, but they do pose a real threat to the continued existence of Jewish life in Europe. Apart from them, hundreds of anti-Semitic demonstrations have taken place. We are diagnosing three phenomena: Leaving the country, assimilation or isolation." Bachar also spoke about recent occurrences in which people removed mezuzahs from their doors, concerns of wearing a yarmulke in public while going to the synagogue and the hiding of Jewish identity.
© Y-Net News
27/10/2014- Up to 10,000 people rallied in Budapest on Sunday (26 October) in protest of Viktor Orban’s government plan to roll out the world’s first ‘internet tax’. Unveiled last week, the plan extends the scope of the telecom tax onto Internet services and imposes a 150 forint tax (€0.50) per every gigabyte of data transferred. European Commission spokesperson Ryan Heath said, under the tax hike, streaming a movie would cost an extra €15. Streaming an entire TV series would cost around €254. The levy, to be paid by internet service providers, is aimed at helping the indebted state fill its coffers. Hungary’s economy minister Mihaly Varga said the tax was needed because people were shifting away from phones towards the Internet. But unhappy demonstrators on Sunday threw LCD monitors and PC cases through the windows of Fidesz headquarters, Orban’s ruling party.
A Facebook page opposing the new tax attracted thousands of followers within hours of being set up after the regime was announced. The page called for a protest, with some 40,000 people having signed up by early Sunday evening. Hungary’s leading telecoms group Magyar Telekom told Reuters the planned tax “threatens to undermine Hungarian broadband developments and a state-of-the-art digital economy and society built on it”. The proposal has generated controversy in Brussels as well. The EU’s outgoing digital chief Neelie Kroes on Sunday told people to go out and demonstrate. “I urge you to join or support people outraged at #Hungary Internet tax plan who will protest 18h today,” she wrote in a tweet. The backlash prompted Orban’s government to roll back the plans and instead place a monthly cap of 700 forints (€2.30) for private users and 5,000 forints (€16) for businesses.
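The arithmetic of the levy is easy to sketch. Below is a minimal illustration in Python using the figures from the reporting (150 forints, roughly €0.50, per gigabyte, later capped at 700 forints a month for private users and 5,000 for businesses); the function name and the usage volumes are assumptions for illustration only, not from the article:

```python
# Rate and caps as reported: 150 HUF per gigabyte transferred,
# with a later-announced monthly cap per user type.
RATE_HUF_PER_GB = 150
CAP_HUF = {"private": 700, "business": 5000}

def monthly_tax_huf(gigabytes, user_type=None):
    """Tax due on one month's data transfer; applies the cap if a user type is given."""
    tax = gigabytes * RATE_HUF_PER_GB
    if user_type in CAP_HUF:
        tax = min(tax, CAP_HUF[user_type])
    return tax

# Uncapped, as originally drafted: 100 GB in a month -> 15,000 HUF (~EUR 50)
print(monthly_tax_huf(100))             # 15000
# With the private-user cap, the same usage is limited to 700 HUF
print(monthly_tax_huf(100, "private"))  # 700
```

Even modest streaming volumes race past the uncapped per-gigabyte rate, which is why the Commission's estimate that streaming a single movie could cost an extra €15 caused such alarm.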
The concession did little to appease critics who say the levy will still make it more difficult for small businesses and impoverished people to gain Internet access. Others say it would restrict opposition to the ruling elite. “This is a backward idea, when most countries are making it easier for people to access the Internet,” a demonstrator told the AFP. "If the tax is not scrapped within 48 hours, we will be back again," one of the organisers of the protest told the crowds. Orban, who was elected for a second term in April, has come under a barrage of international criticism for other tax policies said to restrict media freedoms amid recent allegations of high-level corruption. Civil rights group say the Fidesz-led government fully controls the public-service media and has transformed it into a government mouthpiece.
An advertising tax imposed in August risks undermining German-owned RTL, one of the few independent media organisations in Hungary, which does not promote a pro-Fidesz editorial line. Kroes has described the advertising tax as unfair and one that is intended “to wipe out democratic safeguards” and to rid Fidesz of “a perceived challenge to its power.”
© The EUobserver
26/10/2014- Pavee Point strongly condemns any actions to intimidate and promote violence against Roma in Waterford. This follows the publication of multiple Facebook pages which openly incite hatred against Roma, and reports of a public order incident on Saturday evening where up to 100 people are reported to have gathered outside the home of Roma living in Waterford. The content on Facebook pages to date has shown huge misinformation and racism towards Roma and has included inflammatory, dehumanising and violent language. There is a clear link between online hate speech and hate crime, and there is an urgent need to address the use of the internet to perpetuate anti-Roma hate speech and to organise violence.
European institutions and groups such as the European Roma Rights Centre have raised concerns about rising violence in Europe and the strengthening of extremist and openly racist groups which spread hate speech and organise anti-Roma marches. Attacks in other European countries have included several murders of Roma. We don’t want this to become a feature in Ireland. “Anti-Roma racism does not occur in a vacuum and we now need strong public and political leaders to be visible, vocal and openly condemn anti-Roma actions in Waterford,” said Siobhan Curran, Roma Project Coordinator at Pavee Point. “At a national level a progressive national strategy to support Roma inclusion in Ireland needs to be developed as a matter of urgency,” she continued.
Pavee Point calls on all elements of the media to take on board the recommendations from the Logan Report and avoid sensationalist and irresponsible reporting.
© Pavee Point
Our world is now more connected than ever. Technology – specifically social media – allows us to establish lines of communication hitherto unthinkable.
31/10/2014- As technology has developed and the use of social media proliferated, unfortunately so too has the echo chamber for racism expanded. As chairman of the All-Party Parliamentary Group Against Antisemitism and with the Inter-Parliamentary Coalition for Combating Antisemitism, I have been working together with the industry and MPs from across the world to tackle cyber hate. Predominantly, this has been through improved self-regulation by the companies in question. In September, I went to California to agree protocols on hate speech on the internet with Facebook, Twitter, Google and Microsoft. They were among others that endorsed a series of pledges to introduce better, user-friendly reporting systems and more rapidly respond to allegations of abuse. It is easy to look at the big picture and work with companies to implement frameworks to tackle abuse.
It is, of course, a very different experience to be on the receiving end of anti-Semitic hate and death threats. Recently, an important precedent was established when a man who had sent an anti-Semitic tweet to my parliamentary colleague, Luciana Berger MP, was jailed. While civilised people the world over celebrated the news, it elicited quite the opposite response from Nazi sympathisers and far-right extremists. Taking inspiration from one lunatic, posting articles to an American server, a number of ‘activists’ took to Twitter in an attempt to orchestrate a campaign of hate and vitriol. I was not prepared to let Luciana fight this alone and so raised a point of order at Prime Minister’s Questions and queried whether Twitter might be brought to the Commons to answer for the hate that was being espoused through its platform. Subsequently I, too, became subjected to the ire of fascists and racists on Twitter. If you have ever suffered abuse through the medium of Twitter, you will know how difficult it is to report it and have action taken. Given the work I have already done with the company, it should not come as a surprise that I was able to make contact with the company and request action.
While individuals have sought to be helpful, hateful accounts and messages targeting both Luciana and myself remain online and my experience points to a significant structural failure to curb racist activity on social media. In November, I will visit the Twitter and Facebook European HQs with parliamentary colleagues and will take my concerns to them. This week, I led a debate in the House of Commons about these matters and asked the government and the parliamentary authorities what action they would be taking. I set out a number of practical suggestions. What happens on social media has real world consequences. I expect Twitter to make it easier for any victim of abuse to report hate so the threat of harm is reduced. I want the social media companies to invest extra resources in tackling cyber hate.
Protocols that companies have signed up to, such as the ICCA/ADL accord, should be honoured and there should be more transparency so it is easier to contact people working for these companies. I expect these companies to work proactively to develop algorithms that identify repeat offenders and key words which, when they appear together, are automatically removed. I want racist and anti-Semitic pictures to be taken off these platforms and I want police RIPA requests to be more speedily processed across the UK. Specifically, I want our police and courts to ensure they are at the forefront of the fight against cyber hate. Sex offenders can be barred from social media and from online activity.
I believe that if they show a considered and determined intention to exploit social media networks to harm others, individual perpetrators of harassment and racist abuse should also be subject to such a ban. If they can do it for child exploitation, then they can do it for racism and anti-Semitism. Technology has helped us to create new and important means of communication. I will not allow the racists and anti-Semites of the world to be the primary beneficiaries.
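The automated approach the MP calls for (flagging posts that contain known abusive key words and escalating action against repeat offenders) could be sketched, purely illustratively, along the following lines. This is not any platform's actual system; the keyword list, thresholds and action names are invented for the example.

```python
# Illustrative sketch of keyword-based moderation with repeat-offender
# escalation. All terms and thresholds here are placeholders.

SLUR_KEYWORDS = {"slur1", "slur2"}  # stand-ins for a real blocklist
REPEAT_THRESHOLD = 3                # offences before suspension

offence_counts: dict[str, int] = {}

def review_post(author: str, text: str) -> str:
    """Return 'allow', 'remove', or 'suspend' for a post."""
    words = set(text.lower().split())
    if words & SLUR_KEYWORDS:
        offence_counts[author] = offence_counts.get(author, 0) + 1
        if offence_counts[author] >= REPEAT_THRESHOLD:
            return "suspend"  # repeat offender: account-level action
        return "remove"       # first or second offence: remove the post
    return "allow"
```

In practice, real systems combine such keyword matching with context-aware classifiers and human review, since bare word lists produce both false positives and easy evasions.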
© Jewish News UK
30/10/2014- Local software development company, PDMS are delighted to announce that they have been working with PNLD (Police National Legal Database) in the UK to provide the technology for their latest project - an innovative new web service aimed at helping victims and witnesses of crime. The website, aptly named www.helpforvictims.co.uk was launched on Friday 24th October by Yorkshire’s Police and Crime Commissioner, Mark Burns-Williamson, with an event in Leeds where Baroness Newlove, the UK Government’s Victims’ Commissioner was present to support the launch. Funded by the Ministry of Justice, it is hoped that the website will be rolled out to other police forces across England and Wales.
With the introduction of Help for Victims, individuals in Yorkshire will be able to immediately access all the information contained within the Victims’ Code and the Witness Charter in a question and answer format. The website also includes individual pages dedicated to over 400 local supporting organisations, which can help with concerns such as cyber bullying or hate crime, with trained advisers on hand to give advice. Additionally, the website utilises a self-referral service to local organisations who can provide particular specialist victim and witness services beyond the website.
Chris Gledhill, Managing Director of PDMS commented, “The new website is an integral part of Mr Burns-Williamson’s Police and Crime Plan to ensure victims and witnesses in Yorkshire receive high quality support exactly when they need it. It is the only website of its kind that facilitates all of their local resources, whilst providing one place for clear and concise advice with regards to the criminal justice process and rights from the Victims Code. As well as English, the site has been translated into the five most frequently spoken languages in West Yorkshire - including Gujarati, Urdu, Punjabi, Arabic and Polish, and will shortly be launched in iOS and Android app format too”.
PDMS have been PNLD’s technology partner for over 10 years, helping them provide a range of services to the police and wider criminal justice sector in Yorkshire. Previous technology projects have included the Police National Statistics Database (PNSD), an internet-based solution allowing Police Forces to comparatively analyse and examine statistics at national and local levels, the ‘Ask the Police’ Portal (www.askthe.police.uk) for the Police Service in England and Wales, which is estimated to save forces over £25 million per year, and Apple and Android ‘Ask the Police’ apps, which reached over 30,000 downloads shortly after launch.
© Isle of Man
With political campaigns increasingly being fought on social media, The Telegraph investigates the rise of Britain First, a tiny group with more likes on Facebook than the three main parties
27/10/2014- Started in 2011 by former BNP members Paul Golding and Jim Dowson, Britain First describes itself as “a patriotic political party and street defence organisation”. The group has amassed almost 500,000 likes on Facebook compared to the Conservatives on 293,000, Labour with 190,000 and the Liberal Democrats’ 104,000. This popularity has led to questions about how the group has managed to gain so many likes when its offline activities seem to draw few supporters in comparison. I met the leader of Britain First, former BNP communications chief Paul Golding, and asked him about the kind of posts the group was using to attract likes. One tactic they employ is to post pictures of animal cruelty with text asking people to “Like and share if you demand far harsher penalties for those who mistreat animals”.
“All the top grossing charities in this country are animal charities and there’s a reason for that. We’re just tuning into the nation’s psyche (by) posting stuff like that,” explained Mr Golding. Creating posts which appear to have little to do with the aims of the group and which seem aimed at simply garnering the greatest number of likes is a tactic used by many far right groups, according to Carl Miller, a social media researcher for the think tank Demos. “Far right groups have always wanted to appear more popular and influential than they are, this is one of the ways in which they think they can have influence on mainstream political decisions.” The people who respond to these messages online may not be aware of the kind of activities their likes are being used to support offline. Britain First has run a campaign of what they call ‘Mosque Invasions’. One of these took place at Crayford Mosque, in Kent in July of this year. Filmed by Britain First, the ‘invasion’ consisted of a small group dressed in matching green jackets entering the mosque and demanding to see the Imam.
A gentleman inside the Mosque points out that they are standing on the prayer mat with their shoes on, to which Mr Golding responds “Are you listening?” before demanding that the mosque remove signs denoting separate entrances for men and women outside. The man asks again for the group to leave and eventually convinces them to go after promising to remove the signs. Before leaving, Mr Golding warns him “You’ve got one week to take those signs down otherwise we will.” When challenged about the validity of these tactics, Mr Golding said his organisation would not treat those who followed Islam with respect because, in his opinion, they treated women like second class citizens. “We didn’t make a distinction in the second world war between moderate Nazis and extreme Nazis did we? We just went to war,” he said. Buoyed by the success of their Facebook page, Britain First plans to stand in the Rochester and Strood by-election. How they poll will reveal whether the likes they have accrued online translate into votes offline.
© The Telegraph
30/10/2014- A neo-Nazi website based in the US is behind a co-ordinated campaign of antisemitic abuse targeting Britain's youngest Jewish MP, the JC can reveal. The site provides a user guide to harassing Luciana Berger and has created offensive images to be shared by internet trolls and sent to her via social media sites. It carries a series of "dos and don'ts" for those who intend to abuse Ms Berger. The site advises trolls not to "call for violence, threaten the Jew b---h in any way. Seriously, don't do that". But it goes on to encourage calling her "a Jew, call her a Jew communist, call her a terrorist, call her a filthy Jew b---h. Call her a hook-nosed y-- and a ratfaced k---. "Tell her we do not want her in the UK, we do not want her or any other Jew anywhere in Europe. Tell her to go to Israel and call for her deportation to said Jew state."
Advice on the easiest ways to set up anonymous Twitter accounts and email addresses to limit traceability is also available on the website. It posts hundreds of racist articles targeting black people, Muslims and Jews. Ms Berger received around 400 abusive messages on Twitter last week. Many carried the hashtags and images created by the American site, which urged trolls to join "Operation: Filthy Jew Bitch". The campaign against the Liverpool Wavertree MP was set up last Monday, hours after Merseysider Garron Helm was jailed for sending her abusive messages. Helm's imprisonment was heralded as an "important precedent", but it is now clear that his abuse was merely the tip of an iceberg.
The JC understands the Labour shadow cabinet member has received death threats amid the series of "deeply threatening" messages. She has not commented on the abuse, but friends said she was feeling isolated after the "relentless" storm of offensive tweets. "Luciana is sickened by what's flashing up on her phone on a minute-by-minute basis," said one. "It's hard for her being the focus of something so sinister and global and relentless." A coalition of security groups, police and Twitter have been investigating the source of the messages and have shut down some accounts. The operation against her is being orchestrated by the racist, white nationalist website Daily Stormer. It is run by Andrew Anglin, who has previously been filmed at Berlin's Holocaust memorial mocking victims of the Shoah and questioning the number of Jews who were murdered. The site promotes use of the #HitlerWasRight hashtag.
It provides what is effectively a resource pack of racist images which it advises trolls to use to "flood" Ms Berger's Twitter account. Among the images are those of the MP next to Labour's Jewish leader Ed Miliband with a yellow star with the word "Jude" superimposed on their heads. The call to action concludes by urging abusers to use the hashtags #FilthyJewB---h and #FreeGarronHelm on every tweet targeting Ms Berger. "We will not bow to Jews. We will not be silenced by Jews. We will not allow Jews to destroy the nations that our ancestors spilled blood on to build on this sacred land." When Twitter began to block the tweets late last week, Daily Stormer users began posting Ms Berger's email address on internet forums. A website claiming to be Britain's "number one nationalist newspaper" also highlighted Helm's conviction.
The Daily Bale, run by "nationalist" Joshua Bonehill-Paine, said that as a former director of Labour Friends of Israel, Ms Berger was a supporter of "institutional state child murderers", a "money grabber" and a war criminal. The JC understands police are investigating the comments. Ms Berger's parliamentary colleagues and members of the Jewish community have responded by posting messages of support online. Lord Wood, a Labour peer and adviser to party leader Ed Miliband, wrote on Twitter: "The vile antisemitic abuse of Luciana Berger online only succeeds in uniting everyone in her support and in revulsion against those behind it." Baroness Royall, Labour's leader in the Lords, tweeted: "Luciana Berger is a terrific MP, friend and colleague - a very fine woman. The racist abuse against her must stop. It's abhorrent."
Board of Deputies vice president Jonathan Arkush tweeted: "Racist abuse of Luciana Berger is nauseating and disfigures our country. Perpetrators should expect to go to prison. We value and support her." The case was raised in Parliament on Wednesday, with Commons Speaker John Bercow condemning the abuse as "despicable and beneath contempt".
© The Jewish Chronicle
Internet trolls are among the worst specimens the human race can offer. But they are not a reason to nod through another restriction on personal freedom
By Nick Cohen
26/10/2014- No one has tested my commitment to liberalism so sorely as Edinburgh University’s Feminist Society. I know I should believe in freedom of speech and changing minds with arguments, not punishments, and all the rest of it. And, trust me, I do. Or rather I did, until the moment Edinburgh’s feminist students said they wanted to kick the Socialist Workers party out of their campus. The BNP of the left has had a malign influence on public life far beyond its numbers. In the universities, it has been at the forefront of thuggish demands that there must be “no platform” for fascists or supporters of Israel or, it seems, anyone else it disagrees with. The desire to censor has reached the absurd state where the academic left has banned women’s rights campaigners, who have upset transsexuals, and admirers of Friedrich Nietzsche, who have upset students who had not read him but know he was a bad person.
After this disgraceful record, it is worth enjoying the plight of the SWP for hours – maybe weeks. The censor faces censorship. The fanatics who have screamed down so many others could be screamed down themselves. No one can deny that Edinburgh’s women have good reason to go after the Trots. Like priests in the Catholic church and celebrities in light entertainment, the leaders of a Marxist-Leninist party are men at the top of a hierarchy that demands obedience. Last year, a succession of women alleged that senior figures in the party had demanded their sexual compliance. Rather than tell them to take their cases to the hated “capitalist” courts, the SWP set up its own tribunals. The alleged victims said it subjected them to leering questions worthy of the most misogynist judge about their sex lives and alcohol consumption, then duly “acquitted” the “accused”.
Eleanor Brayne-Whyatt of the Edinburgh Feminist Society has a point when she says that universities will show they do not tolerate “rape apologism and victim blaming” if they order the SWP to leave. Even if you want to differ, you may find the task of contradicting her beyond you. We have reached a state where arguing that a speaker has the right to free speech is the same as agreeing with his or her arguments. If you say that racist or sexist views should not be banned, you are a racist or rape apologist yourself. Your opponents then go further and accuse you of ignoring the “offence” and “pain” that the victims of racism and sexism have suffered and turn you into an abuser as well. With remarkable speed this double bind knots itself around its targets. Defend a repellent man’s right to speak and you become that repellent man and his victims, real or imagined, become your victims too. Small wonder so many keep quiet when they should speak up.
Observer readers may not care, as most modern prohibitions on speech are – to put it crudely – instances of leftwing censorship of prejudiced views. If so, you should notice how easy the right finds it to march in step alongside you. Chris Grayling, a Tory bully boy, announced last week that he would quadruple the maximum jail sentence for internet trolls who spread “venom” on social media or, rather, he fed an old story from March to a naive and punitive media. Even though internet trolls are among the worst specimens the human race can offer up for inspection, there are many reasons not to nod through yet another hardline restriction of personal freedom. Interest groups like nothing better than exploiting the law. We’ve already seen supporters of the McCanns, who were understandably aggrieved by the abuse the family received online, turn into troll catchers. They collected a dossier and passed it to Sky News and the police. The hunters unmasked one of the McCanns’ tormentors as Brenda Leyland, who took her own life within hours of her exposure, a reminder that many trolls are mentally ill and need treatment rather than prison.
Meanwhile, as the free speech campaigners at English Pen reminded me, the white right and far right have learned from the left and can be as politically correct. Their most recent success was to demand that the police prosecute one Azhar Ahmed from Dewsbury. He admitted posting a Facebook message two days after the killing of six British servicemen in Afghanistan: “All soldiers should die and go to hell,” it read. A disgusting statement, no doubt, but put in different terms, the belief that British troops should not be in “Muslim lands” is a political sentiment, not a criminal act. The court nevertheless found him guilty of the criminal offence of making a “grossly offensive communication”. The prosecutors did not say that he was inciting violence against British troops, simply that he was offensive. Two can play at that game. The Islamist religious right can respond in kind and demand prosecutions for Islamophobia, and before you know it we will be off on a cycle of competitive grievance.
Only last week, the authorities recalled Tommy Robinson, the former leader of the extreme right English Defence League, to prison – apparently for tweeting that he planned to criticise the police. I carry no brief for the man, but his detention feels all wrong. It would be far better if social media sites and newspapers stopped inciting people’s ugliest instincts by allowing them to post anonymously. It would be better still if politicians reformed a law that is alarmingly vague. The state can charge citizens for words that are “grossly offensive,” as Azhar Ahmed found. No government should be allowed to get away with such a catch-all charge. Every sentiment beyond the blandest notions “offends” someone. “Offensive” is a subjective term, which is wide open to political manipulation by loud and vociferous interest groups and the government of the day.
The only respectable reason for banning organisations or punishing individuals is if they incite violence against others. Unless feminists can prove that the SWP promotes rape as a matter of party policy – and I don’t think they can – they remain free to despise it, harangue it and oppose and expose its many stinking hypocrisies, but they have no moral right to order it off campuses. I know I am going to regret writing that last sentence. Indeed, I am regretting it already. But it remains the case that a country where it’s a crime to be offensive is a country where everyone can try to ban everyone else.
© Comment is free - The Guardian
Editors' Note: This story includes references to hate speech and other language that readers may find offensive.
26/10/2014- In September, a group of black women penned an impassioned letter to the people who run Reddit entitled: "We have a racist user problem and reddit won't take action."
Posted by the username of pro_creator, who serves as a moderator on the subreddit /r/blackladies, it was cosigned by the moderators of more than 60 other subreddits. "Since this community was created, individuals have been invading this space to post hateful, racist messages and links to racist content, which are visible until a moderator individually removes the content and manually bans the user account," the message said. "reddit admins have explained to us that as long as users are not breaking sitewide rules, they will take no action," the letter added. Therein lies the issue. Reddit has a hate speech problem, but more than that, Reddit has a Reddit problem.
A persistent, organized and particularly hateful strain of racism has emerged on the site. Enabled by Reddit's system and permitted thanks to its fervent stance against any censorship, it has proven capable of overwhelming the site's volunteer moderators and rendering entire subreddits unusable. Moderators have pled with Reddit for help, but little has come. As the letter from /r/blackladies mentions, the bulk of what racists perpetrate on the site is within Reddit's few rules. And the site's CEO has made clear, even through criticism surrounding high-profile events like the celebrity nude leak, that those rules are not going to change.
This has put the front page of the Internet in a tenuous position. Having just completed a funding round, the site is poised to begin monetizing. That will mean convincing advertisers to put ads next to its user-generated content. It is a situation in which an unstoppable force meets an immovable object. Hate speech on Reddit is proving uncontainable while Reddit refuses to change. The situation has left moderators — essential cogs in the site's operation — as the site's last line of defense against some of the darkest parts of the Internet. It is a battle they are losing.
Down with the upvotes
It's just not that hard to manipulate Reddit. Motivated racists have proven capable of affecting everyone from smaller groups like /r/blackladies to huge subreddits like /r/news, which has more than 3.9 million subscribers. Reddit relies on a democratic “upvote” and “downvote” system that surfaces or buries content and comments. It’s a system that can be gamed by motivated groups. Allied redditors can vote en masse to push content and comments to the top of subreddits, a move known as "brigading." This is frowned upon — but it’s not technically against the rules. The site also allows users to quickly create anonymous accounts. Bands of anonymous, racist users can completely overrun smaller subreddits, which is what happened to /r/blackgirls, a predecessor to /r/blackladies.
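The mechanics described here (mass same-direction voting from throwaway accounts) suggest an obvious detection heuristic: flag a post when an unusually dense burst of upvotes arrives from very new accounts. The sketch below is purely illustrative and is not Reddit's actual anti-manipulation system; all thresholds are invented assumptions.

```python
# Illustrative brigading heuristic: does a post receive a `burst`-sized
# run of upvotes from freshly created accounts inside a short window?
# Vote records are (timestamp_seconds, direction, account_age_days).

def looks_brigaded(votes, window_secs=600, burst=50, max_account_age_days=2):
    """Return True if `burst` upvotes from new accounts land within `window_secs`."""
    new_account_upvotes = sorted(
        t for t, direction, age in votes
        if direction == "up" and age <= max_account_age_days
    )
    # slide over every possible run of `burst` consecutive votes
    for i in range(len(new_account_upvotes) - burst + 1):
        if new_account_upvotes[i + burst - 1] - new_account_upvotes[i] <= window_secs:
            return True
    return False
```

A real system would weigh many more signals (IP overlap, voting history, cross-subreddit coordination), but the window-plus-account-age idea captures why throwaway accounts make brigades both easy to mount and, in principle, detectable.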
“Our sub was created after a previous sub we'd frequented was overrun [by] hate groups,” pro_creator said in an email to Mashable. The user requested anonymity out of fear of “doxxing,” or the public disclosure of personal information online. The abuse “would come in waves as they grew upset with being rejected and banned.” Racist redditors had previously congregated at /r/n*ggers, a subreddit that was eventually banned for its open attempts to brigade other subreddits, including /r/blackgirls. A year and a half later, /r/blackladies, which bills itself as "designed specifically to be a safe space for black ladies on Reddit," is dealing with the same problem. Moderators are growing weary.
In addition to the upvote and downvote system, moderators, known as “mods,” are also a key part of Reddit. These unpaid volunteers regulate each subreddit, some of which have millions of subscribers. They have the power to block comments and ban users from their particular parts of the site. In the face of the types of organized attacks that hate groups have mounted on subreddits large and small, those tools are woefully inadequate, moderators say. Tyler Lawrence, a moderator of a variety of subreddits including /r/news, said that consistent and coordinated attacks have caused him to consider drastic action. "This has become such a huge issue in /r/news alone that I've at multiple points considered outright closing comment sections to prevent hateful brigading from racist communities within Reddit," Lawrence told Mashable in an email.
Moderators’ pleas have almost entirely fallen on deaf ears. Reddit’s commitment to remaining as open as possible is well documented. Most recently, Reddit CEO Yishan Wong penned a defense of the site’s lack of action concerning its role in disseminating leaked celebrity photos. Moderators who spoke with Mashable are fatalistic about the site’s future. If Reddit was built in part by the darker corners of the site, why would it change now? “There's no desire to address the various -isms that have grown to dominate the site, so it doesn't seem like it will be resolved any time soon. Which is unfortunate, because the attitudes displayed by a good number of Reddit's target demographic are firmly on the wrong side of history,” pro_creator wrote. “The site is positioning itself as a playground for racists and misogynists. And if racism and sexism are paying the bills, why would they move against it?” Reddit declined to respond to questions on this topic.
Reddit at its core is a group of communities. The site's structure and format — relying on the voting system to elevate or bury content and comments — made it the ideal place for users with any number of interests to connect. Reddit now hosts thousands of sections, known as subreddits, and served more than 170 million unique users last month. Censorship is the site's mortal sin, even when being applied to the most odious content. This laissez faire ideology is an ingrained part of the platform, lending it a certain legitimacy. All are welcome and governed by the same rules. This led to the site playing host to a certain amount of racism and hate speech. Racism on the Internet preceded Reddit, and it will exist if the site ever goes away. But there was a relative peace among the various groups, which operated under something of an unspoken detente. You stay in your corner, we stay in ours.
That is, until the 2012 shooting of Trayvon Martin by George Zimmerman. "The Zimmerman trial really stands out in my mind. It served as a rally point for racists everywhere," said Logan Hanks, a former Reddit programmer, in an email to Mashable. "This manifested on Reddit as a lot of new racist memes popping up here and there, drama around racists squatting on the 'TrayvonMartin' subreddit to mock the African-American community, and an uptick in bullying directed at minority subreddits," he said. "It became a prime opportunity to mock and harass minorities on Reddit." Since then, a battle has raged between Reddit's corps of volunteer moderators and racist activists. "After the Zimmerman trial, they were briefly dispersed, but never entirely gone, and this year they've returned as strong and bold as ever," Hanks said.
Racism is nothing new to the Internet, but rarely has it been so organized and on a platform that can quickly put it in front of millions of users. Numerous moderators who spoke with Mashable for this story say that hate groups are coordinating to disrupt large, mainstream sections of the site and occupy others. Moderators can delete posts and comments that violate subreddit rules. Some have taken screenshots of attacks in hopes of providing evidence to admins — Reddit employees that help run the site — and spurring them to take action. Examples can be found here, here and here. There's also evidence of plans to take these efforts to Twitter. Lawrence, the moderator, sent the following screenshot as an example of the type of action that he has had to deal with on a near-daily basis.
Many subreddits have their own rules, enforced by moderators. It is up to them to regulate content and comments with limited tools. They can block users and delete comments, but these efforts are sometimes not enough. While Reddit has 65 employees, it relies on thousands of unpaid mods. "The tools available to mods mainly offer limited reactive approaches, so they have to monitor submissions 24 hours a day to remove slurs and ban each new account created specifically to bully them," said Hanks, who was known to be a particularly active admin during his time at Reddit. "Whenever they were hit by a particularly hard deluge they would escalate to us, and sometimes we were able to stem the tide briefly," Hanks said. "If things get too bad, they have to close their subreddit until the bullies and trolls forget about them and move on."
It's also a strain on the mods. Ryan Perkins, a moderator of several subreddits, said in an email that he had lost count of the number of racist commenters he has had to ban. "This makes moderating any reasonably large subreddit with an eye towards being inclusive actually quite a lot of very emotionally and mentally taxing work," he said. Racism is only one type of hate speech on Reddit. The site has seen similar battles surrounding misogyny and more recently the GamerGate fiasco. Reddit has taken some action against organized hate groups. Banning the original hub for anti-black hate speech was a big step, but one that lacked much impact. Banning either a subreddit or a user is among the most aggressive moves that Reddit administrators can take. It also barely changes anything. New subreddits are easily formed, and new usernames created.
The moderators Mashable spoke with pointed to “the Chimpire,” a group of subreddits that had become the new hub for hate speech on Reddit. Two moderators associated with the Chimpire told Mashable through Reddit’s messaging system that brigading was forbidden in their subreddits and denied organized attempts at vote manipulation.
No help in sight
Successful platforms that began with a spirit of openness have learned to quickly change as they attempted to turn into successful businesses. Facebook and Twitter decided, whether as a moral or a business decision, that freedom of expression on their platforms has limits. Tumblr cracked down on porn. Reddit recently announced a $50 million round of funding. It has been eight years since Condé Nast parent company Advance Publications bought Reddit, and it’s no secret that the site is trying to figure out how to monetize. The recent celebrity leak just about coincided with news of the fundraising round, putting the site in an awkward position. In this case, Reddit took action. A subreddit called /r/TheFappening that had been created to host the leaked pictures was eventually banned. That move drew no shortage of criticism within Reddit for a perceived double standard.
"The core in this case is the same as the core in the celebrity hacking scandal, except in that instance, they only removed the subreddits once they received significant media coverage and legal pressure," said Lawrence, the /r/news moderator. Reddit is walking a fine line. The site is trying to be tough on content that could harm its prospects while also catering to its users that demand Reddit retain its anything-goes foundation. In the calculus between Reddit’s ideals, its business, its users and its moderators, the site seems to have decided that it can most afford to lean on the moderators. This has left them frustrated and angry, but still redditors for now. “We are here, we do not want to be hidden,” the letter on /r/blackladies concluded, “and we do not want to be pushed away.”
Facebook’s new chat app hopes to resurrect the glory days of Microsoft Chat, bringing message boards to the iPhone
24/10/2014- Facebook has released a new iPhone app, Rooms, that allows users to create near-anonymous chat rooms like those from the mid-1990s internet relay chat (IRC) systems. Rooms does not require a Facebook account to use – only an email address to log back in if switching between devices. The app connects users in a pseudo-anonymous fashion to chat about almost anything, away from the main Facebook experience, and is almost a recreation of IRC – but with Facebook’s terms and conditions applied.
Developed in 1988, IRC allowed users to connect anonymously across the internet and exchange simple text-based messages. Unlike message boards, IRC did not rely on a website and browser; instead users installed an app on their computers, such as MS Chat, and connected directly to a server. Later, file transfers were added, creating a direct connection between users that marked the beginnings of peer-to-peer filesharing. To join a Room, users scan a 2D barcode, which can be shared publicly or privately to invite only a small selection of people to chat. Moderators of each room can filter content, require approval before posting, and ban anyone, blocking their device from rejoining. Unlike the original message boards, it’s not “anything goes”; Facebook’s community standards guidelines will apply, banning abusive behaviour and the sharing of certain types of material like child abuse images.
Rooms can also have an age rating, although bypassing the age gate is as simple as tapping the “Yes, I’m over 18” button; age is not verified. The app, which is presently iPhone-only, is the latest from Facebook’s Creative Labs, responsible for Facebook’s Paper and Slingshot apps among others, and marks another Facebook app divorced from the core Facebook social network. Josh Miller, former chief executive of the discussion site Branch and now a Facebook product manager, acknowledged the debt to older text-based chat systems, saying Rooms was “inspired by both the ethos of these early web communities and the capabilities of modern smartphones.” In a blog post, Miller said: “One of the magical things about the early days of the web was connecting to people who you would never encounter otherwise in your daily life … Forums, message boards and chatrooms were meeting places for people who didn’t necessarily share geographies or social connections, but had something in common.”
‘Be whoever we want to be’
Rooms attempts to replicate that scenario, where users can chat about anything using a distinct username for each room. The purpose isn’t to be anonymous, but users are not limited to their real name – they can call themselves whatever they would like. “One of the things our team loves most about the internet is its potential to let us be whoever we want to be,” said Miller, whose stance over real names in Rooms seems very different from that of chief executive Mark Zuckerberg on Facebook. “It doesn’t matter where you live, what you look like or how old you are – all of us are the same size and shape online. “That’s why in Rooms you can be “Wonder Woman” – or whatever name makes you feel most comfortable and proud,” Miller said.
Each Room can contain text, images and videos, with the topic determined by the room creator. The service brings 1990s chat rooms into the 21st century with the ability to add cover photos, change the colour scheme and look of buttons in the room, create pinned messages and set whether content shared in the room can be linked to from the outside world.
Start up and make things
The app was developed by the London branch of Facebook’s Creative Labs, which was set up to enable a section of Facebook to operate like a technology startup, taking risks and trying things that the social network could not. Its primary focus has been smaller, single-purpose apps, fitting in with Facebook’s push to unbundle its apps and services from the main “big blue” Facebook app, and increasing the pace of development and iteration within these separate apps. The free app is iPhone-only – currently rated two stars out of five on the App Store – although an Android Rooms app is planned for early 2015.
© The Guardian
23/10/2014- A little over a year after a French court forced Twitter to remove some anti-Semitic content, experts say the ruling has had a ripple effect, leading other Internet companies to act more aggressively against hate speech in an effort to avoid lawsuits. The 2013 ruling by the Paris Court of Appeals settled a lawsuit brought the year before by the Union of Jewish Students of France over the hashtag #UnBonJuif, which means “a good Jew” and which was used to index thousands of anti-Semitic comments that violated France’s law against hate speech. Since then, YouTube has permanently banned videos posted by Dieudonne, a French comedian with 10 convictions for inciting racial hatred against Jews. And in February, Facebook removed the page of French Holocaust denier Alain Soral for “repeatedly posting things that don’t comply with the Facebook terms,” according to the company. Soral’s page had drawn many complaints in previous years but was only taken down this year.
“Big companies don’t want to be sued,” said Konstantinos Komaitis, a former academic and current policy adviser at the Internet Society, an international organization that encourages governments to ensure access and sustainable use of the Internet. “So after the ruling in France, we are seeing an inclination by Internet service providers like Google, YouTube, Facebook to try and adjust their terms of service — their own internal jurisprudence — to make sure they comply with national laws.” The change comes amid a string of heavy sentences handed down by European courts against individuals who used online platforms to incite racism or violence.
On Monday, a British court sentenced one such offender to four weeks in jail for tweeting “Hitler was right” to a Jewish lawmaker. Last week, a court in Geneva sentenced a man to five months in jail for posting texts that deny the Holocaust. And in April, a French court sentenced two men to five months in jail for posting an anti-Semitic video. “The stiffer sentences owe partly to a realization by judges of the dangers posed by online hatred, also in light of cyber-jihadism and how it affected people like Mohammed Merah,” said Christophe Goossens, the legal adviser of the Belgian League against Anti-Semitism, referring to the killer of four Jews at a Jewish school in Toulouse in 2012.
In the Twitter case, the company argued that as an American firm it was protected by the First Amendment. But the court rejected the argument and forced Twitter to remove some of the comments and identify some of the authors. It also required the company to set up a system for flagging and ultimately removing comments that violate hate speech laws. Twitter responded by overhauling its terms of service to facilitate adherence to European law, Twitter’s head of global safety outreach and public policy, Patricia Cartes Andres, revealed Monday at a conference in Brussels organized by the International Network Against Cyber Hate, or INACH. “The rules have been changed in a way that allows us to take down more content when groups are being targeted,” Cartes Andres told JTA. Before the lawsuit, she added, “if you didn’t target any one person, you could have gotten away with it.”
The change went into effect five months ago, but Twitter “wanted to be very quiet about it because there will be other communities, like the freedom of speech community, that will be quite upset about it because they would view it as censorship,” Cartes Andres said. Suzette Bronkhorst, the secretary of INACH, said Twitter’s adjusted policies are part of a “change in attitude” by online service providers since 2013. “Before the trial, Twitter gave Europe the middle finger,” Bronkhorst said. “But they realized that if they want to work in Europe, they need to keep European laws, and others are coming to the same realization.”
According to Komaitis, the Twitter case was built on a landmark court ruling in 2000 that forced the search engine Yahoo! to ban the sale of Nazi memorabilia. But the 2013 ruling “went much further,” he said, “demonstrating the increasing pressure on providers to adhere to national laws, unmask offenders and set up flagging mechanisms.” Still, the INACH conference showed that big gaps remain between the practices sought by European anti-racism activists and those now being implemented by the tech companies.
One area of contention is Holocaust denial, which is illegal in many European countries but which several American companies, reflecting the broader free speech protections prevalent in the United States, are refusing to censor. Delphine Reyre, Facebook’s director of policy, said at the conference that the company believes users should be allowed to debate the subject. “Counter speech is a powerful tool that we lose with censorship,” she said. Cartes Andres cited the example of the hashtag #PutosJudios, Spanish for “Jewish whores,” which in May drew thousands of comments after a Spanish basketball team lost to its Israeli rival. More than 90 percent of the comments were “positive statements that attacked those who used the offensive term,” she said. Some of the comments are the subject of an ongoing police investigation in Spain launched after a complaint filed by 11 Jewish groups.
But Mark Gardner of Britain’s Community Security Trust wasn’t buying it. “There’s no counter-speech to Holocaust denial,” Gardner said at the conference. “I’m not going to send Holocaust survivors to debate the existence of Auschwitz online. That’s ridiculous.”
© JTA News.
Ill-will, incompetence or indifference. In which category does the inactivity of the Czech Police with respect to racist threats and verbal attacks belong?
22/10/2014- The failures of the criminal justice authorities result in making it possible for incitement to racism and threats to be made with impunity in the virtual realm, especially on social networking sites. Zdeněk Ryšavý, director of the ROMEA organization, recently became the target of such threats. More and more Czech citizens are personally experiencing this every day. People are becoming the victims of online threats because of their alternative opinions, religion, skin color, or - in the case of the director of ROMEA - because they refuse to agree with incitements to racism or to participate in disseminating xenophobic opinions.
When people fear for their lives, it is natural for them to turn to the police for help and protection, as the police motto goes. However, after experiencing bureaucratic obstacles and the time it takes to write up various documents and requests or make official statements, many realize the futility of seeking such police assistance; while rank and file detectives in the police departments do their best to help, their dependency on the often absurd instructions given them by police command ties their hands.
Incitement to murder
On 17 February a Czech-language Facebook page was launched with hateful content and an unambiguous name: "We Demand the Public Execution of the Executive Director of Romea, o.s., Zdeněk Ryšavý" ("Požadujeme veřejnou popravu výkonného ředitele Romea o.s. Zdeňka Ryšavého"). In addition to other texts inciting violence against a particular group, on 28 February the following discussion post also turned up there: "Not only will Zdeněk Ryšavý and his daughter have to pay with their blood, but so will Tomáš Bystrý, Jarmila Balážová and the dubious artist and perverted homosexual David Tišet" [sic, the correct spelling is Tišer - editors]. A Facebook user appearing under the name Gabriel Zamrazil then posted: "I totally agree. He deserves death.... Let me do it."
This commentary indicated a readiness to personally commit a crime or to otherwise ensure its realization. Ryšavý reported the page to Facebook as hateful and demanded that it be removed. "We immediately reported the page and called on our fans to do the same," Ryšavý told news server Romea.cz. Facebook sent a response within moments. "We have checked the page you reported as containing hateful language or symbols and found it does not violate our Community Principles," read the answer. This is the automatic reply that Facebook sends out within just a few minutes in such cases.
Ryšavý, afraid for his own life and for the security of his family, filed a criminal report on 5 March about the facts indicating that the making of criminal threats (Section 353 Act No. 40/2009, Coll.), incitement to commit a crime (Section 364) and approval of a crime (Section 365) had all been perpetrated. The presumption also exists that the people who supported these Facebook threats by clicking the "like" button (another 27 people) have committed the felony of approving of a crime. The police response that followed could have been a model for an absurd tragicomedy about how the rule of law works, one that should be screened in police academies as an example of how police officers and the state prosecutor are definitely not supposed to proceed when fulfilling their obligations. Ultimately, what helped the case was publicizing it; most probably, when the perpetrator learned from the media that a criminal investigation was underway, he got scared and erased the Facebook page himself.
Lost in translation
"The unwillingness of the Police of the Czech Republic to pursue serious verbal crimes like this is alarming," said Klára Kalibová, a lawyer who directs the In IUSTITIA organization, which participated in writing up the criminal report. The correct URL address of the Facebook page was included in that communication. Police had to first have the text of the report translated into English, and it then underwent approval according to a so-called Telecommunications Service Monitoring protocol, in accordance with the Czech Criminal Code, after which it was sent by the Police Presidium to the country at issue. In the first phase, that was Ireland, which is where Facebook has its European branch.
Not only did that entire procedure take several months, but the Czech Police sent the wrong URL address to Ireland. "Understandably, they wrote back from Ireland that the URL address was wrong and needed correction," Kalibová comments, adding, "but [the Czech Police] didn't correct it - instead they issued an absurd decision that was not based on the truth, claiming that they had not managed to find the perpetrator and that the case was being postponed." After some time, there was nothing left to do but to resubmit the motion to the police, again with the correct URL address. The police were repeatedly called upon to communicate with Facebook.
In the interim, however, an internal methodological instruction for the Police of the Czech Republic took effect according to which officers must first consult everything with the state prosecutor, who will decide on how to proceed. This, of course, meant that the excruciating process of the criminal investigation was far from over. "One state prosecutor, whom I will not name, but who is presented as a leading specialist in extremism, by the way, has already shelved several cases of verbal crimes, saying they are allegedly not serious and are covered by freedom of speech protections," Kalibová said. Those cases have involved, for example, right-wing extremists from the National Resistance, or Patrik Banga's criminal report filed against a journalist who invented and published a "news" story about Romani people allegedly robbing a collection that had been taken up for flood victims. "In Zdeněk Ryšavý's case, a police officer consulted it with [the state prosecutor] and she decided not to file charges. She allegedly insisted in her decision that in her experience, the Americans would not pursue this," Kalibová said.
The excuse of freedom of speech in the USA
What is absurd about the state prosecutor's approach in this context is the fact that she has argued in her decision that freedom of speech is extensive in American legislative practice. The state prosecutor's interpretation of that information is that US law tolerates these kinds of threats. That claim is dubious to say the least, because death threats against a specific individual are prosecutable in the USA, just as they are in the Czech Republic. It is mainly dubious in another sense: the state prosecutor either does not know or does not want to know that she should have been referring this case not to the USA, but to Ireland, where EU legislation applies.
She is, therefore, involuntarily participating in creating de facto impunity for verbal crimes committed in a racist context in the Czech Republic. What is paradoxical is that according to our information, the Irish branch of Facebook responsible for Central Europe is friendly and helpful when it comes to intervening against such excesses, but of course they need the correct information to do so, and the Police of the Czech Republic, and indirectly the state prosecutor, basically were incapable of supplying it. "I was in contact with Irish Facebook's head of public relations for Central Europe, who said that if the police can prove this to her, she would cooperate with them. She told me: Have them write it up properly and we will be happy to oblige," said Kalibová, "but the Czech police officers, of course, did not respond to that."
Calls for murder illegal in US too
Kalibová believes this points to a serious systemic problem in addressing hate crime in a cybercrime context, because Europe cannot be toothless in its cooperation with the United States, and the clarification of specific crimes should not have to depend upon whether Czech police officers speak English or not. The state prosecutor's key argument, that the case of Zdeněk Ryšavý falls under the protection of freedom of speech as it is interpreted in the United States, is doubly moot. Even if the case were to fall under American legislation (and not Irish law, as it actually does), any call for the specific murder of a specific person is clearly illegal in all of these systems. "This is extremely serious misconduct by the criminal justice authorities and it is endangering the security of a specific person and his family," Kalibová stresses; she is considering using her final enforceable procedural tool, that of a complaint to the supervising Prosecutor's Office, which could order the state attorney to proceed in accordance with the Criminal Code.
Grist to the mill of the xenophobes
Giving the excuse that threats to publicly execute a Czech citizen and his family cannot be prosecuted by referring to the practically unlimited freedom of speech in the United States of America is unacceptable for two reasons: Such an excuse not only contravenes the facts, it mainly contributes to a false legal analysis and reinforces Czech racists and other extremists in the illusion that their behavior is tolerated by society and the state. This is particularly dangerous in a situation where blogs, the media, and social networks are abuzz with incitements to hatred.
Such lack of action further disseminates the feeling that calls for violence against ethnic minorities, or against those whose opinions differ from ours, are generally tolerated. In this context, the futile, long-term, strenuous efforts of this author to contact those responsible at the Police of the Czech Republic for a statement on this issue are symptomatic of a bigger problem; if the Czech Police provide us a statement after this piece is published, we will be glad to publish it.
21/10/2014- Destructive Creations, the Polish studio behind the upcoming game Hatred, has been accused of neo-Nazi sympathies and anti-Islamic xenophobia because of the organizations and people that its members "like" on Facebook. Their game is, in their own words, over-the-top violent and purposefully insensitive to "social justice" themes. Yesterday, CEO Jaroslaw Zielinski spoke with Polygon about his feelings over the accusations and promised more clarification. Today, individual members of the development team made personal statements. In a blog post titled "The First Storm Resisted," Zielinski and others formally responded to the accusations made against them. "My great-grand father was killed by Gestapo," writes Zielinski. "Some members of my family were fighting against nazi occupation in the Polish underground army called 'Armia Krajowa'. My forefathers suffered greatly because of totalitarian regimes, so who the fuck would I be if I'd truly support any of Nazi activists?
"The hateful title I'm working on (where virtual character hates virtual characters), doesn't have any connection to what I truly believe and think, there is a real-life outside, you know? Maybe you should try it? I will never ever again respond to any of those accusations, this is my ultimate statement." "Nazi Germany is responsible for killing 6 million people in Poland," writes Marcin Kazmierczak. "Half of them were Jews, half of them Polish. My family suffered many losses during the World War II. Anybody accusing me for being a follower of said ideology should really think twice before doing so and consider reading some books on the topic. ... Values like pluralism, democratic opposition and the right to manifest one's own views shouldn’t be called ‘the lack of tolerance’. Finally regarding my attitude towards gays let me just say that I have a few gay friends that I deeply respect as people and have no problem with their sexual orientation."
"In response to repeated allegations against me," writes Jakub Stychno, "I’d like to state that I’m opposed to all totalitarian ideologies. The t-shirt that I’m wearing on our team picture refers to National Polish Army troops, that in 1945 refused to lay down arms and continue fighting against the new invader, to regain independent Poland. They did so because they’ve rightly anticipated Soviet security service repressions against Poland's already demilitarized army. I would also like to emphasize that until the year 1945 those troops were actively fighting against the Third Reich occupation. Those soldiers are Polish national heroes and as such deserve commemoration." CEO Jaroslaw Zielinski also states that while "we knew that our reveal will cause some shitstorm" his team did not expect such a wide or vocal reaction.
"Many can call us 'attention whores,'" Zielinski continued. "Well, we try to get world's attention to our product and as you can see — it worked perfectly. ... We wish to thank all of our haters and all upset press for a great marketing campaign they've done for us. "A week ago, we were a little company from the middle of nowhere, just some guys making some game. Today everyone heard about 'Hatred' and us. All thanks goes to those who were trying to harm us (with no desired effect, what a pity)."
Edit: The original version of this story listed the company name as Destructive Games. It is in fact Destructive Creations. Additionally, we've cleaned up the quoted sections of copy for readability.
By Katie Engelhart
21/10/2014- Last Thursday, at a public hearing about the “right to be forgotten” in central London, Google Executive Chairman Eric Schmidt had a bit of trouble pronouncing the names of the eminent Europeans with whom he shared a stage. But he tried his best. And he muddled his way through. It’s an apt metaphor for the way that one of the world's most powerful companies has been struggling in the wake of a ruling by the European Court of Justice (ECJ) in May on the so-called “right to be forgotten.” The court ruled that Google (and other search engines) must allow individuals to erase certain results that appear on web searches of their names—when the linked-to information is “inadequate, irrelevant, or excessive.” The court’s reasoning: Normal people have a right to be forgotten online. Reaction to the ruling bordered on hysterical. Depending on your view, the ECJ has either safeguarded individual privacy or heralded the slow death of the free and fair Internet in Europe. MailOnline publisher Martin Clark said that de-linking was “the equivalent of going into libraries and burning books you don’t like.”
When a site is “forgotten” on Google, it’s not actually deleted at the source, or even erased from all internet searches—but it does disappear from searches of the individual requester’s name. Now, if you Google search for a name in Europe, a notification appears at the bottom of the search page: “Some results may have been removed under data protection law in Europe.” But the court was vague in its definition of “inadequate, irrelevant, or excessive” data. As a result, Google has been reluctantly cast in the role of pan-European judge and jury of the internet's collective memory, responsible for deciding (behind closed doors) what constitutes the continent’s public interest. Shortly after the May ruling, an evidently pissed-off Schmidt cobbled together an “Advisory Council on the Right to be Forgotten,” which includes an Oxford ethics philosopher, a former German justice minister, and Wikipedia boss Jimmy Wales. Google followed that up by launching a road trip. Last Thursday’s event was one of seven town-hall style meetings being held by the company across Europe.
Some have dismissed the tour as a PR stunt, and suggested that the company is engaging the public only to show up the clusterfuck that has been born of the ECJ ruling. If that’s true, well, mission accomplished. On Thursday, I went to one of Google’s public meetings in London to find out who does and does not have the right to be forgotten on the internet. Four hours later, I left feeling sure of one thing: Implementing the ECJ’s decision is going to be really, really hard. Schmidt began the day by discussing some more clear-cut cases. A victim of physical violence wanted references to the assault removed from web searches of his/her name. Google said OK. A pedophile wanted recent data about his conviction de-linked. Google said nuh-uh. So far, so simple. But in other cases, lawyers have wavered. Google has struggled with the case of an adult who wanted reference to a teenage drunk driving incident de-linked and the case of a former member of a far-right party who no longer holds extreme political views.
In deciding whether or not to de-link, Google must consider how “relevant” online data is, taking into account factors like “time passed,” the “purpose” of the information, and the role that the data subject plays “in public life.” Google must balance “sensitivity for the person’s private life” with “the public interest.” And it must determine if linked-to data is “inadequate” or “excessive.” But what does all that even mean, at a practical level? What terrible things can you do and then have expunged from the internet’s collective memory forever? Let’s start with a fairly likely scenario. You’ve been recorded or photographed doing something that the internet deems hilarious at your expense—enthusiastically making out with someone who looks really bored, dancing really energetically and embarrassingly, that kind of thing. If the web has tied that meme to your name, do you have the right to hide from the digital public?
Gabrielle Guillemin of the nonprofit Article 19 suggested that embarrassment is not a good enough reason to request de-linking. But Google Advisory Council member Peggy Valcke, a law professor in Belgium, suggested that it could be. And anyway, argued Oxford University philosopher Luciano Floridi, another Advisory Council member, “Embarrassment comes in degrees. Social embarrassment becomes social stigma becomes losing your job… Do we have a way of understanding when embarrassment, discomfort and unpleasantness become harm?” Does the calculation change when the data involves a child? Or an otherwise vulnerable person? It didn’t really get cleared up. What if the source of the embarrassing material is you? Say you posted an emo selfie on MySpace ages ago and now it’s ruining your nascent cage-fighting career. Schmidt conceded that things get tricky when requesters themselves published the data that they now want de-linked. Recently, a media professional in Britain asked Google to erase links to “embarrassing content” that he himself posted online. Google said no.
What about if you’ve done something more serious? Say you’d rather everybody didn’t know about all that embezzlement you got caught doing at your last job. Panellists agreed that de-linking information on things like criminal convictions would depend, in part, on whether the requester is a public figure. But how do we define a “public figure”? David Jordan, the BBC’s director of editorial policy and standards, introduced the hypothetical case of a voluntary school board member—a guy who's "famous" for evaluating the quality of school lunches. Is this man a public figure? And so, does all his data belong in the public domain? Evan Harris, a former member of UK Parliament and now associate director of the Hacked Off campaign, suggested that people might ask for information about prior fraud to be de-linked, then later run for public office. By extension, is everyone’s data in the public interest on the grounds that we’re all potential future elected officials or important people? Again, it wasn’t made clear, but to stand a better chance in that election, you should request that Google forgets before printing your campaign posters.
Already, many de-link requests have come from criminals. Schmidt gave the real-life example of a convicted criminal who served his time and now wants reference to the conviction erased from search results. Should old convictions be “forgotten”? How old is old enough? This was also left—you guessed it—unclear. Increasingly, advocates on both sides of the line are joining together to issue a common plea that these critical decisions be made in European courtrooms rather than in Google boardrooms. They also insist that Google’s decisions be subject to external review. Currently, there is no appeals process for content publishers who disagree with a Google de-linking decision. That may change, and soon. European regulators are already at work, beefing up the continent’s data protection policy, with an eye to codifying the right to be forgotten.
Philosophy aside, Google is faced with a logistical nightmare. The company has reportedly hired dozens of lawyers and paralegals to deal with de-link requests on a case-by-case basis. “It’s not obvious to me that this can ever be automated,” said Schmidt on Thursday. Already, Google has admitted to errors—and has re-linked some results from the half a million de-link requests it has fielded since May. And yet, for now, there remains a simple way to maneuver around this new European internet. Going to google.com (rather than, say, google.co.uk or google.fr) transports European internet searchers to virtual America—and thus gives them access to the entirely “remembered” internet that they once knew. On Thursday, Schmidt was asked whether European searchers should simply start using the .com site. “I am not recommending that,” he said, with a wry smile.
Thousands of Facebook users 'liked' the post, featuring a picture of Lynda with All Creatures Great and Small co-star Christopher Timothy
24/10/2014- Lynda Bellingham’s tragic death has been exploited on Facebook by far right extremists, it emerged last night. Britain First encouraged people to like and share a picture of the Loose Women star minutes after her death was announced on Monday. Thousands of Facebook users 'liked' the post, featuring a picture of Lynda with All Creatures Great and Small co-star Christopher Timothy. However, many would not have been aware that the photo was being spread by Britain First, an ultra-right campaign group. Its supporters use the Britain First Facebook page to call for British Muslims to be “wiped out” and non-whites deported. Formed from former BNP and EDL members, Britain First made headlines this year by invading mosques and threatening imams.
Men in the group’s paramilitary-style uniforms pushed their way into several mosques in England and Scotland. Founder of Britain First, Jim Dowson, later quit the group over its “unchristian” paramilitary-style “mosque invasions”, saying they were “provocative and counterproductive”. He added that they were attracting “racists and extremists” to the organisation, which has taken over from the British National Party and the English Defence League as the biggest far-right threat in the UK. Mr Dowson, from Belfast, left the BNP in 2010 to form a “Christian” group opposing the rise of radical Islam. But he told the Mirror he had pulled the plug on the group’s funding, closed their office in Belfast and severed all links.
He described the mosque invasions as “unacceptable and unchristian”, adding: “Most of the Muslims in this country are fine. They are worried about extremists the same as us. So going into their mosques and stirring them up and provoking them is political madness and a bit rude.” Matthew Collins, of anti-racist group Hope not Hate, said: “It is the most dangerous group to have emerged on the far right for several years.” But a Britain First spokesman told The Sun: “We do this regularly when British celebrities pass away. We pay our respects.” Brave Lynda lost her battle with cancer at the weekend after the disease spread from her colon to other parts of her body. She died in her husband Michael's arms on Sunday, aged 66.
© The Daily Mirror
22/10/2014- A 21-year-old British man was sentenced to four weeks in jail for sending an anti-Semitic tweet to a Jewish member of Parliament. Garron Helm pleaded guilty Monday to sending the offending message to Labour Party member Luciana Berger. In addition to the jail sentence, Helm was ordered to pay Berger $128. The tweet, which called Berger a “communist Jewess,” showed a photograph of her with a Holocaust yellow star photoshopped onto her forehead and the words, “You can always count on a Jew to show their true colours eventually.” It had the hashtag “Hitler was right.” Helm’s home contained Nazi memorabilia and a flag for an extremist right-wing group called National Action. “This sentence sends a clear message that hate crime is not tolerated in our country,” Berger said in a statement. “I hope this case serves as an encouragement to others to report hate crime whenever it rears its ugly head.”
© JTA News.
Representatives From Google, Twitter and Others to Meet with Cameron Advisers on Thursday
22/10/2014- The U.K. government is intensifying efforts to enlist the help of large technology companies such as Twitter Inc. and Facebook Inc. in combating extremist content online amid growing concerns about terrorist threats. Representatives from the companies, which also include Google Inc. and Microsoft Corp., are due to meet with policy advisers for British Prime Minister David Cameron on Thursday to discuss how they can reduce ways for terrorists to recruit and spread their messages online, according to government officials. While the large technology companies have been generally cooperative, British officials say, thorny issues remain. Among them: what to do about material authorities consider extremist and want removed but that isn’t necessarily illegal, such as some videos of sermons by radical preachers or posts by extremists encouraging Westerners to join the fight in Syria. Privacy considerations are another challenge in instances where U.K. authorities have asked technology companies to hand over details of the people posting the content, such as names, usernames, email addresses and Internet protocol addresses, which can help identify a person’s general location.
Thursday’s meeting, which is due to take place at Mr. Cameron’s official residence at Downing Street, will be chaired by Jo Johnson, head of the prime minister’s policy unit. A spokeswoman for Mr. Cameron said the purpose of the meeting is to discuss “what we can do collectively in this area.” She added that the big technology companies have been collaborative in working with the government to remove terrorist and extremist material, though they have raised some concerns in general about data protection. Facebook, Google, Twitter and Microsoft declined to comment on the Downing Street meeting. Big technology companies say they are generally responsive to government requests in removing terrorist-related content and many have policies against posting violent or threatening content.
Technology companies have tried to push back against some of the requests to remove content or turn over user data, though. Internally, some technology executives say they are worried that censorship techniques more common in countries such as Russia and Turkey could become more generalized as governments grant themselves more power. Some companies are skeptical about handing information about their users to governments, particularly if the user hasn’t done anything illegal, said Michael Clarke, director at the Royal United Services Institute, an independent think tank on defense and security. “It’s a very delicate relationship at the moment,” he said. Still, the companies say they seek to work with government. In the U.K., for instance, Google handed over information in response to 1,100 of the more than 1,500 requests it received from the government, according to data released by the company. French police officials say many companies have dedicated pages that allow law enforcement organizations to send requests directly to the firms for information such as names, email addresses, credit card billing information and other information.
Sophisticated and prolific use of social media for propaganda purposes has been a hallmark of Islamic State, the militant group that has captured large stretches of territory in northern Iraq and Syria. Extremists have posted content ranging from images of killings to promotional-type videos intended to lure young Westerners to fight. The concern for many European countries, including the U.K., France and Belgium, is that the material will serve to fuel the already large numbers of citizens going to fight with extremist groups overseas—and that they will be more likely to take part in terrorist activity when they return. U.K. authorities say, on average, five people a week travel from Britain to Syria and Iraq to fight and there has been a sharp increase in the terror-related arrests at home. On Wednesday, police arrested a man and a woman separately on suspicion of terrorist activity as part of separate Syria-related investigations.
As a result, the U.K. and other governments are stepping up efforts to delete content and track down the authors of extremist content online. London’s Metropolitan Police, known as Scotland Yard, says it has been removing around 1,000 pieces of such content from the Internet each week, most of which is related to Iraq and Syria. This includes videos of beheadings and other killings, torture and suicides. “Dealing with material which may be described as extremist, but does not obviously infringe (upon) U.K. terrorist legislation, is more difficult,” a senior U.K. government security official said. “We have proposed to companies that they consider seriously whether this material is consistent with their terms and conditions.”
European Union officials met in Luxembourg earlier this month with representatives of Google, Facebook, Twitter and other companies to discuss ways to combat online propaganda from terrorist groups. France has recently beefed up its antiterrorism laws to allow, among other actions, authorities to cut off Internet access for people defending terrorism and websites labeled “terrorist.” The measures also permit wider terrorist surveillance online. But some specialists in counterterrorism question the effectiveness of governments’ increasing reliance on censorship and filtering to counter online extremism. Ghaffar Hussain, managing director of London think tank Quilliam Foundation, said such moves tend to be costly and potentially counterproductive. He said a more effective method is producing content for online initiatives that counter extremist ideas, such as parody videos making fun of recruits to the militant group Islamic State. “To simply shut the debate down doesn’t allow any progress to be made on the counter-extremist front,” Mr. Hussain said.
© The Wall Street Journal.
Violent hooligans, backed by right-wing extremists, have teamed up against a new enemy: Salafists. For months now, they have lashed out online - and now they're taking to the streets.
18/10/2014- It began on Facebook, where anti-Islam soccer fans have been venting their anger in online forums for months now. But lately, in German cities like Essen, Nuremberg, Mannheim, Frankfurt and Dortmund, hostile and extremely violent hooligans, usually at odds with each other, have united against a new enemy: Salafists - a radical and militant branch of Islam. Their initiative, currently known as Ho.Ge.Sa. - "Hooligans gegen Salafisten" ("Hooligans against Salafists") - has seen its profile repeatedly blocked by Facebook, but it always reappears under another name. It's here that the group is stoking the flames against the hard-line Salafist movement. Next stop: a demonstration planned for October 26 in front of the Cologne Cathedral.
The current mood and the protests organized by Kurds across Europe are giving hooligans and right-wing sympathizers the chance to "apparently demonstrate against the Salafists, but really only to express their own Islamophobia," Olaf Sundermeyer, a journalist and author, told DW. "We are 'hooligans against Salafixxxx.' Together, we are strong," reads the group's Facebook page. They see themselves as "a movement that has brought together hooligans, ultras, soccer fans and ordinary citizens in a common fight against the worldwide 'Islamic State' terror campaign and the nationwide Salafist movement." In Facebook posts and on banners at their demonstrations, they call their group the "resistance" against "the true enemies of our shared homeland." The latest protest in Dortmund drew around 400 people. "On 26.10.2014 in Cologne, we will significantly increase this number of participants," a moderator recently announced on the site. "Peaceful, unmasked and without rioting."
'Salafists are the greater evil'
These slogans have actually served to bring together opposing hostile fan bases, who usually meet up before and after sports events to fight each other. Gunter A. Pilz, an expert on fan behavior from the Sport University in Hanover, calls this phenomenon "a temporary fighting alliance." However, he said that this coalition will only last as long as the common enemy: the Salafists. Sundermeyer, who points out that anti-Islam attitudes are widespread in the soccer fan scene, said there's a risk that extreme right-wing groups will be tolerated because the brutality of "Islamic State" militants in Syria and Iraq is proof to many that Salafists are the greater evil. In an interview with German public radio Deutschlandfunk, Sundermeyer said that "Hooligans against Salafists" is still a relatively small group, but stressed that it could attract more followers - even those with less radical viewpoints. Soccer, he said, is the ideal environment to radicalize and recruit young people to the extreme right-wing cause. Officially, though, the league has distanced itself from the right-wing extremist movement.
Mobilizing apolitical hooligans and soccer fans
But there's an obvious overlap with the neo-Nazi scene: Ho.Ge.Sa. is backed by Dominik Roeseler, a member of the right-wing Pro NRW party who sits on the Mönchengladbach city council. He plans to be at the demonstration in Cologne. Roeseler is considered to be quite extreme and is, like all right-wing party members, under observation by German security officials. And there are further connections: At the protest in Dortmund, many shirts, jackets and banners were adorned with neo-Nazi symbols. The next day, a post on the Facebook group backtracked, saying that "unfortunately, we have found out that many neo-Nazis came to this event. We want to once again make it clear that we are not political."
There doesn't even seem to be a consensus over Dominik Roeseler among the Ho.Ge.Sa. members. A few days ago, they announced that they had parted ways with him. But one thing is certain: the Cologne demonstration is being organized by right-wing political officials. Is Ho.Ge.Sa., therefore, an attempt by right-wing extremists to drum up new members from within the ranks of hooligans and extremist soccer fans? At the most recent count, the number of Ho.Ge.Sa. fans had risen to more than 16,000. "We continue to grow, the media can hound us all it wants. This time, you will not be able to stop us," wrote a follower on the site. Until recently, soccer associations, clubs and other fans had been able to keep the hooligans in check, said Sundermeyer. Now, however, faced with the threat posed by the Salafists, the cause of the right-wing extremists is seeing increasing support.
© The Deutsche Welle.
17/10/2014- In Italy, in 2013, for the first time ever, cases of online racial discrimination exceeded those recorded in public life and in the workplace. More than a quarter of the cases (26.2%) refer to the mass media (compared to 16.8% in 2012), out of a total of 354 cases. These are some of the figures, already released by the Italian National Bureau against Racial Discrimination, reported in the “Third White Paper on Racism in Italy” by Lunaria. The work, published nearly three years after the second white paper, has monitored, analyzed, studied and summarized the multiple forms of xenophobia in this country.
Download Lunaria, Cronache di ordinario razzismo - Terzo Libro bianco sul razzismo in Italia, 2014 (PDF)
© West Info
B’nai B’rith says Etsy, Ebay, Amazon, Sears and Yahoo! guilty of allowing users to sell offensive items
17/10/2014- International Jewish organization B’nai B’rith on Wednesday demanded that several online retail outlets enforce policies against users selling “hateful paraphernalia,” The Times of Israel reported Thursday. According to B’nai B’rith, web retailer Etsy had “456 swastika-themed items...available for sale, as were 479 Hitler-themed items, 13 Ku Klux Klan-themed items, and one racist, Jewish caricature candlestick listed specifically under the topic ‘anti-Semitic.’” B’nai B’rith said Ebay, Amazon, Sears Marketplace and Yahoo! were also guilty of allowing users to sell offensive items on their sites. Sears then removed a swastika ring from the roster of items offered for sale, the Jewish Telegraphic Agency reported. The item description quoted in the report read "this Gothic jewelry item in particular features a Swastika ring that’s made of .925 Thai silver.” It then featured the following curious disclaimer: “Not for Neo Nazi or any Nazi implication. These jewelry items are going to make you look beautiful at your next dinner date.”
According to JTA, the item also was for sale on Amazon.com, though it is listed currently as unavailable. Sears issued an apology in a statement and on Twitter:
“Like many who have connected with our company, we are outraged that more than one of our independent third-party sellers posted offensive items on Sears Market-place,” the company said in a statement. “We sincerely apologize that these items were posted to our site and want you to know that the ring was not posted by Sears, but by independent third-party vendors.”
© i24 News
These days social media allows strangers and their opinions into our homes at all times of the day or night – but only if we allow it to
By Jade Wright
17/10/2014- It’s not every morning that I’m described as a fascist and ‘a silly young hack who resorts to insults at the first provocation’. Not before I’ve finished my toast, anyway. Admittedly I am quite strict about separating my vegetarian fry up from my boyfriend’s carnivorous version, but most mornings are fairly peaceful in our house – until either of us picks up our phones and looks at Twitter. This week I spotted a message from a bloke (at least I think it’s a bloke, but there was no picture), which read: “Just read your June article in the Echo about Britain First. You are the reason people re-post their stuff. Wake up!” That one story, which I wrote in response to people sharing Britain First’s D-Day posts on Facebook, is still the best-read column I’ve ever written. I don’t know why, but it still gets re-posted and read every week, and I still get plenty of abuse from far-right supporters about it, as well as some nice comments too.
This bloke had clearly taken exception to me pointing out that Britain First are a right-wing political party and street defence organisation who encourage people to share their posts to spread their message. He didn't like me warning people against re-posting things without checking what they are. He said: “The issue is that people like YOU are wilfully ignoring why people like me turn to the far right. Only they give us a voice... We agree with your multicultural hogwash or you dismiss us as fascists. YOU are the fascist.” I laughed so hard I almost spat my tea out. Boyfriend looked crossly across the table, briefly distracted from his plate full of sausages and bacon. We try not to spend our rare time at home together arguing with strangers on Twitter. We have a no-phones-at-mealtimes rule.
But this was too funny for me not to respond. The man, who said he was part of the far right, was using fascism as an insult. That’s like me accusing someone of being a ‘lefty’ as a bad thing. He didn’t seem to realise that fascism is a form of authoritarian nationalism – the very thing he claims to support. Presumably he thought it was just a catch-all insult for anyone whose opinions he disagreed with. My response was probably a bit mean, looking back. I made fun of his insult and his poor use of grammar. I told him to come back and debate when he’d read his history books. This prompted the “just a silly young hack who resorts to insults at the first provocation” tweet.
He’s not that far wrong – I am silly and I quite liked being described as young – but then I came to my senses, put down my phone and picked up my knife and fork. Time was when I had to leave the house to be insulted by a stranger (rather than insulted by someone I know, which happens all the time). These days social media allows strangers and their opinions into our homes at all times of the day or night – but only if we allow it to. I’m putting down my phone.
© The Liverpool Echo
In late September and early October every year, hundreds of thousands of new and returning students journey from their family homes to university campuses across Britain. This includes over 8,500 Jewish students who, in addition to the usual pressures associated with resuming university life, are having to consider what this summer’s record spike in anti-Semitic incidents will mean for them in the coming year.
13/10/2014- At JW3 in north-west London, The Times of Israel spoke with Ella Rose, president of the Union of Jewish Students (UJS), the peer-led body which represents British Jewish students and is a confederation of 64 Jewish societies (JSocs) from across Britain’s universities. As president, Rose is responsible for representing the interests of students to the wider community as well as setting the strategic goals and objectives for the UJS during her one-year term. During our interview, we discussed how the UJS has prepared for the new university year, as well as issues of anti-Semitism on campus and the place of Israel advocacy in the union’s work.
Tell us what you and the UJS have been doing in the past two weeks and what you’ve found.
This is my first day in the office in about two weeks! Last night, I slept on the floor of a freshers’ dorm in Bristol – which was awful. We’ve been doing our campus visits, about forty visits in two weeks between the eight members of our program staff, going to all the different freshers’ fayres, freshers’ events. For example, yesterday I went to visit Bath JSoc for their freshers’ fayre and then over to Bristol for their freshers’ barbecue. We’ve been building relationships on campus, making sure they’re comfortable going onto campus, that they can sign people up and just being a friendly face and a helping hand. Jewish students are getting on with their lives. At Bristol last night, there were around 150 people at the barbecue, which is fantastic. There was very little Jewish life there four or five years ago. Now, they’re one of the biggest JSocs in the country and that’s because people created a welcoming Jewish life, other people hear about it and they come along. I think there are about sixty kids from JFS [a Jewish secondary school in north London] at Bristol now. It’s brilliant.
Given the summer we’ve had in terms of heightened anti-Semitism in the UK, what has the UJS been doing in preparation for the start of the new university year?
We were worried. There’s rising anti-Semitism and campus is a microcosm, so what you see in our communities is often reflected on campus. But we’ve had a really strong and positive start to term. As far as I am aware, we’ve not had any incidents where people have felt uncomfortable because they’re Jewish. We had a leadership and political training summit at the beginning of September and we talked about these issues and we said, ‘This might be an issue, this is what you should do, this is what you should think about preparing for campus.’
We started a campaign called #keepitkosher, with the tagline ‘Snap It, Send It, Stop It’, and it’s about stopping online anti-Semitism because that’s where some of the students would feel it more strongly, and we work with the CST [on that]. It’s about creating a safe space for Jewish students and students feeling that there is someone there to support them. But I went to Nottingham, I never experienced anti-Semitism when I was there and I’m pretty sure everyone knew I was Jewish because it’s not something I keep quiet about. I believe it’s an incredible time to be a Jewish student and I don’t believe that will change this year.
What are your plans for the coming year concerning Israel advocacy and creating safe spaces on campus to discuss Israel?
On Israel, we are unified but not uniform. We are a union of Jewish students but that doesn’t mean we expect anyone to have the same uniform opinion within that. We do not mandate what individual JSocs do: some choose to be involved in Israel debate, some don’t. It’s important to recognize that while the majority of Jewish students do have a connection with Israel as part of their identity, all identities are multi-faceted and none of them are the same.
Having said that, we do have mandated policies that are voted on every year at the UJS conference. We proudly support the two-state solution, we proudly stand against anti-Semitism, we also stand against BDS. At Sussex last year, for example, which is seen to be a very left-wing university, a BDS resolution failed because Jewish students took their own initiative and said that an academic boycott would be unacceptable. This policy isn’t imposed on JSocs but, as a union, we are opposed to BDS and will combat the delegitimization of Israel and work with our communal partners to do so.
What did you think of the discussion earlier this year about whether the UJS’ mandated policies on Israel exclude anti-Zionist students from JSocs?
It was a really interesting discussion and it stemmed from a debate we had at the UJS conference about how we do Israel. UJS is an inclusive space: we are cross-communal, peer-led, and representative, and I would hate not to be able to include anyone because of their beliefs. It’s difficult because you have some students who are anti-Zionist and some for whom Zionism is part of their Jewish identity and if they didn’t get Zionism at a Jewish society, they would feel like they were missing something. It’s two Jews, three opinions – it’s impossible. I’m not convinced it’s something you can ever completely solve. I would want an open and inclusive space and it’s up to students within that to have the conversation.
When did you become involved with UJS?
I started university in 2011 and I decided that I was going to sign up to women’s football and JSoc. I did play women’s football but I was part of a team that lost 24-0, which is approximately a goal conceded every three minutes, which is quite impressive and very tragic. That was around the time I gave up – obviously I wasn’t a very good striker. So, I got really involved in JSoc when I gave up football. I ran for the campaigns committee and was involved in their Israel work and then got involved in the UJS because of this.
Two years ago, Alex Green [a former president of the UJS] put the idea of running for president into my head. I was on the UJS National Council, went on a trip with the EUJS to the UN Human Rights Council in Geneva, and I loved it. It was different, interesting, fun and all about peer leadership which is a value I grew up with in BBYO and I loved that idea that you could empower someone to do things themselves rather than just doing it for them.
What are your ambitions for your term as president?
I ran on a platform of accountability and representation and strengthening the functions we already have. One thing that’s already gone live is that we have a feedback form on our website because, as a first year [student], it’s really intimidating to call someone who works at the UJS. It shouldn’t be for them to feel like they have to make that move, we should be accessible to them, and through the feedback form people can have an instantaneous connection to the union. Another priority is improving student services, including our liberation networks [a women’s, LGBT+, and disabled students’ network] which I feel can really grow over the next year. They’re relatively new, started in 2011, and I still think they have a way to go before they can enact change on the ground. Even if it’s just a social tool, these networks are really important as a space to help people come together, although I think they can be much more than that.
What major campaigns or initiatives is the UJS running at the moment?
Jewish Experience Week, which actually started last year, and it was unbelievable. You had thirty different campuses, and around 300 Jewish student leaders reaching around 3,000 non-Jewish students, talking to them about what it means to be Jewish. We had Jewish students telling people, ‘Did you know that there are Ethiopian Jews, Sephardic, Ashkenazi, Irish and Indian Jews?’ That we’re not a uniform body. I’m so excited to see that re-run this year and I think it’s going to be even stronger in its second year. UJS hadn’t done something that big campaigns-wise in around seven years and I’m so proud of Maggie Sussia [UJS campaigns director] for pulling that off.
© Times of Israel
14/10/2014- Dutch internet companies are coming under pressure from the government to censor comments and place limits on freedom of speech, the Financieele Dagblad said on Tuesday, quoting industry campaigners. Justice ministry officials are asking providers and hosting companies to remove websites from the internet without any legal basis, industry representatives told the FD. ‘They are making us responsible for deciding if something is against the law,’ said Michiel Steltman, director of the Dutch Hosting Provider Association (DHPA). ‘We rent web space and platforms. But because the justice ministry can’t trace the tenant, they dump the problem on us.’
In particular, providers are critical of the government’s plan to tackle radicalisation and jihadism, which involves curbing the spread of ideas supporting violence. The paper did not give any examples of sites which have been shut down or videos which officials have requested be removed. But Steltman quoted the recent example of a video of a group of men sitting around a campfire, firing guns and shouting 'allahu akhbar'. 'Have they just killed someone, are they angry that someone has been killed or have they killed a goat for a party?' he said. Alex de Joode, company lawyer with the Netherlands' biggest hosting provider, is also critical. 'We are not about checking ages and censorship,' he said. 'The government has the right legal instruments to remove content but chooses not to use them when it comes to claims of jihadism.' Dutch counter-terrorism chief Dick Schoof said in a reaction that he understands the providers' position but that ‘I believe they should assist efforts to counteract jihadist radicalisation within the legal limits’.
© The Dutch News
Justice ministers are struggling to balance the right to freedom of expression and the right to be forgotten in the EU’s data protection reform bill.
10/10/2014- The political debate on Friday (10 October) in Luxembourg surfaced following a ‘right to be forgotten’ ruling in May against Google by the European Court of Justice (ECJ). In the ruling, the Court concluded it was reasonable to ask Google to amend searches based on a person’s name if the data is irrelevant, out of date, inaccurate, or an invasion of privacy. Google has so far received 143,000 requests, related to 491,000 links, to remove names from search results. The ECJ decision only affects search requests based on a person’s name. The content at source remains untouched. But critics like Wikipedia founder Jimmy Wales described the decision as "one of the most wide-sweeping internet censorship rulings that I've ever seen".
Others say it produced clarity on issues of jurisdiction but did not go far enough in explaining how Google – or other data controllers – should handle people’s requests to have their names swiped from search engine results in the first place. "We can't leave it up to those who run search engines to take a final decision on the balance between these different fundamental rights," said Austria's justice minister. Ireland, where Google has its European headquarters, also doesn't like the idea. For the European Commission, the ECJ ruling does not pose a problem with the right to be forgotten in the draft bill. It notes the right is already included in the proposed regulation along with an exception on the freedom of expression.
EU justice commissioner Martine Reicherts also noted the EU’s main privacy regulatory body, the “Article 29” Working Group, is coming up with operational guidelines for big companies like Google on how best to put the court’s decision into practice. “This will strengthen legal certainty, both for search engines and individuals, and will guarantee coherence,” she said.
Not everyone is convinced.
The justice ministers differed on to what extent the ruling will affect the EU data protection regulation currently under discussion at member-state level. The heavily lobbied bill, which was tabled in early 2012, is set for adoption next year but has run into problems among national governments. The Italian EU presidency is hoping to reach an agreement by the end of the year in order to start formal talks with the European Parliament. On Friday, the ministers managed to come to a general agreement on parts of the text in terms of international data transfers and exempting businesses with fewer than 250 employees. But member states like the UK still want to downgrade the bill into a directive, a weaker legal instrument compared to a regulation. "We need to be careful about creating rights that are not deliverable in practice as well as wider regulatory burdens," the British minister said.
As for the ECJ ruling, the question remains if additional rules or clarifications based on the court’s judgment should be inserted into the bill. Germany, Luxembourg, Poland, Portugal, the UK and others oppose referencing the court’s judgment in the bill. “To us this could be a dangerous precedent for the future and could perhaps negatively affect the freedom of speech,” said Poland’s justice minister. Instead, Germany wants more text in the bill to guarantee the freedom of expression by lifting the article from the Charter of Fundamental Rights and inserting “it into our data protection regulation.” Lithuania backs this idea. France also expressed reservations, noting that the right to be forgotten cannot be an absolute right. “How can we respect our citizens’ right to be forgotten without standing in the way of the freedom of expression and the freedom of the press at the same time?” said the French minister. Spain, which brought the case against Google in May, backs the ECJ ruling and says it is "no way incompatible" with the right to the freedom of expression and information.
© The EUobserver
8/10/2014- The website of the Czech Helsinki Committee (ČHV) has been targeted for attack by "nationalist" hackers from the White Media group. The hackers publicly announced on their own website that they attacked the human rights organization as part of their annual "Week against Anti-Racism and Xenophilia", which began on 28 September. In addition to the ČHV's website, its Facebook profile was attacked, as was the personal Facebook profile of director Lucie Rybová and her personal email account. The Brno branch of Amnesty International in the Czech Republic was hacked as well.
"It's alarming how defenseless you are in such a situation," Rybová told news server Romea.cz. While negotiations with Facebook regarding the blocking of the profiles and the creation of new passwords took place fairly quickly, negotiations with the operator of her email account and the operator of the ČHV website have been remarkably problematic, according to Rybová. "The operator of the Czech Helsinki Committee's website, the Forpsi server, says it has never encountered such a situation. We reported the hacking to them and asked them to post a text on the site explaining why the pages are not available, but all that shows up there is the message 'inoperative', thanks to which we seem unreliable. It looks like we haven't paid for the domain, and it is also harming us in other areas, including our clients - they can't access our contact information so they can't call the counseling center," Rybová said.
In addition to the organization not being able to fully focus on some of its activities because of its non-functioning website, the ČHV also cannot now report online about those activities, which is usually an obligation with respect to its projects. Addressing the situation with the stolen email account is even more complicated. The hackers stole Rybová's password to her personal email account on Seznam and changed it. "I have to prove the email is actually mine, using the same online form as when you forget your password. I have done it three or four times and nothing happens. When I call the hotline they refer me back to the online form and are unable to connect me with anyone who can handle my situation or even temporarily block the account," she explained to Romea.cz.
While Seznam has taken a passive approach to the situation for several days already, the neo-Nazis have continued to enjoy unfettered access to Rybová's personal email account. The ČHV is considering filing a criminal report against the hackers. Even that, however, will not be easy: while the racist and xenophobic content of the White Media website violates Czech law, its domain is registered with a web hosting company in California and is subject to the laws there. Those laws are far more permissive when it comes to freedom of speech, including the dissemination of hate, than the laws of the Czech Republic.
A "private" dinner between tech firms and government officials from across the EU is to take place on Wednesday.
7/10/2014- The purpose of the meeting is to discuss ways to tackle online extremism, including better cooperation between the EU and key sites. Twitter, Google, Microsoft and Facebook will all be attending in Luxembourg. Governments are becoming increasingly concerned over how social media is being used as a recruitment tool by radical Islamist groups. The EU may share further details about the meeting later on Wednesday, ahead of the dinner. It will be attended by ministers from the 28 EU member states, members of the European Commission and representatives from the technology companies. The European Commission said: "There is strong interest from the European Union and the ministers of interior to enhance the dialogue with major companies from the internet industry on issues of mutual concern related to online radicalisation."
In particular, it said the meeting would focus on:
- "the challenges posed by terrorists' use of the internet and possible responses: tools and techniques to respond to terrorist online activities, with particular regard to the development of specific counter-narrative initiatives"
- "internet-related security challenges in the context of wider relations with major companies from the internet industry, taking account of due process requirements and fundamental rights"
- "ways of building trust and more transparency"
The BBC understands this is the second time since July that the firms have been called in to discuss possible measures. However, a notable absentee from the meeting will be Ask.fm, a social network believed to have been extensively used as a recruitment tool by radical Islamist groups. The firm was owned by Latvian brothers Ilja and Mark Terebin, but in August was bought by the American company behind Ask.com. The site's new owners told the BBC: "Ask.fm has not been invited. If we had known about it, we would have attended for sure."
Representing the UK government at the meeting will be security minister James Brokenshire. "We do not tolerate the existence of online terrorist and extremist propaganda, which directly influences people who are vulnerable to radicalisation," he told the BBC. "We already work with the internet industry to remove terrorist material hosted in the UK or overseas and continue to work with civil society groups to help them challenge those who promote extremist ideologies online. We have also made it easier for the public to report terrorist and extremist content via the gov.uk website." The government's Counter Terrorism Internet Referral Unit (CTIRU), set up in 2010, has removed more than 49,000 pieces of content that "encourages or glorifies acts of terrorism", 30,000 of which have been removed since December 2013.
Details on the EU dinner are sparse.
But there is increasing concern over the role social media plays in disseminating extremist propaganda, as well as being used as a direct recruitment tool. However, there is also a significant worry that placing strict controls on social networks could actually hinder counter-terrorism efforts. "The further underground they go, the harder it is to glean information and intelligence," said Jim Gamble, a security consultant and former head of the Child Exploitation and Online Protection Centre (Ceop). "Often it is the low-level intelligence that you collect, and can then aggregate, which gives you an analysis of what's happening." Mr Gamble was formerly head of counter-terrorism in Northern Ireland. There were, he said, parallels to be drawn. "There's always a risk of becoming too radical and too fundamentalist in your approach when you're trying to suppress the views of others that you disagree with. In Northern Ireland, huge mistakes were made when the government tried to starve a political party of the oxygen of publicity. I would say that that radically backfired."
Current estimates put the number of British citizens recruited to fight for radical Islamist groups in Syria and Iraq at more than 500. Mr Gamble said the recruitment process focused on singling out those who looked most susceptible. "They identify the isolated, the lonely, those people who have perhaps low self-esteem, and are looking for something, or someone." Ask.fm's site hosted several discussions regarding the practicalities of getting to Syria or Iraq. Many of these discussions remained online for a considerable amount of time - some for several weeks.
However, in an interview with the BBC, Ask.fm said it had had few requests from governments to take such material down. "In the past 18 months we've only received about a dozen requests from law enforcement," it said. "Sometimes these issues are really hard to discover when you've not got the full concept of what's going on outside the social network that you run. We really do want to forge partnerships with law enforcement to be able to take meaningful action on this." In a statement, a spokeswoman for the Met Police said 1,100 pieces of content that breach the Terrorism Act are removed each week from various online platforms - approximately 800 of these are Syria/Iraq related.
Update 09/10/14: EU commissioner Cecilia Malmström and Italian interior minister Angelino Alfano - who both hosted the dinner - have issued a statement. It reads: "The participants discussed various possible ways of addressing the challenge. It was agreed to organise joint training and awareness-raising workshops for the representatives of the law enforcement authorities, internet industry and civil society."
© BBC News
Let's talk about nude photo leaks and other forms of online harassment as what they are: civil rights violations
By Danielle Citron
7/10/2014- Over the past few weeks, a prominent—and nearly all-female—group of celebrities have had their personal accounts hacked and their private nude photos stolen and exposed for the world to see. Friday brought the fourth round of the aggressive, invasive, and criminal release of leaked photos. Whether the target is a famous person or just your average civilian, these anonymous cyber mobs and individual harassers interfere with individuals’ crucial life opportunities, including the ability to express oneself, work, attend school, and establish professional reputations. Such abuse should be understood for what it is: a civil rights violation. Our civil rights laws and tradition protect an individual’s right to pursue life’s crucial endeavors free from unjust discrimination. Those endeavors include the ability to make a living, to obtain an education, to engage in civic activities, and to express oneself, without the fear of bias-motivated threats, harassment, privacy invasions, and intimidation. Consider what media critic Anita Sarkeesian has been grappling with for the past two years. After Sarkeesian announced that she was raising money on Kickstarter to fund a documentary about sexism in video games, a cyber mob descended.
Anonymous emails and tweets threatened rape.
In the past two weeks, Sarkeesian has received tweets and emails with graphic threats to her and her family. The tweets included her home address and her family’s home address. The cyber mob made clear that speaking out against inequality is fraught with personal risk and professional sabotage. Her attackers’ goal is to intimidate and silence her. Revenge porn victims face a variant on this theme. Their nude photos appear on porn sites next to their contact information and an alleged interest in rape. Posts falsely claim that they sleep with their students and are available for sex for money. Their employers are e-mailed their nude photos, all in an effort to ensure that they lose their jobs and cannot get new ones.
Understanding these attacks as civil rights violations is an important first step. My book Hate Crimes in Cyberspace explores how existing criminal, tort, and civil rights law can help combat some of the abuse and how important reforms are needed to catch the law up with new modes of bigoted harassment. But law is a blunt instrument and can only do so much. Moral suasion, education, and voluntary efforts are essential too. Getting us to see online abuse as the new frontier for civil rights activism will help point society in the right direction.
Danielle Citron is the Lois K. Macht Research Professor & Professor of Law at the University of Maryland Francis King Carey School of Law. She is an Affiliate Scholar at the Stanford Center on Internet and Society and an Affiliate Fellow at the Yale Information Society Project. Her book, Hate Crimes in Cyberspace, was recently published by Harvard University Press.
In the past twelve months racist attacks in Northern Ireland have increased by 50%.
6/10/2014- In the early hours of Sunday morning yet another home was attacked in South Belfast – an attack that the PSNI described as a ‘hate crime’. A bottle was thrown and smashed the living room window of a house owned by a Bangladeshi family on Ulsterville Avenue and a car owned by a Kuwaiti family was set alight. The attacks have been widely condemned by politicians from across the political spectrum. Where do the attitudes that provoke these hate crimes originate and why are racist attitudes seemingly on the increase? A few hours before the latest attack a Facebook user in South Belfast posted this video. The video has been viewed more than 10,000 times and numerous comments have been posted in support of the man responsible.
The Facebook user subsequently attempted to defend his actions, seemingly oblivious to the fact that – regardless of the circumstances – verbally and racially abusing a fellow human being in broad daylight would be regarded by most as unacceptable. The true nature of his motivations is perhaps best summed up by one of his own comments on the original video thread. On Saturday 4th October Shankill Leisure Centre permitted the use of a hall to celebrate Eid al-Adha, one of the most important festivals in the Islamic calendar. The loyalist Facebook page Protestant Unionist Loyalist News TV picked up on the news with predictable results: a torrent of racist commentary followed the original post – all unchallenged by the administrators of the page.
An even more sinister Facebook page has seen significant growth in recent days. The subtly named N.I. Resistance Against Islam so far has 735 followers, and users have posted a selection of choice comments. Stung by criticism, the page administrators have banned anyone who dares to challenge their racist mindset and have set up a closed group where no doubt the select few who share their warped views can interact in private (the administrators of the page are visible on some browsers). In all cases the posts and pages responsible have been reported to Facebook, and complainants have received the stock response that such activity does not contravene "community standards."
“Facebook does not permit hate speech, but distinguishes between serious and humorous speech. While we encourage you to challenge ideas, institutions, events, and practices, we do not permit individuals or groups to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition”. However Facebook adds the caveat that “because of the diversity of our community, it’s possible that something could be disagreeable or disturbing to you without meeting the criteria for being removed or blocked”. So in effect Facebook and not civil society is the final arbiter of what is or is not ‘hate speech’.
In a society that is already riddled with sectarianism, and where there is clear evidence that Facebook has been used to stir up sectarian tension in the past, is it not incumbent on the organisation to act swiftly and remove posts that would be viewed as 'hate speech' in everyday society? There are those who would argue that such action would be a form of censorship and an attack on free speech, but surely social media giants such as Facebook and Twitter have a social responsibility to prevent the spread of dangerous views that can lead to attacks such as this one in August 2014?
© Slugger O'Toole
6/10/2014- jugendschutz.net continuously analyses how right-wing extremists try to attract young internet users and takes action against endangering or harmful content. Furthermore, jugendschutz.net focuses on prevention and develops concepts to encourage young people to engage critically with right-wing extremism on the internet. This report describes the work and findings of jugendschutz.net in the field of online right-wing extremism in 2013.
6/10/2014- The U.S. Supreme Court opens a new term Monday, but so far the justices are keeping quiet about whether or when they will tackle the gay marriage question. Last week, the justices met behind closed doors to discuss pending cases, but when they released the list of new cases added to the calendar, same-sex marriage was nowhere to be seen. But that really doesn't mean very much. About 2,000 cases have piled up over the summer, each seeking review on all manner of subjects. So when the court met last week to sift through all that, there really wasn't enough time for the justices, as a group, to focus on the same-sex marriage cases. With a big issue like this, and multiple appeals before the court, the justices need to decide which cases are the "best vehicles" (as it's known in the trade) for review. Indeed, all of the vehicle talk prompted one media wag to comment last week that all of the flossy lawyers, each pointing to their own case as the best vehicle, sounded more like car salesmen than Supreme Court advocates.
With seven cases currently before the court, the justices will likely pick just one or two to hear. They might, as Justice Ruth Bader Ginsburg suggested earlier this fall, even wait for more cases. Right now, the only cases pending before the court are lower court decisions favoring the right of same-sex couples to marry. But a Sixth Circuit Court of Appeals panel, which heard arguments last August in Ohio, sounded as if it might go the other way. If it does, that would provide the kind of traditional conflict the Supreme Court looks to resolve. Truth be told, with both sides already pressing the court to act, most court observers think the justices will want to take the plunge sooner rather than later. For now, though, all is speculation.
This term will mark the 10th year that John Roberts has served as chief justice. Without a doubt, the court has grown dramatically more conservative since his appointment. But, as Brianne Gorod of the Constitutional Accountability Center observes, the question is: "What role has John Roberts played in this movement?" Is he "strategically and deliberately leading the court to the right?" she asks. "Or is it, as some have suggested, the 'Kennedy Court' or even the 'Alito Court'?" Justice Anthony Kennedy is often referred to as the "swing justice," and has written many of the court's major 5-to-4 opinions. Justice Samuel Alito is far more conservative than the justice he replaced, Sandra Day O'Connor, and has cast many votes and written major opinions that have shifted the court in a more conservative direction. The issues on the docket this term range from race and religion cases, to pregnancy discrimination, and even to threats on Facebook.
But once again the court, responding to challenges brought by conservatives, has chosen to delve into some elections issues that had been thought long settled. In a case from Arizona, the court could prevent the increasing use of citizen commissions to draw congressional district lines. Arizona, California and some other states have, in one way or another, used these commissions to take the redistricting issue out of the hands of self-interested state legislatures. But in Arizona, where the independent commission was enacted by referendum, the Republican-controlled Legislature is now challenging the practice as unconstitutional. In a case that could dramatically alter the way judicial elections are conducted, the court will decide whether states that elect judges can bar judicial candidates from personally soliciting campaign contributions. Of the 39 states with judicial elections, 30 have such bans. The test case is from Florida, where the state Supreme Court upheld that state's ban on the grounds that allowing judicial candidates to personally solicit campaign contributions would raise questions about their impartiality on the bench. Those challenging the ban say it violates their free speech rights.
Another free speech case involves the question of what constitutes a threat on Facebook. The facts are pretty hairy. Anthony Elonis was convicted of making threats against his estranged wife and an FBI agent. His posts said things like, "I'm not going to rest until your body is a mess, soaked in blood and dying from all the little cuts." Soon he moved on to suggest that he might make "a name" for himself with a school shooting. "Hell hath no fury like a crazy man in a kindergarten class. The only question is ... which one?" At that point, a female FBI agent paid him a visit, which provoked a post in which he said that he'd had to control himself not to "slit her throat, leave her bleeding from her jugular in the arms of her partner." At Elonis' trial, the judge instructed the jurors that to convict, they had to conclude that this was not merely exaggeration. His Facebook posts needed to be statements that a reasonable person would interpret as a serious expression of an intention to inflict bodily injury. Elonis contended that he was just mimicking rap songs — indeed, he often linked to songs with his post. He argued that he should not be convicted without actual proof that he intended to threaten, intimidate or harm.
The intent standard that Elonis argued for might make it much more difficult to win a conviction for making illegal threats. But whatever rule the justices come up with, observes University of Virginia law professor Leslie Kendrick, it will likely apply not just to Facebook and Twitter, but to all forms of communication — including people speaking face to face or publishing in the newspaper. In other words, says Kendrick, when crafting a rule, the justices will ask if the standard "is going to chill people who engage in speech that is borderline but ultimately protected." Protected, that is, by the First Amendment guarantee of free speech. Most court experts seem to believe that Elonis may win because of the culture of today's social media. "The context of rap music these days suggests that what Elonis put out there really isn't all that unusual for what's going on on Facebook and what's going on in the popular culture," says professor William Marshall of the University of North Carolina School of Law.
After all, the current Supreme Court may be viewed as conservative, but it has, with little or no dissent, already upheld a fair amount of "fringe speech" — whether it's crush videos, demonstrations at military funerals or the sale of violent video games to kids. Not everyone, however, agrees that the Facebook threat case is in the same category. Former Solicitor General Gregory Garre notes that Elonis' posts "ticked off all the boxes" — domestic violence, school shootings, violence against a federal officer. Garre says he "wouldn't be surprised if [Elonis' Facebook posts] struck the justices as something very problematic." A different part of the First Amendment — the free exercise of religion — is at issue in two cases involving federal statutes. One case tests whether retailer Abercrombie & Fitch illegally discriminated against a Muslim woman when she was denied a job because her headscarf conflicted with the company's dress code. The other case tests Arkansas' refusal to allow a Muslim prisoner to wear a short beard for religious purposes.
The prisoner sued under a federal law aimed at shoring up prisoners' religious rights. Interestingly, in this case, the prisoner has the backing of a wide variety of corrections officials and organizations, plus the federal government. The federal prison system and 43 states allow beards, largely because it is much easier to hide weapons and other contraband in clothes, hair and body cavities. There is a similar coalition of strange bedfellows in a pregnancy discrimination case before the court. Anti-abortion and women's rights groups have joined together to urge the court to require employers to treat pregnancy the same way other temporary disabilities are treated on the job. In this case, a UPS driver asked for light duty, carrying less than 20 pounds, during the latter part of her pregnancy. But the company refused, and she lost both her job and her insurance coverage.
The company contends that it had "no animus" toward the employee because of her pregnancy; her request for light duty just wasn't covered by either the provisions of federal disability law or the union contract. She argues that she should have been covered under the 1978 federal law barring discrimination based on pregnancy. The case is very important for businesses because pregnancy accommodations cost money. But it's very important to women too, observes Emily Martin of the National Women's Law Center. "Lots of women with some sort of work limitation arising out of pregnancy face similar issues — especially women in low-wage jobs that are often more physically demanding," she says. The first case the court hears on Monday is one that amazes former Solicitor General Paul Clement, who wants to know: "How in the world did we go 225 years and not have this issue decided?" The issue is whether police may make a traffic stop based on a mistaken understanding of the law, and then use evidence from a subsequent search to convict the car's occupants of a crime.
Other controversies to look forward to include cases that involve racial gerrymandering and Medicaid funding, and a major housing discrimination case that could make it harder to prove discrimination. The court will even be tackling a case about fish — yes, fish! It's an obstruction of justice case that, depending on your point of view, involves either the deliberate concealment of illegal fishing or a classic example of prosecutorial overreach. More to come on that later.
He calls himself Montero. But that’s all that’s known about him – that and his “vicious” anti-Semitic posts on Twitter. And his hate speech has been singled out as one reason that the country’s Jewish community is on high alert during Yom Kippur this weekend.
4/10/2014- Montero, says Mary Kluk, the Jewish Board of Deputies national chairman, “is probably one of the most vicious individuals we have ever come across”. He is untraceable. His Twitter account leaves no clue as to who he is, what he does or who he works for. On Thursday alone, he posted 50 anti-Semitic pictures, and he regularly makes reference to the board, she reveals. These messages include: “F*** the Kikes,” and “Jew parasites should all be killed and wiped off the earth.” Others profess: “Keep calm, kick a kike,” and “I like my Jews like I like my bread… toasted.” “I support Isis and all other Muslim freedom fighters who kill Jews… Every Jew they kill is one less I have to kill.” Synagogues around the country have increased their security – and are now guarded by 24-hour security teams, concrete barriers and the Joburg Metro Police, who have closed roads during worship.
Joburg metro police spokesman Wayne Minnaar says the board approached the traffic police to “assist with security and road closures during the holy month”, though he could not confirm what threats the Jewish community faced. Kluk claims that since the recent war in Gaza, “this anti-Semitic rhetoric has reached levels unseen for many decades. We are concerned about an increased security risk to our community over the high Holy days. “What is particularly alarming is his (Montero’s) ability to tweet anti-Semitic images with untold venom. He talks about personally killing Jews and supporting the work of Isis,” she said. “This is an individual who we feel demands thorough investigation as he violates the constitutional laws of this country.” Groups he subscribes to include New age Nazi, Notorious anti-Semites and Neo-Nazi Monsters. Brigadier Neville Malila, the provincial police spokesman, says they have not received any complaints by, or threats to, the Jewish community. “The deputy provincial commissioner as well as the provincial CPF (community policing forum) chairperson are in constant liaison with the Jewish board to discuss security issues.”
Moulana Ebrahim Bham, the secretary-general of the Council for Muslim Theologians, believes it is “alarming” the Jewish community perceived itself to be under an increased security threat from “jihad terrorism”, as stated by Chief Rabbi Warren Goldstein. “As the Jewish community beefs up safety and security around shuls and tips its members off on precautions, it’s only proper that any credible reports of threats be brought to the attention of the relevant national authorities. “By the choice of his words, the rabbi’s claim places the source of this threat on the doorstep of Muslims,” said Bham. “As a Muslim community, we are not aware of any such condemnable plots of potential attacks on South African soil. “It is therefore important that the rabbi should be careful with his language that is prejudicial and likely to incite violence against members of the Muslim community. “It’s our sincere hope that this development will not again lead to situations where clandestine Zionist-linked security agencies start to harass innocent civilians at public facilities, as has happened before, even when those targeted did not pose any danger to anyone.”
On Thursday, the Jewish Board met President Jacob Zuma and a high-level government delegation where it briefed him on “the sharp rise in anti-Semitic activity in South Africa, including threats and intimidation against the Jewish community and its leadership”. Zuma, said the board, “stressed that his government remained committed to combating such prejudice. He further emphasised the need for there to be harmony between people of different backgrounds and opinions” in the country. Referring to a Twitter post cited earlier by Kluk that “Hitler was right, pity he didn’t finish off all Jews”, Anneli Botha, a terrorism expert at the Institute for Security Studies, believes the Jewish community’s reaction to these social media messages is “a bit extreme”. “The reality is that there are many people with anti-Semitic views in the country, and it’s sad that’s the case, but to heighten security based on messages on social media, that might be taking it a bit far.”
© The South African Independent
On the morning of Rosh Hashanah, a petition signed by 10,306 people arrived at Facebook headquarters asking the company to change its policy on Holocaust denial. Facebook’s current position on Holocaust denial is that “the mere statement of denying the Holocaust is not a violation of our policies”. They justify this by treating the Holocaust not as a unique tragedy in human history, but as just another historical event, and they say they won’t prohibit Holocaust denial because they “recognize people’s right to be factually wrong about historic events”.
By Andre Oboler
3/10/2014- A letter from Facebook outlining their position is on the public record as part of a report on online antisemitism published by the Israeli Government last year. In recent times Facebook has moved away from the inflexible application of generic rules and has reversed its position across a whole range of issues. The new approach is much more strongly based on common sense and on meeting reasonable public expectations about community standards. The arrival of the new petition is a timely call for Facebook, and its founder Mark Zuckerberg, to reflect and reconsider their position on Holocaust denial, which remains an open wound not only for the Jewish community but for civil society more broadly. The existing policy simply cannot be sustained in light of the way Facebook in 2014 responds to similar concerns.
In May 2013, after two years of regarding content that made light of rape as “humorous”, and therefore “acceptable” on Facebook, the company relented and agreed that misogyny was not acceptable under its community standards. At the time Facebook stated that “it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate”. This is another positive example of Facebook changing its approach to meet users’ expectations. It’s a pity it took two years and a major campaign, including loss of significant advertising, to make this happen.
A few months ago Facebook quietly lifted a ban on pictures of breastfeeding women. The ban was considered a form of gender-based discrimination by some women’s groups. The ban dates back to 2008, and news of a major effort to enforce it was announced by the same spokesperson, and at the same time, as news of Facebook’s position of permitting Holocaust denial on its social media platform. Michael Arrington wrote a very powerful article about the hypocrisy of these policies, titled “Jew Haters Welcome At Facebook, As Long As They Aren’t Lactating”. It seems half the issue has been solved, and the problem we are left with is simply “Jew Haters Welcome at Facebook”. It’s time that was addressed.
In recent days Facebook has reversed course over an effort to close the profiles of members of the LGBT community on the basis they were not using their ‘real names’. As David Campos explained, “for many members of the LGBT community the ability to self-identify is a matter of health and safety. Not allowing drag performers, transgender people and other members of our community to go by their chosen names can result in violence, stalking, violations of privacy and repercussions at work.” In this case Facebook recognised the damage their approach was causing and reversed course. Holocaust denial too is dangerous: it helps rehabilitate Nazi groups and facilitates their recruitment drives.
The problem of users posting Holocaust denial on Facebook was first raised at a meeting of the Global Forum to Combat Antisemitism in February 2008, where it was one of the primary examples of “antisemitism 2.0”. Facebook’s unwillingness to tackle this problem gained major media attention from early 2009. Their position is so out of touch with global public expectations that it has led to international meetings in which Facebook has been questioned, a protest letter from Holocaust survivors organised by the Simon Wiesenthal Center, a grassroots protest outside Facebook’s offices, efforts to resolve the issue through cooperation by the Inter-Parliamentary Coalition to Combat Antisemitism, and many other initiatives from organisations, communities, individuals and companies. Facebook has grown as a company, and it has also matured, but this one issue is a holdover from social media history.
The new petition is the result of dedicated work over three years and comes from the administrators of the closed Facebook group “Ban ALL Holocaust Denial Pages and Groups from Facebook”, who also operate a Facebook page with just shy of 22,000 supporters. The decision to close the petition and send it to Facebook at this point in time was a choice, and I believe it was a good one. Facebook’s response to the LGBT issue shows they are now taking public concern more seriously and are able to check themselves and reverse course when needed. The change of policy in respect of pictures of breastfeeding mothers shows that even old, well-established positions can be changed.
As Facebook improves the way it deals with sensitive topics and community expectations, the lack of resolution on the Holocaust denial problem is a weight that grows heavier. Holocaust denial should not be a sacrificial goat, blessed by Facebook, and sent into the wilderness to placate those demanding the sort of free speech which costs others their dignity and safety. This Yom Kippur, it’s time for those at Facebook to reflect, reconsider, and yes, repent. It’s time for those Holocaust survivors who wrote to Facebook in 2011 to receive a new answer, while at least some are still alive to receive it. It’s time this issue was put to bed.
Dr Andre Oboler is CEO of the Online Hate Prevention Institute and co-chair of the Online Antisemitism Working Group of the Global Forum to Combat Antisemitism.
© The Online Hate Prevention Institute
Google grapples with the consequences of a controversial ruling on the boundary between privacy and free speech
3/10/2014- Sometimes a local spark can cause a global fire. In 1998 La Vanguardia, a Spanish daily, ran an announcement publicising the auction of a house to pay taxes owed by Mario Costeja González, a lawyer. The event would have been consigned to oblivion had the newspaper not digitised its archives a few years later. Instead, it came first in Google’s results for searches for Mr Costeja’s name, causing him all manner of professional problems. When the online giant refused to remove links to the material, Mr Costeja turned to Spain’s data-protection authority. The case ended up in the European Court of Justice (ECJ), which ruled in May that Google must remove certain links on request. The ruling has established a digital “right to be forgotten”—and forced Google to tackle one of the thorniest problems of the internet age: setting the boundary between privacy and freedom of speech.
The two rights had coexisted, occasionally uneasily, offline. But online, border skirmishes have become increasingly common. “It’s like two friends who don’t always get along, but are now being confined to one room,” says Luciano Floridi, a professor of philosophy and the ethics of information at Oxford University. Complicating matters is a transatlantic split. America allows almost no exceptions to the first amendment, which guarantees freedom of speech. Europe, not least because of its experiences of fascism and communism, champions privacy. The ECJ’s ruling was vague. Even if information is correct and was published legally, the court said, Google (or indeed any search engine) must grant requests not to show links to it if it is “inadequate, irrelevant or no longer relevant”—unless there is a “preponderant” public interest, perhaps because it is about a public figure. With no appeal possible, Google went to work. It helped that it already had a procedure for removing links to copyrighted material published without permission. Just a few weeks later it had put a form online for removal requests.
The firm’s dozens of newly hired lawyers and paralegals have their work cut out. Between June and mid-September, it received 135,000 requests referring to 470,000 links. Most came from Britain, France and Germany, Google says. It will publish more detailed statistics soon. Meanwhile numbers from Forget.me, a free website that makes filing removal requests easier, give a clue to the sort of information people want forgotten. Nearly half of the more than 17,000 cases filed via the service refer to simple personal information such as home address, income, political beliefs or that the subject has been laid off. Nearly 60% were refused. If the material is about professional conduct or created by the person now asking that links to it be deleted, removal is unlikely. Requests relating to information which is relevant, was published recently and is of public interest are also likely to fail.
Many of the decisions look quite straightforward. Google has removed links to “revenge porn”—nude pictures put online by an ex-boyfriend—and to the fact that someone was infected with HIV a decade ago. It said no to a paedophile who wanted links to articles about his conviction removed, and to doctors objecting to patient reviews. In between, though, were harder cases: reports of a violent crime committed by someone later acquitted because of mental disability; an article in a local paper about a teenager who years ago injured a passenger while driving drunk; the name on the membership list of a far-right party of someone who no longer holds such views. The first of these Google turned down; the other two it granted. The process is “still evolving” says Peter Fleischer, Google’s global privacy counsel. A Dutch court recently decided the first right-to-be-forgotten case, upholding Google’s refusal to remove a link to information about a convicted violent criminal. After more appeals have been heard by data-protection authorities and courts, the firm can adjust its decision-making. The continent’s privacy regulators are working on shared guidelines for appeals.
Another steer will come from an advisory council set up by Google itself. Its eight members include Mr Floridi; Jimmy Wales, the founder of Wikipedia; a journalist at Le Monde, a French paper; and a former director of Spain’s data-protection agency. It has already held four public meetings in as many European cities, with three more to come before it reports back to Google early next year. One question asked at the meeting in Paris on September 25th was how users should be made aware of the fact that the results of a search have been affected by the ruling. Currently, a notice at the bottom of the results page says that “some results may have been removed”, which perhaps defeats the purpose by raising a red flag. Another was how publishers should react. In Britain newspapers published articles about the fact that Google no longer linked to previous articles, again bringing to prominence information that the firm had found merited being forgotten.
More broadly, many wonder whether Google should remove links from searches everywhere, not just on its European sites. That would lead to a transatlantic row, but could also trigger a debate in America about why, for instance, American victims of revenge porn should not also be able to ask Google to stop linking to such content. Some have dismissed Google’s advisory council and its tour through Europe as a public-relations exercise. “Google is trying to set the terms of the debate,” said Isabelle Falque-Pierrotin, the head of France’s data-protection watchdog, last month. Predictably, those involved see it differently. Asked why he joined Google’s council, one of the members said: “Because it’s terribly interesting.” As the virtual world’s boundaries are redrawn, it matters who gets to hold the pen.
Clarification: Google displays "some results may have been removed" at the bottom of the results page for any search in Europe for a name (unless it is that of a public figure), not just those for names of people whose removal requests have been granted.
© The Economist
Berlin-based SoundCloud, which allows anyone to share audio files online, plays host to huge numbers of jihadi accounts and postings supporting the Islamic State (Isis). But the uploads do not contravene German law and are not being caught by the startup's moderators.
2/10/2014- A search for the word “jihad” in Arabic on the site returned page after page of matches on Monday, although it was impossible to say how many track postings there were as SoundCloud's counter only goes up to 500. Many feature amateur images from Middle Eastern conflicts, including men brandishing black Isis flags and Kalashnikov rifles, or embellished propaganda images of figures such as Osama bin Laden. There are also several accounts whose names are variations on Isis and Islamic State.
Commonly posted content includes Nasheed songs which have been used by Salafists to accompany propaganda videos. Three Nasheed “battle songs” by former Berlin rapper Denis Cuspert, who went by the name Deso Dogg before his conversion to radical Islam, were banned by the Federal Department for Media Harmful to Young Persons (BPjM) in 2012. Cuspert has since left Germany to fight for Isis in Syria and has become close to the group's leader Abu Bakr Al-Baghdadi, according to a dossier published recently by the Federal Office for the Protection of the Constitution (BfV). He is just one of almost 400 fighters believed to have left Germany for Syria since 2012.
Banned in Germany
In September 2014, the Interior Ministry took the drastic step of banning Isis in Germany. This means that the group and its symbols are illegal and any activities undertaken on behalf of the group, including publicizing or supporting it, are forbidden. Activities supporting Isis are punishable under the criminal law's section 89a, “Preparation of a serious violent act that endangers the state”. If Cuspert, for example, were to return to Germany and publish pro-Isis propaganda, he could be prosecuted under the law. But the nature of the internet causes a problem for authorities when regulating content posted to platforms like SoundCloud, which hosts its content on Amazon Web Services servers, all of which are physically located outside of Germany.
'The internet isn't German'
“The internet isn't German, and most of the sites which contain this content are not hosted in Germany,” a BfV spokeswoman told The Local. “Of course we try to have these things removed. We flag things up to them [social networks], the police can do that too if a crime has been committed.” But she added court cases were only likely to be brought under the Isis ban against individuals or companies who upload propaganda in Germany. “The point of contact is an act committed within Germany,” an Interior Ministry spokeswoman confirmed. “It's not about whether it's a German company, but where the servers are located."
© The Local - Germany
2/10/2014- The owner of a well-reviewed Bushwick coffee shop took to Instagram on Wednesday to tell the world that just about the only thing worse than a bad coffee is a greedy Jew. Why is this new, artisanal coffee shop (simply known as the Coffee Shop) mad at the People of the Bagel? Because they’re gentrifying, silly, and pushing out real Bushwick residents, like proprietors of fancy coffee shops. Hello, pot. Meet kettle.
Of course, that might not be the real reason, as his Instagram screed is barely intelligible, reading, in part:
My stubborn Bushwick-oroginal neighbor is a hoarder and a mess- true.. and he's refused selling his building for lots and lots of money. His building and treatment of it makes the hood look much less attractive and I would like him to either clean up or move along. BUT NOT be bought out by Jews however, who in this case (and many cases separate- SORRY!) function via greed and dominance. A laymen's terms version of a story would simply be- buying buildings, cutting apartments in half, calling them luxurious, and ricing them at double. Bushwick IS rising and progressing, and bettering, but us contributing or just appreciating this rise and over all positive change do not want to be lumped with greedy infiltrators.
Further clues are found on the shop’s Facebook page, where owner Michael Avila posted a video praising ultraorthodox Jews for opposing Zionism. “I love LOVE these Jews [smiley emoticon],” he wrote. “These men have the right idea.” On his personal Facebook page, he acknowledged the controversy, writing, “Sometimes I cause a little trouble just because I know I can handle it. I'm pretty good with the fine line so I go for it.” (There's expanded anti-Jewish ranting, too.) Yeah, that “fine line” post seems like hubris now. Avila explained himself to DNAinfo, saying, "I think they [Jews] took it personally even if it doesn’t to apply to them. Sometimes I feel misunderstood. I’m fine with being misunderstood. I’m quite used to it. I don’t really mind." (This post originally featured a photo of Avila with Giovanni Finotto, a man Avila identified as his mentor. Finotto has vehemently disavowed Avila's comments, saying, "Regardless of excuses he has made, claiming that he was misunderstood, his behavior is completely inexcusable.")
Disagreeing with Zionism is one thing. Expanding your views into a rant about greedy Jews in Brooklyn? Quite another. And that’s unfortunate, since by most accounts, the coffee was good. Now it just smells like decaf. And anti-Semitism.
© New York Magazine
Could Artificial Intelligence Root Out Online Hate?
2/10/2014- Last week, the Anti-Defamation League released a list of “Best Practices” to counter hate speech on the Internet. Sober and serious, it includes suggestions like “Share knowledge and help develop educational materials and programs that encourage critical thinking in both proactive and reactive online activity” and “Respond to user reports in a timely manner.” It even advises to try “comedy and satire when appropriate.” Google’s executive chairman, Eric Schmidt, hopes there might one day be a more exciting option for dealing with hate speech: artificial intelligence.
“AI systems may ultimately allow us to better prioritize and better understand how to rank and deal with evil speech,” Schmidt told JTA in a phone interview. Schmidt, who was presented last Friday with the ADL International Leadership Award, said Google’s current philosophy is for its search engine to mirror what is available on the Internet as accurately as possible. Google searches are based on an algorithm that is content neutral, so the prospect of nudging aside hate speech would mark a shift.
“It’s a very tight line to walk because we are against filtering and we are against censorship, so you have to be careful here,” Schmidt said. Even without invisible anti-hate bots, Schmidt said the Internet makes it easier to track and counter hate — and to identify hateful people, if necessary — and thus is a greater tool in defeating hate rather than spreading it. Of course, identifying hate speech via computer will be plenty difficult given how often humans disagree over what is or isn’t hateful. And given the prevalence of existing concerns about privacy and tracking, AI-enhanced search engines will probably add another layer of complexity to such debates rather than resolving them. Who knows? They may even provide some fodder for comedy and satire. When appropriate, of course.
© The Forward
Facebook has agreed to make changes to the way it works, after locking the accounts of a number of drag queens because they weren’t using their “legal names”.
1/10/2014- The social network has been under fire over the policy, after it last month began locking the accounts of users with noticeable drag names. Following protests the company agreed to temporarily reinstate some drag performers’ profiles, but previously insisted the policy itself would remain unchanged. However, at a meeting with the San Francisco drag community organised by Supervisor David Campos today, Facebook representatives said the ‘flawed’ policy had hurt people, and would be changed. Mr Campos said: “The drag queens spoke and Facebook listened! Facebook agreed that the real names policy is flawed and has unintentionally hurt members of our community. “We have their commitment that they will be making substantive changes soon and we have every reason to believe them. “Facebook apologized to the community and has committed to removing any language requiring that you use your legal name. “They’re working on technical solutions to make sure that nobody has their name changed unless they want it to be changed and to help better differentiate between fake profiles and authentic ones.”
Drag artist RuPaul had previously weighed in to the controversy, saying: ” it’s bad policy when Facebook strips the rights of creative individuals who have blossomed into something even more fabulous than the name their mama gave them.” Facebook’s Chief Product Officer, Chris Cox, updated his page with a lengthy apology which read: “I want to apologize to the affected community of drag queens, drag kings, transgender, and extensive community of our friends, neighbors, and members of the LGBT community for the hardship that we’ve put you through in dealing with your Facebook accounts over the past few weeks. “In the two weeks since the real-name policy issues surfaced, we’ve had the chance to hear from many of you in these communities and understand the policy more clearly as you experience it. We’ve also come to understand how painful this has been. We owe you a better service and a better experience using Facebook, and we’re going to fix the way this policy gets handled so everyone affected here can go back to using Facebook as you were.
“The way this happened took us off guard. An individual on Facebook decided to report several hundred of these accounts as fake. These reports were among the several hundred thousand fake name reports we process every single week, 99 percent of which are bad actors doing bad things: impersonation, bullying, trolling, domestic violence, scams, hate speech, and more — so we didn’t notice the pattern. “Our policy has never been to require everyone on Facebook to use their legal name. The spirit of our policy is that everyone on Facebook uses the authentic name they use in real life.
“We see through this event that there’s lots of room for improvement in the reporting and enforcement mechanisms, tools for understanding who’s real and who’s not, and the customer service for anyone who’s affected. These have not worked flawlessly and we need to fix that. With this input, we’re already underway building better tools for authenticating the Sister Romas of the world while not opening up Facebook to bad actors. And we’re taking measures to provide much more deliberate customer service to those accounts that get flagged so that we can manage these in a less abrupt and more thoughtful way. To everyone affected by this, thank you for working through this with us and helping us to improve the safety and authenticity of the Facebook experience for everyone.”
© Pink News
By Raihan Ismail
1/10/2014- Following the national news and social media over the last fortnight, one might be led to believe that women wearing burqas and niqabs are as significant a threat to Australia's security as the alarming number of young men who have been caught by the spell of ISIS. The burqa kerfuffle seemed to escalate when Liberal Senator Cory Bernardi woke up to the news of anti-terror operations in Sydney and saw pictures of a veiled woman outside the raided houses. He responded on Twitter by referring to the burqa as a "shroud of oppression and flag of fundamentalism". Presumably Bernardi saw different news footage from me, as the woman displayed prominently in news photographs that I saw was wearing the niqab. The niqab is a face-covering veil, worn by a very small number of Australian Muslims, which leaves open a slit for the eyes. The burqa, on the other hand, even more rarely worn, has mesh covering the eyes. Whatever Bernardi saw or meant, his comments unleashed yet another firestorm of Islamophobia on its most fertile breeding ground: the internet.
Last week, after Bernardi's comments, I was interviewed by the ABC for an explanatory article on the burqa, the niqab, and my choice of garment, the hijab, which covers only a woman's hair, neck and shoulders. Bizarrely, when posted by the ABC on Facebook, the article received more comments than the ABC's reports on the anti-terror raids themselves. The comments section is sobering reading for anyone with any doubts about the perniciousness of Islamophobia in Australia. To give one example from among the comments, a self-described "maintenance planner" for Fortescue Metals Group in Perth stated: "It's Australia you came here for whatever reasons embrace our culture" [sic], and asked why minorities should be allowed to "influence our awesome country".
Twitter is another haven for Islamophobia. The ABC tweeted the article, accompanying it with the question "Why do some women wear the burqa, niqab or hijab?" A real estate agent from Frankston, Victoria, responded "Cause they are butt ugly". This real estate agent is one of over 800 on Twitter who openly follow a self-described mother, psychology student and cat lover from Perth, who tweets almost daily with missives such as "It's time practicing Islam in Australia is outlawed and all that [sic] practice it are charged and prosecuted", and diatribes against Islam as a "cult" of violence and paedophilia.
This could all be ignored, and it would almost be amusing, if it were not for the fact that Islamophobia is increasingly affecting real people in their daily lives. Last week, a mosque in Brisbane was spray-painted with the words "Get the f--k out of our country!" A teacher and a student at a Sydney school were reportedly threatened with a knife by an uninvited guest who asked whether it was a "Muslim school". Even in Canberra, an enlightened and educated town, I have been harassed on the streets and in shopping malls, from Woden, to Belconnen to Civic. Sometimes it is no more than a snarling look from a passer-by; sometimes it is the muttering of an epithet such as "terrorist"; on two occasions it has amounted to physical intimidation.
This is the real and ultimate manifestation of Islamophobia. It is practiced by a small group of Australians, no more representative of Australia than ISIS sympathisers are of Muslims, but their actions are making Muslims – and women in particular – fear for their safety. The Islamophobic movement is not as small as we would wish. Nor is it hidden in the dark corners of the internet. Many online practitioners of Islamophobia can very easily be identified with full names, and their addresses and employers traced with a few short Google searches. Of course, the rampant Islamophobia should not obscure the presence of plausible and considered critiques of the burqa and the niqab. They are worn by a small minority of Muslim women. Most Muslims consider the garments to be the result of an unnecessarily strict interpretation of the religion's modesty requirements, grounded more in culture than in the text of the Quran or the teachings of its principal prophet, Muhammad.
Those concerned with women's rights suggest, with some force, that some women might wear the burqa or the niqab due to oppression from male relatives, especially husbands. But this is not sufficient reason to ban the wearing of the garments. Where they are worn because of oppression, any ban would simply result in the women concerned remaining house‑bound, while women who wear the garments as a genuine personal choice would find their religious freedoms curbed by the state. Laws banning the burqa or niqab in limited places, or requiring their removal for identification and security reasons, may have more merit. But it needs to be demonstrated that people wearing the garments pose a genuine security risk, and that the laws would be effective in addressing that risk. Without that justification, off-the-cuff calls by politicians to ban the garments, whether generally or in limited circumstances, do no more than inflame the internet hordes. The effect of this practice on Australian Muslims is real.
Dr Ismail is an Associate Lecturer in Middle East politics and Islamic studies at the Centre for Arab and Islamic Studies (The Middle East and Central Asia) at the Australian National University.
© The Sydney Morning Herald
A 27-year-old man has been handed a €7,200 fine and a one-year prison sentence for inciting hatred and re-engagement in National Socialist activities.
26/9/2014- Korneuburg Regional Court convicted the man after he confessed to posting countless Nazi and xenophobic comments and content online. The prosecutor noted that he had trivialised the Holocaust and had an ‘88’ tattoo on his back, which stands for HH, or Heil Hitler. The prosecutor said he would not bother reading out any of the man’s postings as “any normal person would find them disgraceful”. When questioned the 27-year-old admitted that he had extreme right-wing views and said that he had developed an aversion to immigrants, Jews, Muslims and Africans since being at school. He also admitted possessing illegal weapons purchased in the Czech Republic. The 27-year-old already had a criminal record after being involved in violent brawls. His defence argued that he had been unemployed for some time and in his frustration had become influenced by right-wing propaganda. He said that he has since had most of his tattoos removed, or altered into Hawaiian symbols, and was a “changed man”.
© The Local - Austria
25/9/2014- Facebook has been the center of controversy many times, but this may be the first time that their changing of the rules may hit them where it hurts. LGBT+ users who are shocked, saddened and offended by Facebook's new "real name" policy are flocking to a new network: Ello. If you haven't heard of Ello before this week, you're not alone. Just this morning my Facebook timeline blew up with friends offering invite codes for what I assumed was a new Gilt-like shopping site, and what turned out to be a new and friendlier social network, which would allow anyone who wanted to be a part of it be who they wanted to be, complete with the name they've chosen for themselves.
Ello's uptick in popularity comes from Facebook's new decree that everyone on the site must now use their real name. For some, like me, this isn't a problem. I use my real name for everything (because I am fairly histrionic). For others, those who are better known by their drag names, those who are concerned about being stalked and those who don't want to be found under their real name, this is a huge problem. Facebook claims that the new policy (which requires all users to register under the name which appears on their ID and not under GIRL YOULOOKINGFINE) is meant to keep the community safe, but The Daily Dot points out that it may also be a way of making performers migrate from personal profiles to fan pages in an effort to make more coin for the site's already overflowing coffers. And, according to Sister Roma, a Sister of Perpetual Indulgence who's been very vocal about the new rules, using your legal name might even be dangerous or traumatizing for some.
This issue is discriminatory against transgender and other nonconforming individuals who have often escaped a painful past. They've reinvented themselves or been born again and made whole, adopting names and identities that do not necessarily match that on their driver's license. Enter Ello, the Facebook alternative that's less icky than Google+, ad-free and willing to let you be the person you've always wanted. Well, with carefully chosen photographs and status updates, of course. According to The Daily Dot, more and more users have been flocking to the site and, after an influx of radical faeries, Ello's creator says that the site is having a huge surge in registrations from those in the LGBT+ community. Ello is refreshingly simple, according to creator Paul Budnitz. The Daily Dot reports that the social network's abuse team can quickly respond to users and that the network takes any form of harassment very seriously. "You don't have to use your 'real name' to be on Ello. We encourage people to be whoever they want to be," Budnitz said. "All we ask is that everyone abide by our rules (which are posted on the site) that include standards of behavior that apply to everyone. We have a zero tolerance policy for hate, stalking, trolls, and other negative behavior and we'll permanently ban and nuke accounts of anyone who does any of this, ever."
Awesome! No wonder people are migrating. But how capable is Ello of handling even a small percentage of Facebook's users? It's definitely not big yet, but as word spreads, how long before it's also inundated with more users than the abuse team can handle? And how long before Ello's creators decide that ad revenue isn't just desirable, but possibly necessary? As for Sister Roma, she'll continue fighting Facebook's new policies. A protest is scheduled for October 2nd.
Major Internet Companies Express Support for Initiative
23/9/2014- The Anti-Defamation League (ADL) today announced the release of “Best Practices for Responding to Cyberhate,” a new initiative that establishes guideposts for the industry and the Internet community to help prevent the spread of online hate speech. The Best Practices initiative is the outcome of months of discussions and deliberations by an industry Working Group on Cyberhate convened by ADL in an effort to develop a coordinated approach to the growing problem of online hate speech, including anti-Semitism, anti-Muslim bigotry, racism, homophobia, misogyny, xenophobia and other forms of online hate. Members of the Working Group included leading Internet providers, civil society leaders, representatives of the legal community, and academia.
As participants in the Working Group, representatives of Facebook, Google/YouTube, Microsoft, Twitter, and Yahoo have expressed support for ADL’s efforts. They were among those who offered advice to ADL in the formulation of the Best Practices, and the final document embodies some of their own current practices. In conjunction with today’s announcement, these companies are taking new steps to remind their own communities of their policies regarding online hate and how users can respond when they encounter it.
“We challenged ourselves collectively to come up with effective ways to confront online hatred, to educate about its dangers and to encourage individuals and communities to speak out,” said Abraham H. Foxman, ADL National Director and co-author, with Christopher Wolf, of Viral Hate: Containing Its Spread on the Internet. “The Best Practices are not a call for censorship, but rather a recognition that effective strategies are needed to ensure that providers and the wider Internet community work together to address the harmful consequences of online hatred. This is an opportunity for the Internet community to present a united front in the fight against cyberhate.”
“It is our hope the Best Practices will provide useful and important guideposts for all those willing to join in the effort to address the challenge of cyberhate,” said Christopher Wolf and Art Reidel, ADL leaders and co-chairs of the Working Group. “We urge members of the Internet community to express their support for this effort and to publicize their own independent efforts to counter cyberhate. We believe that, if adopted widely, these Best Practices could contribute significantly to countering cyberhate.”
The Best Practices call on providers to:
Take reports about cyberhate seriously, mindful of the fundamental principles of free expression, human dignity, personal safety and respect for the rule of law.
Providers that feature user-generated content should offer users a clear explanation of their approach to evaluating and resolving reports of hateful content, highlighting their relevant terms of service.
Offer user-friendly mechanisms and procedures for reporting hateful content.
Respond to user reports in a timely manner.
Enforce whatever sanctions their terms of service contemplate in a consistent and fair manner.
The Best Practices call on the Internet Community to:
Work together to address the harmful consequences of online hatred.
Identify, implement and/or encourage effective strategies of counter-speech — including direct response; comedy and satire when appropriate; or simply setting the record straight.
Share knowledge and help develop educational materials and programs that encourage critical thinking in both proactive and reactive online activity.
Encourage other interested parties to help raise awareness of the problem of cyberhate and the urgent need to address it.
Welcome new thinking and new initiatives to promote a civil online environment.
ADL has long played a leading role in raising awareness about hate on the Internet and working with major industry providers to address the challenge it poses. In May 2012, the Inter-Parliamentary Coalition for Combating Anti-Semitism (ICCA), an organization comprised of parliamentarians from around the world working to combat resurgent anti-Semitism, asked ADL to convene the Working Group on Cyberhate, including representatives of the Internet industry, civil society, the legal community and academia, with a mandate to develop recommendations for the most effective response to manifestations of hate and bigotry online.
In the coming weeks, ADL and industry leaders will be urging others in the Internet community to join in this effort. A number expressed support for the initiative on its launch today. “Facebook supports ADL’s efforts to address and counter cyberhate, and the best practices outlined today provide valuable ways for all members of the Internet community to engage on this issue,” said Monika Bickert, head of global policy management at Facebook. “We are committed to creating a safe and respectful platform for everyone who uses Facebook.” “Every day, millions of people post content on YouTube, Blogger, and Google+. In order to maintain a safe and vibrant community across our platforms, we offer tools to report hateful content, and act quickly to remove content that violates our policies,” Google said in a statement. “We support the ADL’s continued efforts to combat hatred online.”
“Microsoft is committed to providing a safe and enjoyable online experience for our customers, and to enforcing policies against abuse and harassment on our online services, while continuing to keep freedom of speech and free access to information as top priorities,” said Dan Bross, Senior Director of Corporate Citizenship at Microsoft. “The Best Practices document is a tool that can foster discussion within the community and advance efforts to combat harassment and threats online.” “Twitter supports the ADL’s work to increase tolerance and raise awareness around the difficult issue of online hate,” the company said in a statement. “We encourage the internet community to seek diverse perspectives and keep these best practices in mind when dealing with difficult situations online.” “Yahoo is committed to confronting online hate, educating our users about the dangers and realities, and encouraging our users to flag any hostile language they may see on our platform,” Yahoo said in a statement. “As a member of ADL’s Working Group on Cyberhate, we support ADL’s efforts to promote responsible and respectful behavior online.”
More information on the Best Practices is available on the League’s web site at www.adl.org/cyberhatebestpractices.
© The Anti-Defamation League
Members of Australia's Muslim community have set up a Facebook page to track religious hatred and discrimination.
22/9/2014- Amid increasing anti-Muslim sentiment coupled with anti-terror police raids, a Facebook page has been launched to track Islamophobia in Australia, encouraging the Muslim minority to report attacks against them. "We have been hearing about a recent surge in incidents of Islamophobia but unfortunately there has been no formal register to record the incidents," the page, Islamophobia Register Australia, said in a post seen by OnIslam.net. The page was launched last week, days before one of the biggest anti-terror raids in Australian history, in which 15 people were arrested in north-western Sydney. The page, which has attracted more than two thousand followers, urges Australian Muslims to report incidents by sending it a private message or by emailing it at firstname.lastname@example.org. To submit a report, users must provide details such as their full name, street address, city, state, post code, email address and contact phone number, along with the details of the incident.
Victims also have to select the category of the attack from a list of Islamophobia incident categories provided by the page. Within days of its launch, several Islamophobic attacks had been reported by Facebook users. The anti-Muslim attacks include "a mosque being defaced in Queensland, a senior scholar and member of the Australian National Imams Council detained for over 2.5 hours at Sydney airport, direct threats issued against the Grand Mufti of Australia," the page said, as well as threats against "Lakemba Mosque and Auburn Mosque from anonymous members of the Australian Defence League," "women in hijab verbally abused in the streets of Sydney, at shopping centers and whilst driving," and "countless examples of social media vitriol targeting Muslims." The page has itself been a target of hate messages and Islamophobic posts since its creation on September 16.
Preparing to enact controversial new anti-terror measures, the Australian premier Tony Abbott said that "Australians must accept a reduction in freedom and an increase in security for some time to come". Addressing the parliament on Monday, September 22, Abbott urged Australians to back a shift in "the delicate balance between freedom and security". "I can't promise that hideous events will never take place on Australian soil, but I can promise that we will never stoop to the level of those who hate us and fight evil with evil," Abbott was quoted as saying by The Guardian. Moving away from talk of restricting freedoms, other voices have called for fostering "integration" in the Australian community. "I believe in bringing people of different races, different religions, to this country but once you're here you've got to become part of the mainstream community," former Prime Minister John Howard told the Seven Network.
Premier Colin Barnett has taken a different tack, choosing to assure Muslims in Western Australia that they are welcome in the state. "Australia is a very welcoming country and a very peaceful country," Barnett was quoted as saying by Sky News. "And the vast, vast majority of Muslims living in Australia are peace-loving, hard-working." Muslims, who have been in Australia for more than 200 years, make up 1.7 percent of its 20-million population. In the post-9/11 era, Australian Muslims have been haunted by suspicion and have had their patriotism questioned. A 2007 poll taken by the Issues Deliberation Australia (IDA) think-tank found that Australians broadly see Islam as a threat to the Australian way of life. A recent government report revealed that Muslims are facing deep-seated Islamophobia and race-based treatment like never before.
© On Islam