- 'Holohoax' hashtags and 'Zio' slurs - when will Twitter take anti-Semitism seriously?
- UK/USA: How British anti-racist group infiltrated the Ku Klux Klan online
- USA: Pepe the Frog creator teams with ADL to ‘save’ the image
- USA: Boulder Neo-Nazi Facebook Group Leads to Teen Suicide
- USA: Students Expelled After Neo-Nazi Facebook Group Discovered
- Russian LGBT Teen Support Site Deti-404 Blacklisted
- Preventing Artificial Intelligence Discrimination
- UK: CPS publishes new social media guidance and launches Hate Crime consultation
- India: Cases against four for online hate spread
- German justice minister: AfD uses hate speech online
- Future of 4chan uncertain as controversial site faces financial woes
- UK: New tool to detect and prevent hate speech online
- Far-right Twitter and FB users make secret code to avoid censorship
- Italy: IPI concerned by draft Italian ‘cyber bullying’ bill
- On Twitter, Hate Speech Bounded Only by a Character Limit
- Canada's anti-hate law challenged by man convicted of promoting hate on web
- Ethiopia: How social media is despoiling civility
- Google invites Kenyan anti-gay activist to Web Rangers conference
- UK: Anti-Semitic Abuse ‘Not Clear Violation Of Our Rules On Abuse,’ Twitter Tells Labour NEC Member
- Pakistan: Build people up, don’t tear them down
- Why a German Lawyer Named Mark Zuckerberg in a Hate-Speech Complaint
- German Regulator hits out at Facebook
- Russia steps up trolling attacks on the West, U.S. intel report finds
- Jordan: Online hate speech subject to prosecution - Minister
- Disney Is Working With an Adviser on Potential Twitter Bid
- BY THE NUMBERS: The Twittersphere of the Trolls
- What It’s Like to Fight Online Hate
- USA: Anti-Defamation League Boosting Presence In Silicon Valley
- Should Silicon Valley Really Decide What Is and Isn’t Hate Speech? (opinion)
- USA: Can racist tweets help predict hate crimes?
- US police to scan social media for violence alerts
- FB Plans to Expand Program to Fight Against Online Hate-Speech
- Netherlands: Social media overload: many discrimination complaints go unanswered
- Netherlands: AIVD calls for powers to monitor online chat messaging
- Australia: FB slammed over anti-Semitic 'Jewish remains' photo
- Senior Facebook delegation in Israel to discuss ‘incitement’
- Norway: Facebook censors PM
- The black metal origins of an anti-Muslim meme
- South Africa: End the cyber-war on women (opinion)
- UK: Kick It Out reveals large rise in reported football discrimination on social media
- UK: Crime victims will be able to track their cases online
- India: Increasing pendency of cyber crime cases, claims govt report
- South Africa: Laws against online hate speech inadequate, Parliament hears
- UPDATED: Facebook explains flip-flop on Jewish genocide post
- Microsoft Launches Tool to Report Hate Speech on Its Services
- Europe Net Neutrality Win Continues Global String of Victories for the Open Internet
- USA: White Supremacists and Neo-Nazis Use Twitter With ‘Impunity’
- USA: Click’d: Confronting Twitter’s harassment problem
- US authorities investigate cyber-attack against Ghostbusters actress
- Germany: How teenager used the dark web to buy gun for Munich mass murder
- The dark web is a dangerous new frontier for those who try to keep terrorists at bay
- Canada: “Kill All Jews Now” is an Acceptable Message, FB Says. Or Not?
- Ireland: Black woman inundated with racist abuse while tweeting for @Ireland
- Finland: Court throws out motion to close down anti-immigrant website MV-Lehti
- How Trolls Are Ruining the Internet
- Twitter Announces Tools That Seem Intended To Curb Harassment
- Twitter suspends 360,000 accounts for terrorist/hate ties
- Skype and WhatsApp face tougher EU privacy rules
- UK: 289 Islamophobic tweets were sent every hour in July
- Scotland Yard to use civilian volunteer ‘thought police’ to help combat social media hate crime
- UK: The 'yellowface' Snapchat filter is nothing new
- Pakistan: Cyber crime bill passed in National Assembly
- Brazilian Olympians face organized racist attacks online
- Facebook's walls of hate: Sickening abuse plastered online tells minorities to LEAVE UK
- Europe's Radical Right Is Flourishing On Social Media
- USA: Neo-Nazi Hacker Distributes Racist Fliers Calling for the Death of Children
- Instagram Will Feature a Hate Filter to Stop Harassment
- Anti-Semitic hatred is now part of daily life for Jews online
- UK: Far right targets Muslim women in Facebook hate campaigns
A British government report has branded online attacks on Jews via social media 'deplorable'
19/10/2016- Each week my newspaper, Jewish News, shines a light on social media's darkest corners, publishing the latest anti-Semitic and anti-Zionist Tweets in a section called 'Thanks For Sharing'. Recent charm school graduates to feature include @MisGrace ["Israeli Jews harvest Palestinian kids' organs"] and @AJCTmusic ["The Holocaust is a massive lie, indoctrinated by foreign parasites"]. We provide this public service because platforms like Twitter and Facebook don't seem to give a flying fig about the fact they are wining and dining this sewage. Rather than be a gallant little Dutch boy, holding back the tide with finger in dam, these multi-billion dollar businesses choose to accommodate hashtags like #Holohoax, #filthyjewbitch and #Hitlerwasright.
It's long been open season on Jewish parliamentarians, with Labour MPs Luciana Berger [2,500 hate tweets in three days] and Ruth Smeeth [25,000 since June] among the abused. You don't even have to be a public figure to be a target. My own fan mail includes 'You look like any Jew you c***' and 'I'll put you in a f****** ash tray'. Meanwhile, over on Facebook, random kooks from intolerant Islamists to right-on lefties – with nothing in common aside from their fear of Jewish plans for world domination (ah, those) – whip each other up into a frenzy. [See 'Zionist Israel A Threat To Mankind', 36,206 members; 'Israeli Plots Against Islam', 6,512]. It's enough to make you hanker for the good old days when Jew-haters looked like Jew-haters, swam like Jew-haters and quacked like Jew-haters, without the obfuscation and hidden agendas [Don't kid yourself that the slur 'Zio' has anything to do with Israel – it's shorthand for 'Jew']. You knew where you stood with a good old sieg heil and swastika. Simpler times.
No doubt inspired by 'Thanks For Sharing', this week a Home Affairs Select Committee report on anti-Semitism pulled no punches in branding Twitter "deplorable" for allowing itself to become an "inert host for vast swathes of anti-Semitic hate speech and abuse", despite having the "necessary resources and technical capability" to address the problem. The cross-party group said it was "disgraceful" that Jews using social media were targeted by "appalling" levels of abuse, and called on internet bosses to act "proactively" to identify abusive users rather than rely on victims to report them. It also called for more investment in enforcement. Twitter currently employs one moderator for every 130,000 tweets.
Wise words indeed.
These recommendations should not be seen as censorship. Free speech, after all, is priceless. Mess with it at your peril. Criminalise idiots? Where does that end? Any fool should be free to say fish ride bikes and Donald Trump should lead the free world. Others react accordingly. We should all question, confront, and, yes, offend. But even democratic privilege has limits and free speech ends where race hate, threats and incitement begin. Just ask Luciana Berger, Ruth Smeeth or any proud Jew, for whom intimidation and abuse is simply the price they pay for logging on.
© The International Business Times - UK
Investigation by Hope Not Hate finds police officers among members of Loyal White Knights, names of expelled ‘race traitors’ and links to violence
15/10/2016- One of the most notorious Ku Klux Klan groups is stepping up attempts to ignite race war across the US with a call to arms against black people and violent support of the White Lives Matter campaign. An inside account from within the Loyal White Knights of the KKK also reveals that the group is linked to stabbings of anti-fascists, Holocaust denial, threats to attack gay men and extreme anti-Black Lives Matter propaganda. During a 15-month online infiltration of the Klan, British anti-racist group Hope Not Hate obtained the membership list of what is described as the largest KKK faction, a list of 270 individuals including police officers. (The group claims it has 3,000 members.) Most hailed from southern states such as Louisiana, Mississippi, Alabama, Georgia and North Carolina, although there was a considerable cohort from the Midwest, the east coast and California.
Among them is a 28-year-old British man from Suffolk who claims to be a member of the Knights Templar, an “interdenominational association of active Christians”. Another is a 44-year-old Frenchman based in Marseille who recently uploaded a series of anti-Muslim pictures to a secret Klan chatroom. Investigators also obtained a list of members expelled from the Loyal White Knights for so-called violations, ranging from drug use to sleeping with “a Jew whore” or a Mexican, watching Asian porn or having a “mixed child”, which made them a “race traitor”. Based in North Carolina, the Loyal White Knights was founded in 2012 by Chris Barker, a far-right supporter who last year was linked to a plot by a New York white supremacist convicted of conspiring to use a remote-controlled radiation device he called “Hiroshima on a light switch” to harm Muslims.
Barker is a contentious figure among Klansmen, partly because of his connections to neo-Nazis. He recently became part of the Aryan Nationalist Alliance, an extreme coalition of white nationalist groups, including notorious US organisations such as Matthew Heimbach’s Traditionalist Worker Party. Heimbach, who has been dubbed the “face of a new generation of white nationalists” by critics and has advocated racial segregation, was banned from entering the UK last year by Theresa May, who was then home secretary. Hope Not Hate’s investigation found considerable evidence that the Loyal White Knights retains its desire for extreme racist violence, seeking to exploit the anti-Muslim, anti-immigrant climate fostered by Donald Trump. “Once inside, we came across some of the worst racism we have ever encountered and learned about their dangerous racist ideology, witnessing a culture which encouraged extreme violence,” said one of the infiltrators.
It also found that the Klan is actively involved in “Knight Rides”, where members drive around communities at night and throw white supremacist leaflets on to the lawns of black people’s homes. “They organise White Lives Matter demonstrations where they get ‘tooled up’ and also Knight Rides that hark back to when members rode horses through towns at night, to terrify communities,” said an investigator. In February this year, members of Barker’s group held an anti-immigration demonstration in Anaheim, California, during which they held White Lives Matter signs. The protest erupted into violence, with three people stabbed and 13 others arrested. Barker then emailed the infiltrator and wrote: “We just had a fight between our members and communist [sic] our members stabbed 3 in California.” Five KKK members were arrested following the brawl but later released as police said they had evidence the KKK members acted in self-defence. Barker, who calls himself the imperial wizard of the Loyal White Knights, claimed his members were holding a peaceful anti-immigration demonstration. “If we’re attacked, we will attack back,” said Barker, who did not attend the rally.
Eventually, Hope Not Hate investigators were invited into the closed sections of the group’s website, where they found members circulating images of themselves posing with firearms or holding a hangman’s noose – a symbol linked to the lynching of black people – with one mocked-up picture showing President Obama apparently being hanged. Jokes and memes about hanging and running over black people were also posted. Investigators were sent magazines and leaflets, some of them deeply antisemitic. One image depicted a hooded figure in front of the confederate flag with the words: “Help save our race; everything we cherish is under assault by ZOG” – an acronym for zionist occupation government, which is an antisemitic conspiracy theory that claims Jews secretly control world power.
During the undercover operation Barker, a Holocaust denier, wrote: “They said there [sic] goal was to destroy the white race. Here they are doing just that – by brainwashing our people through the media.” The most extreme leaflet encouraged violence against gay men, stating: “Stop Aids: support gay bashing,” and “Homosexual men and their sexual acts are disgusting and inhuman.” The same leaflet also espouses racism, adding: “Ban non-white immigration. Outlaw Haitians – deport mud people.” The extremism of the modern Klan movement may appear to be undimmed, but its membership has declined rapidly over recent decades. During the 1920s, the organisation’s four million members were able to stage huge demonstrations in Washington. The Southern Poverty Law Center estimates that there are between 5,000 and 8,000 Klan members active at the moment, split across dozens of groups.
In their responses to questioning about the findings of the investigation, Barker and his wife Amanda referred to the Holocaust as a “money-making scam”. They added: “Our group does not call for the killing of black people, but we do tell our members to arm and protect themselves.” Barker’s statement also defended the group’s homophobic stance.
© The Guardian.
14/10/2016- The creator of Pepe the Frog, a cartoon that has become a symbol frequently circulated by anti-Semites online, is joining forces with the Anti-Defamation League to reclaim the image as a “force for good.” Matt Furie will create a series of positive Pepe internet memes that the ADL will promote through its social media channels with the hashtag #SavePepe, the organization announced in a news release Friday. The character, which Furie created for an online comic in the mid-2000s, has been co-opted in recent months by white nationalists associated with the alt-right. “Pepe was never intended to be used as a symbol of hate,” said ADL CEO Jonathan Greenblatt. “The sad frog was meant to be just that, a sad frog. We are going to work with Matt and his community of artists to reclaim Pepe so that he might be used as a force for good, or at the very least to help educate people about the dangers of prejudice and bigotry.”
Furie is also scheduled to speak at the ADL’s inaugural “Never is Now” summit on anti-Semitism in New York City next month. “It’s completely insane that Pepe has been labeled a symbol of hate, and that racists and anti-Semites are using a once peaceful frog-dude from my comic book as an icon of hate,” Furie said in the ADL release. “It’s a nightmare, and the only thing I can do is see this as an opportunity to speak out against hate.” Images of Pepe, often depicting him in Nazi garb or with a Hitler-style mustache, are frequently included in anti-Semitic and other attacks on Twitter. Donald Trump Jr. stirred up a controversy last month by posting a photoshopped image of Pepe alongside himself and various Trump advisers. The ADL added the frog to its online hate database last month.
© JTA News.
13/10/2016- After a Boulder, Colorado high school student took his own life, police found a neo-Nazi chat group that involved him and more than a dozen other students. “You can hang Jews on trees, shoot them right in the knees. Gas as many as you please,” wrote one commenter on the Facebook conversation, titled “4th Reich Official Group Chat,” a reference to Nazi Germany’s Third Reich. According to Boulder’s Daily Camera news site, the students also celebrated “white power” and called for the murder of African-Americans. Police learned of the group after the September suicide of its ringleader, who according to police reports attended Boulder Preparatory High School and took his own life “to show his allegiance to the Nazi party and the killing of Jewish people.” It remains unclear if he was himself Jewish, and none of the students involved have been identified because they were minors.
According to law enforcement, up to 15 students, attending different high schools in the area, participated in the group chat. As of the Daily Camera’s Tuesday report, at least five of them had been expelled from school, though the police have announced that no charges will be filed. The police learned of the group from a concerned parent. This presidential election has seen the growing popularity of the “alt-right,” the contemporary white supremacist movement that is backing Donald Trump and has incorporated anti-Semitic appeals into its rhetoric. Much of the movement’s energy comes from online, where hate sites like the neo-Nazi Daily Stormer draw millions of unique visitors each month. Anti-Defamation League Regional Director Scott Levin told the Daily Camera he thought the schools and police had adopted the right course of action. Nonetheless, he said, “it’s very disheartening when you hear this type of thing is taking place.”
12/10/2016- Five Boulder Preparatory High School students have been expelled after a friend’s suicide revealed a disturbing neo-Nazi Facebook chat group involving 15 high school students around Boulder. Boulder police say the students, from six schools in the area including Boulder Prep and Boulder High School, openly discussed executing Jews and African Americans in what they called the “4th Reich’s Official Group Chat” on Facebook. “I think they were just joking around and took it too far,” says Boulder High student Hailey Andresen. Police say the leader of the group, a Boulder Prep student, committed suicide last month to show his allegiance to the Nazi party. One of the students who claims to be part of the chat group told CBS4 the suicide was instead related only to depression.
In a police report released to CBS4, he stated in the chat group shortly before his death, “I have crippling depression but I shall cure it by killing Jews.” Other comments in the group include “White Power,” and “DEATH TO ALL JEWS.” They also used derogatory terms to describe executing African Americans. Andresen has friends who belonged to the group and was surprised to learn about their involvement. “The things that I saw on Facebook, I would not expect them to say that in real life,” Andresen said. The Boulder Valley School District says students involved in the online group faced disciplinary action: “Boulder Police Department initiated an investigation of comments on a social media site that involved some BVSD high school students. Boulder Police did not initiate any criminal charges at the conclusion of the investigation.
“Boulder Valley School District administered appropriate responsive action with the students involved. Any information involving interaction with these students regarding this matter is confidential under the Family Education Rights and Privacy Act (FERPA). “BVSD has a long history of teaching and modeling social justice, school safety, and equity for all students. Our district remains committed to this important work.” Andresen says racial slurs around Boulder High aren’t that uncommon. “Ninety-five percent of the kids here definitely yell out slurs and jokingly say racist things,” Andresen said. Boulder police say there was not enough evidence to support criminal action against the 15 students involved.
In a statement to CBS4, the department said: “As offensive and repugnant as the online conversation was, the law does not allow for criminal charges simply because we disagree with the content. In evaluating this case, Boulder police had to determine whether there were direct threats against any specific person. There was no evidence of this. Instead, law enforcement worked collaboratively with the school district, which took action consistent with its policies and standards. The city upholds the values of inclusivity and diversity and supports efforts by the district, the specific schools involved and the community as a whole to demonstrate that these views are neither widespread nor acceptable.” Boulder police said the investigation has been closed.
© CBS Denver
A Russian website supporting LGBT teenagers has been blacklisted by the state media watchdog, Roskomnadzor.
11/10/2016- Founded in 2013, online project Deti-404 provided help and support for young people in Russia who were questioning their sexuality. The site also published letters from LGBT teenagers as they documented the struggles and homophobia they faced in their everyday lives. The site has repeatedly attracted the attention of Russian authorities, who claim that the project illegally promotes “non-traditional relationships” to children. Writing on her VKontakte social media page, site founder Yelena Klimova said that the project had been found guilty of “spreading banned information,” but that the court had not explained the decision in detail. “Most likely, the site will be suspended in Russia in the near future,” she wrote. “We shall keep working.” The decision was originally made by Siberia's Barnaul District Central Court in March 2016, but Roskomnadzor only contacted Klimova in relation to the case on Monday, she said. Roskomnadzor previously tried to ban the site in February 2015 for “promoting suicide.” The project often features letters from teenagers who consider ending their lives after suffering homophobic abuse.
© The Moscow Times
Google Outlines A Strategy For 'Equal Opportunity By Design'
10/10/2016- Artificial intelligence can be just as biased as human beings, which is why experts are trying to prevent discrimination in machine learning. In a new paper, three Google researchers note that there is no existing way to ensure—as the White House calls it—“equal opportunity by design,” but they have an idea. “Despite the need, a vetted methodology in machine learning for preventing this kind of discrimination based on sensitive attributes has been lacking,” wrote Moritz Hardt, a research scientist with the Google Brain Team and co-author of the paper, in a blog post.
Hardt considers two seemingly intuitive approaches, “fairness through unawareness” and “demographic parity,” but dismisses them for their respective loopholes. Learning from the shortcomings of those methods, the team came up with a new approach. The core idea is that membership in a group defined by a “sensitive attribute”—race, gender, disability, or religion—should not change a person’s prospects: “individuals who qualify for a desirable outcome should have an equal chance of being correctly classified for this outcome.” “We’ve proposed a methodology for measuring and preventing discrimination based on a set of sensitive attributes,” wrote Hardt, whose co-authors are his colleagues Eric Price and Nathan Srebro. “Our framework not only helps to scrutinize predictors to discover possible concerns. We also show how to adjust a given predictor so as to strike a better tradeoff between classification accuracy and non-discrimination if need be.” The researchers call this approach equality of opportunity.
“When implemented, our framework also improves incentives by shifting the cost of poor predictions from the individual to the decision maker, who can respond by investing in improved prediction accuracy,” wrote Hardt. “Perfect predictors always satisfy our notion, showing that the central goal of building more accurate predictors is well aligned with the goal of avoiding discrimination.” The equality of opportunity principle alone cannot solve discrimination in machine learning, as Google calls for a multidisciplinary approach. “This work is just one step in a long chain of research,” says Google. “Optimizing for equal opportunity is just one of many tools that can be used to improve machine learning systems—and mathematics alone is unlikely to lead to the best solutions. Attacking discrimination in machine learning will ultimately require a careful, multidisciplinary approach.”
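The equality-of-opportunity criterion described above can be illustrated with a small, self-contained sketch. This is an illustrative toy, not the Google team's implementation: the scoring data, group names, helper functions and the `target_tpr` parameter are all invented for the example. The idea is that, among people who truly qualify for an outcome (label 1), every group should be accepted at the same rate, which a decision maker can approximate by choosing a separate score threshold per group:

```python
# Toy illustration of "equality of opportunity": equalise true-positive
# rates across groups by picking a per-group decision threshold.

def true_positive_rate(scores, labels, threshold):
    """Fraction of truly qualified individuals (label 1) scoring >= threshold."""
    qualified = [s for s, y in zip(scores, labels) if y == 1]
    if not qualified:
        return 0.0
    return sum(s >= threshold for s in qualified) / len(qualified)

def equal_opportunity_thresholds(groups, target_tpr):
    """For each group, keep the strictest threshold whose TPR meets target_tpr.

    groups: dict mapping group name -> (scores, labels).
    Returns a dict mapping group name -> chosen threshold.
    """
    thresholds = {}
    for name, (scores, labels) in groups.items():
        best = 0.0  # fall back to accepting everyone if no threshold suffices
        for t in sorted(set(scores), reverse=True):
            if true_positive_rate(scores, labels, t) >= target_tpr:
                best = t
                break
        thresholds[name] = best
    return thresholds

# Invented data: a scorer that systematically under-rates group B's
# qualified members relative to group A's.
groups = {
    "A": ([0.9, 0.8, 0.7, 0.4, 0.2], [1, 1, 1, 0, 0]),
    "B": ([0.6, 0.5, 0.4, 0.3, 0.1], [1, 1, 1, 0, 0]),
}
thresholds = equal_opportunity_thresholds(groups, target_tpr=1.0)
```

On this toy data the procedure assigns group B a lower threshold than group A, compensating for the scorer's bias so that each group's qualified members are accepted at the same rate. This mirrors the trade-off the researchers describe: the cost of a poorly calibrated predictor falls on the decision maker, who can respond by improving the scores rather than excluding a group.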
© The International Business Times
10/10/2016- New Crown Prosecution Service guidance has set out the range of offences for which social media users could face prosecution. The guidance, published today (10 October), will be used to inform decisions on whether criminal charges should be pursued. Released during Hate Crime Awareness Week, it has also been updated in order to help prosecutors identify and effectively prosecute hate crime on social media. Today also sees the launch of CPS Public Policy Statements on Hate Crime which will now be put to a public consultation. These will focus on crimes against disabled people, racial and religious and homophobic and transphobic hate crime. The new social media guidelines for prosecutors make clear that those who encourage others to participate in online harassment campaigns - known as 'virtual mobbing' - can face charges of encouraging an offence under the Serious Crime Act 2007.
Examples of potentially criminal behaviour include making available personal information, for example a home address or bank details - a practice known as "doxxing" - or creating a derogatory hashtag to encourage harassment of victims. The social media guidance, which is informed by a public consultation and signed off by the Director of Public Prosecutions (DPP), Alison Saunders, also includes new sections on Violence against Women and Girls (VaWG), Hate Crime and vulnerable victims. The DPP said: "Social media can be used to educate, entertain and enlighten but there are also people who use it to bully, intimidate and harass. "Ignorance is not a defence and perceived anonymity is not an escape. Those who commit these acts, or encourage others to do the same, can and will be prosecuted."
The new guidance also alerts prosecutors to cyber-enabled VaWG and hate crime offences. These can include 'baiting', the practice of humiliating a person online by labelling them as sexually promiscuous or posting 'photoshopped' images of people on social media platforms. The guidance provides information for prosecutors considering cases of 'sexting' that involve images taken of under-18-year-olds. It advises that it would not usually be in the public interest to prosecute the consensual sharing of an image between two children of a similar age in a relationship. A prosecution may be appropriate in other scenarios, however, such as those involving exploitation, grooming or bullying. The DPP added: "This month marks the 30th anniversary of the CPS and this latest guidance shows how much the nature of our prosecutions has changed in that time. We are constantly working to ensure that our guidance stays relevant to modern crime and consultations are a crucial part of that process.
"We welcome the comments and opinions of communities and those affected by hate crimes to help us inform the way we deal with such cases in the future. "Our latest Hate Crime Report showed that in 2015-16 more hate crime prosecutions were completed than ever before. More than four in five prosecuted hate crimes result in a conviction; with over 73 per cent guilty pleas, which is good news for victims. We have undertaken considerable steps to improve our prosecution of hate crime and we are committed to sustaining these efforts."
© The Crown Prosecution Service
9/10/2016- The Cyber Crime Cell Police in Coimbatore have registered three cases against four persons over the last few days on charges of spreading hatred and disharmony between communities and denigrating leaders. In the latest instance on October 7, police registered a case against Pon Sankar and Padmas on charges of posting messages denigrating Communist Party of India (Marxist) leader U. Vasuki and also spreading hatred between communities. Provisions of the Indian Penal Code and Women Harassment Act were invoked in the case that was registered on a complaint from All India Democratic Women's Association's Coimbatore Secretary M. Radhika.
In two other complaints registered on September 28, the police said that following the murder of Hindu Munnani leader C. Sasikumar on September 22, Omkar Balaji and Raifuddin had been posting comments on a social networking site that threatened to disrupt peace in Coimbatore and promote communal enmity. The police said that they had registered the case based on a complaint from Sub-Inspector K. Prema, who had been monitoring social networking sites. The police invoked Sections 153, 153 (A) (1) and a few others, on charges that the two promoted communal hatred. The police were yet to arrest any of the accused, though.
© The Hindu
German Justice Minister Heiko Maas has said that the AfD party exploits online radicalization for political gain. He also called on social networks to take it upon themselves to more seriously police online hate speech.
5/10/2016- In an interview with the German newspaper "Handelsblatt" on Wednesday, German Justice Minister Heiko Maas said the Alternative for Germany (AfD) party "takes advantage of radicalization online and elsewhere for its political purposes." "Catering to xenophobic sentiment is part of the AfD's approach," Maas told the paper when asked if social media played a role in the AfD benefitting from the ongoing debate surrounding the refugee crisis in Germany. In the interview, which focused on the broader issue of how social networks such as Facebook and Twitter should deal with online hate speech in Germany, Maas went on to say that the AfD posts xenophobic statements online only to walk them back later. By the time the party starts qualifying its comments, "the oil has already been added to the fire," Maas said. To counteract this effect, Maas said, ignoring the AfD would not do the job. Instead, he called for more direct tactics to factually counteract the AfD's message. Maas is a member of the Social Democrats (SPD), the junior partner in Chancellor Angela Merkel's coalition.
Fighting with facts
With regard to broader policies, such as Germany's stance on refugees, Maas said it was a mistake to only search for party and parliamentary consensus without considering how the reasoning and facts behind a decision will reach the public. "We need to do a better job of explaining the facts," Maas said, "because there's a lot of stuff being said online that simply isn't true. I admit it's challenging to argue against firmly held prejudices, but we don't have a choice." Maas has been a vocal campaigner in recent months, calling on social media giants to censor user comments they deem inappropriate. The justice department had previously formed a task force with Facebook, Twitter, and Google to address online hate crime, and Maas said they had recently taken a look at the impact. He said that when an online watchdog, such as Germany's jugendschutz.net (Jugendschutz translates as youth protection) reports a hateful post online, the comment is deleted relatively quickly. But if a normal user reports hate speech, only one percent of Tweets and 46 percent of Facebook posts are deleted. "That is of course too little," Maas said.
Voluntary compliance vs. regulation
Maas said online platforms needed to take their customers more seriously. He also warned that Germany would take action if the task force's findings - slated for next year - show companies are not fulfilling their obligations. Creating laws that forced companies such as Facebook and Twitter to be more transparent when it came to online hate speech was an option, Maas said, but he added that the companies had the opportunity to take the initiative themselves now. "It is in no company's interest that its platform is abused to commit crimes," Maas said at the end of the interview.
© Deutsche Welle
The anonymous message board represents the darkest corners of the internet, but users aren’t ready to say goodbye
5/10/2016- The anonymous message-board site 4chan has come to represent the darkest corners of internet subculture, rife with misogyny, bad taste and the politically incorrect humor of the alt-right. Now it appears to be in financial trouble, according to the site’s new owner, Hiroyuki Nishimura, who said on Sunday that the site can no longer afford “infrastructure costs, network fee, servers cost and CDN [servers that help distribute high-bandwidth files such as video]”. The post begins: “Thank you for thinking about 4chan. We had tried to keep 4chan as is. But I failed. I am sincerely sorry.” Nishimura outlined three options for the future of the site: halving traffic costs by limiting upload sizes and closing some boards, adding many more ads including pop-up ads, or adding more paid-for features and “4chan pass” users.
An unlikely savior for the site may have already emerged in the form of Martin Shkreli, the controversial former CEO of Turing Pharmaceuticals, who rose to fame after his company bought the patent to an HIV drug and raised its price from $13.50 to $750 per pill, causing mass outrage. Shkreli announced on Twitter that he was “open to joining the board of directors of 4chan”. He then reached out to Nishimura directly, who responded: “I have replied your DM. Thank you for supporting 4chan @MartinSkreli.”
4chan was founded in 2003 by an American schoolboy, Chris Poole, as an English-language version of popular Japanese image-sharing board 2chan, and split into a number of sub-category boards based on interest, many of the most popular ones pornographic. It was sold by Poole to Nishimura, the founder of 2chan, in 2015. Nishimura did not respond to requests by the Guardian to comment. The site’s influence in shaping the identity and culture of the internet as we know it today is vast. It is the internet’s sweaty engine-room. It has birthed global movements: the hacktivist group Anonymous originated here, and the group is named for the “Anonymous” tag attached to 4chan posts.
More recently its anonymous message boards, especially the far-right leaning politics board /pol/, gave early succor to GamerGate and its spawn, the so-called alt-right movement, which emerged in 2016 to ally itself with Donald Trump’s presidential campaign. It produced Pepe, the frog meme that was picked up by white supremacists and trolls and was later condemned by the Southern Poverty Law Center as a hate symbol. Pepe is by no means the only meme 4chan has produced. Rickrolling (the practice of tricking someone into clicking on a link leading to a video of Rick Astley’s Never Gonna Give You Up, leading to nearly 250m views on YouTube and an unlikely revival of Astley’s career), LOLcats, and innumerable other memes, slang and inside jokes originated here.
It is the internet’s id, a place where anonymity runs free in its purest form. The true nature of mankind can be glimpsed there, in all its horror and glory and depravity.
Predictably, the response to Nishimura’s message on the site was mixed. Posters from some boards – many of which have loyal, almost tribal user-bases – called for other boards to be closed. One suggestion was to close /b/, the wildly popular random topic board, to which other users expressed immediate worry that /b/ users would spread to other boards. “Oh god”, one posted. “HOW MANY PORN BOARDS DO WE NEED? NOT THIS MANY” read another post. Some heatedly discussed the perils of pop-up advertising, while a few suggested merchandising as a way to solve the site’s revenue problems. Others just seemed worried. “Please don’t fuck this website up for us,” one user plaintively posted. Another wrote: “Hiro please. Don’t ruin this for us. This is our only home.”
© The Guardian.
4/10/2016- A browser plugin that detects and prevents hate speech on platforms such as Twitter and Facebook has been selected as the winner of this year’s #peacehack, a competition designed to generate innovative and practical solutions to conflicts around the world. The hackathon, organised by the peacebuilding charity International Alert, took place in London over the weekend of 1-2 October 2016 and focused on tackling online hate speech in all its forms, from Islamophobia to cyber bullying. The winning product, titled Hate speech blocker, works by detecting messages that contain hateful language and flagging a warning to users through a pop-up window. The plugin is now available for use under the name ‘Hate Free’ in the Google Chrome store. A demo of the plugin in action can also be viewed here.
Dan Marsh, Head of Technology at International Alert, said:
“The winning idea was chosen for its versatility, as it can be effective on any social media site, online forum or discussion board. The judges were also excited by its potential use as an educational tool in schools, colleges and libraries. Overall, it is a very practical solution to a complex issue”. The decision to focus #peacehack 2016 on hate speech came in the wake of reports that social media and technology are being increasingly used to bully and stir up hatred, in the UK and beyond. A new study by the European Commission against Racism and Intolerance (ECRI), published today, noted that online hate speech in the UK had soared and was linked to a rise in violence.
“The solution to hate speech has to be more holistic than policing, banning or repressing,” said Mana Farooghi, who runs International Alert’s project aimed at tackling Islamophobia in schools. “It has to involve more responsible practices, ones that re-introduce nuance and encourage respectful conversations. This is where technology can play a unique role.”
Farooghi was joined by a panel of judges including:
• Peter Barron, Google’s VP Communications and Public Affairs for Europe, Middle East and Africa;
• Dr Sue Black, award-winning computer scientist, academic and social entrepreneur;
• Georgiy Kassabli, Software Engineer at Facebook;
• Pupils from a secondary school in Lancashire working on International Alert’s project that aims to train Muslim and non-Muslim young people on how to tackle Islamophobia in schools.
A total of 10 teams participated in the competition.
Dr Sue Black said:
“The #peacehack competition is so inspirational! It clearly demonstrates that technology can be harnessed to make a positive difference in the world, improving the lives of those who are vulnerable to harassment, isolation and bullying.” The hackathon took place as part of the annual Talking Peace Festival at Google Campus in Tech City, east London. #Peacehack launched as a small London-based initiative in 2014 and has since gone global, with local events held in Beirut (Lebanon), Washington DC (USA), Colombo (Sri Lanka), The Hague (Netherlands), Zurich (Switzerland) and Manila (Philippines) and past themes including countering violent extremism.
Athens (Greece) is next on the list. Find out more at: www.peacehack.io
© Techcity News
Communities aim to use apparently innocent words like Google, Skittle and Skype as substitutes for racist slurs
3/10/2016- People are using a secret code to discuss the far-right without being censored by social networks. An entire new language has developed online that attempts to facilitate racist discussions that go unnoticed by the automated tools that are usually used to block them out. And by making that same language go mainstream, the far-right internet users hope that they can damage companies by associating them with racist slang. Twitter users and those on other networks are attempting to use a whole range of words – like Google, Skype and Skittle – in place of traditional racist slurs. The code appears partly to be intent on hiding the messages from the view of automated monitoring by the networks themselves. Since the words used are so apparently innocent and commonly used, it would be next to impossible for any network to actually isolate the words themselves.
Some of the words appear to be connected to previous racist discourse – the word “skittle” to mean someone who is Muslim or Arab appears to be a reference to the idea, referenced by a recent Donald Trump Jr tweet, that refugees from predominantly Muslim countries can be compared to sweets. In fact, many of the users appear to reference Mr Trump in the recent tweets, though none of them have actually been used or endorsed by the campaign. “Google” doesn’t appear to have come to life as a codeword so much as the opposite: a move by 4chan users to intentionally associate the word with racism. That emerged during what people called “Operation Google” – by using the name of the company as if it were a slang word for black people, users hoped to encourage the search engine to ban its own name. That was launched in response to Google’s Jigsaw, which uses AI technology to stop harassment and abuse online. Given that the system was powered by artificial intelligence, users pointed out, it would be possible to trick it into making false associations so long as words were used in the right context.
© The Independent
Changes introduced by Chamber of Deputies would compel removal of wide range of content without judicial oversight
3/10/2016- The International Press Institute (IPI) today expressed concern over draft Italian cyber bullying legislation that would allow purported victims to secure the removal of offensive content without judicial oversight. Under the bill, passed by the Chamber of Deputies last month, online service providers – including web sites, social media networks and instant messaging platforms – would be required to remove content reported as cyber bullying within 24 hours or face the possibility of stiff financial penalties from Italy’s data protection authority. The draft legislation defines “cyber bullying” in broad terms. In addition to targeting threats, incitement to self-harm, blackmail and “aggression or repeated harassment” intended to cause “anxiety, fear, isolation or marginalisation”, the definition includes “insults or ridicule” related to a person’s race, language, religion, sexual orientation, political opinion, physical appearance or personal and social situation.
Also subject to removal are any forms of digital media “detrimental to the honour, dignity and reputation of the victim”. The Italian Senate unanimously passed a previous version of the measure in May. That version, however, was exclusively aimed at protecting victims of cyber harassment under the age of 14. On its final working day in July before the summer holidays, the Chamber of Deputies introduced sweeping changes to the bill, most prominently a provision granting any person, not just minors, the right to demand content removal. But the Chamber also vastly expanded the scope of potentially offensive content and shortened the response time required of Internet service providers, in addition to introducing tougher criminal penalties for online stalking.
IPI Director of Press Freedom Programmes Scott Griffen said the Chamber of Deputies appeared to have overshot its stated aim of combating cyber bullying. “Given the vast numbers of journalists subjected to campaigns of vicious online abuse, we’re sympathetic to efforts aiming to protect Internet users from harassment,” he commented. “But unfortunately, this bill, which began as a focused and commendable effort to protect children, now acts as an invitation for adults to suppress unwanted speech. The notion that any person can compel the removal of such a broad range of subjectively offensive content – which could easily be an acerbic blog post, a sarcastic WhatsApp message or a strongly worded piece of online commentary – without proper judicial oversight is a dangerous proposition.”
Griffen added: “In messily conflating various issues – cyber bullying, hate speech and defamation in particular – the Chamber’s amendments have produced a draft law that is in serious breach of international standards and best practices in the regulation of freedom of expression online. The bill’s expansive concept of ‘insult’ and its references to honour and reputation offer a back door to suppressing unwanted or inconvenient content alleged to be defamatory.” He noted that the 2011 Joint Declaration on Freedom of Expression and Internet states that ISPs and other online intermediaries “should not be subject to extrajudicial content takedown rules which fail to provide sufficient protection for freedom of expression”.
“The Chamber’s version of the bill clearly provides no such sufficient protection for freedom of expression, a term that isn’t even mentioned in the text,” Griffen observed. “As a consequence of the increased liability on them, which ignores self-regulatory policies that many platforms have already adopted, ISPs will be encouraged to censor content that would otherwise be protected in a court of law.” Griffen said IPI urged the Senate, which must now approve the Chamber’s changes, to “reject the bill in its current form and replace it with a tailored and proportionate response to the phenomenon of cyber bullying”.
Broad range of criticism
Critics have accused the Chamber, which approved the bill on Sept. 20 by a vote of 242-73, with 48 abstentions, of deliberately distorting a measure intended to protect the privacy and well-being of children online. “The government has taken advantage of the law on cyber bullying to attack freedom on the web,” MP Vittorio Ferraresi, whose opposition Five Star Movement largely voted against the bill, told the free expression group Ossigeno Informazione in a statement later provided to IPI. Fulvio Sarzana, an Italian attorney and member of a group of lawyers and academics that unsuccessfully sought to counter the Chamber’s amendments, echoed that view. “Parliament is trying to pass this bill in order to have an instrument to prevent criticism,” he told IPI in a telephone interview.
Backers of the Senate’s original measure – including the Senator who introduced it, Elena Ferrara, as well as groups such as Save the Children Italy – have expressed concern at the vastly increased scope and suggested the changes would dilute the fight against child cyber bullying. The debate in Italy comes as lawmakers in other EU countries have sought to balance the need to combat cyber abuse with considerations for free expression. Recent proposals in Sweden and Austria, for instance, have included provisions strengthening criminal defamation laws, despite international calls to remove defamation from criminal law. Observers suggest the Italian Senate is unlikely to attempt to amend the bill again; if it does not, it will have to decide to either pass it in its current form or let it expire. A date for a vote has not yet been officially announced.
© The International Press Institute
By Jim Rutenberg
2/10/2016- If you go by what some Twitter users have to say, it’s a wonder I can string together a sentence. I don’t know how I ever manage to get myself to the office given what a “dumb ass” I am — a Jew, no less, and someone who soils his pants out of fear of a Trump presidency. And if you don’t believe that last bit, someone using a pseudonymous Twitter account was kind enough to provide a graphic photograph of the supposed soiling, but not his or her actual name, because it’s just so much easier to hurl bile while cowering behind anonymity. Then again, I don’t know what it’s like to be really savaged by Twitter. No one has threatened to rape me or kill me (unless being advised to kill myself counts). No one has relegated me to a gas chamber. And no one has hit me with anything like the sustained racist and sexist barrage that forced the “Saturday Night Live” and “Ghostbusters” star Leslie Jones to temporarily leave Twitter in disgust. Now that Twitter is contemplating putting itself up for sale, we can only wonder what lucky suitor is going to walk away with such a charming catch.
Twitter is seeking a buyer at a time of slowing subscriber growth (it hovers above the 300 million mark) and “decreasing user engagement,” as Jason Helfstein, the head of internet research at Oppenheimer & Company, put it when he downgraded the stock in a report last week. There’s a host of possible reasons for this, including new competition, failure to adapt to fast-changing media habits and an “open mike” quality that some potential users may find intimidating. But you have to wonder whether the cap on Twitter’s growth is tied more to that most basic — and base — of human emotions: hatred. It courses through Twitter at an alarming rate, turbocharged by this year’s political campaigns and the rise of anti-immigration movements that dabble in racist, sexist and anti-Semitic tropes across the globe. And this is to say nothing of its use by terrorist recruiters. It’s a lamentable turn that Twitter says it is urgently working to address.
Soon after Twitter took its place in the tech-driven media revolution a decade ago, it proved to be a forceful amplifier of ideas and personalities, one that could be a political game changer. Its role in enabling the Arab Spring movements remains inspirational. It helped foster bottom-up movements like the Tea Party and Black Lives Matter here in the United States. And, of course, it helped make possible the outsider candidacy of Donald J. Trump, who continues to use it, er, aggressively. The back-and-forth over his candidacy, and the news media’s coverage of it, have added a new cache of material to the uglier side of Twitter’s oeuvre. More often than not, the venom comes from pseudonymous accounts — the white hoods of our time. Just take a gander at @Bridget62945958, who published a series of anti-Semitic posts against my colleague Binyamin Appelbaum. One message showed a series of lampshades. Its caption read: “This is your family when Trump wins. Get your Israeli passport ready.”
Twitter suspended the account after Mr. Appelbaum brought it to the attention of Twitter’s co-founder and chief executive, Jack Dorsey, by way of his own Twitter feed. A new account sprang right up to continue the vitriol, prompting Jeffrey Goldberg, a national correspondent for The Atlantic, to write a post asking Mr. Dorsey, “How does it feel to watch Twitter turning into an anti-Semitic cesspool?” Mr. Goldberg says he is torn about what Twitter should do, given that its cause — openness and free speech — is a reason he and so many other journalists are drawn to the service. “That’s the fundamental problem,” he told me. “At a certain point I’d rather take myself off the platform where the speech has become so offensive than advocate for the suppression of that speech.”
Twitter clearly wrestles with the same fundamental problem. It warns users they may not “threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender” and various other traits. Yet it often fumbles the enforcement. Charlie Warzel of BuzzFeed News unearthed a doozy last week. After a user who identified herself as Kathleen posted a tweet criticizing the Trump campaign, a Twitter member going by Adorable Deplorable directed a message back at her featuring a photograph of a beheaded man — apparently an ISIS victim — and the words, “Your [sic] heading for a deep hole.” Twitter forced the photo’s removal after BuzzFeed’s inquiries, but it initially told Kathleen that the post did not violate its policies. This is apparently common. In a BuzzFeed survey of Twitter users, about 90 percent of those who said they had reported abuse said their complaints went unheeded.
So-called trolls are a problem for all social media — even Facebook, which keeps a tidier, more contained system. (To wit, the Facebook message a local New Jersey politician wrote to the Daily Beast writer Olivia Nuzzi after she posted something about Mr. Trump that he did not like: “Hope. You. Get. Raped. By. A. Syrian. Refugee.”) But the openness of Twitter, and the sheer speed and volume of information that moves through it, present a particularly hard challenge that executives there say they are rushing to meet. “Everyone on Twitter should feel safe expressing diverse opinions and beliefs,” the company said in a statement it sent me on Saturday. “But behavior that harasses, intimidates or uses fear to silence another person’s voice should have no place on our platform.”
In a letter to shareholders, Mr. Dorsey said the company was putting in place technology enabling it to more readily detect abusive accounts, make it easier for users to report them and even prevent them in the first place. It’s all a bit tricky for a company founded with an absolutist ethos, once calling itself “the free speech wing of the free speech party.” Some of its moves to curtail abuse have drawn accusations that it is applying a double standard aimed at conservatives. After Twitter placed the Breitbart News editor Milo Yiannopoulos on permanent suspension for his role in the Twitter campaign against Ms. Jones, he accused it of declaring “war on free speech,” specifically against “libertarians, conservatives and anyone who loves mischief.”
Another banned Twitter provocateur, Charles C. Johnson — whom my predecessor David Carr once called a “troll on steroids” — says he is planning a lawsuit to fight his suspension. In an interview, Mr. Johnson said he respected Twitter’s right to ban patently offensive speech but argued that it needed to set a consistent, uniformly applied standard. Still, he said, “the problem of trolls” might be unsolvable. “It might just be a human nature problem,” he said. “Maybe we don’t like each other that much — and that’s what Twitter has revealed.” We didn’t need Twitter to reveal that. And in the previous two media revolutions — radio and television — the country managed to strike some sort of accommodation between the right to free speech and the greater civic good. That happened because there was an immediate national recognition that these media could have tremendous power to shape culture, politics and government for good and for ill.
As Herbert Hoover moved to establish basic standards for radio, he acknowledged that it had “great possibilities of future influence” but was also of “potential public concern.” He declared radio should be developed with public interest in mind, an idea that carried over to television. What followed were standards that forced broadcasters to devote at least some of their hours to civic affairs while avoiding obscene and “grossly offensive” content. At times, the efforts have wandered dangerously into censorship. But at least there was a big national discussion about what should beam into American living rooms. There was no similarly robust discussion at the start of this, the latest media revolution, and we can only hope that the political mistrust isn’t so great that we can’t have a constructive one now. Each new media development has served as a mirror for the society that spawned it. It sure seems time for a good, hard look. But what does this dumb, pants-soiling Jew know?
© The New York Times
Canada's Criminal Code provisions out of step with internet reality, Arthur Topham's supporters say
1/10/2016- A B.C. grandfather convicted of wilfully promoting hatred against Jewish people on the internet is launching a charter challenge of Canada's hate crime laws. Arthur Topham is scheduled to appear today in B.C. Supreme Court at the small Quesnel courthouse for a week-long challenge under the Charter of Rights and Freedoms, funded in part by self-proclaimed "white nationalists." Topham was convicted in November of one Criminal Code count of communicating statements that wilfully promoted hatred against Jewish people through his website, RadicalPress.com. The defence is expected to challenge that conviction based on the charter right to free expression and the contention that Canada's hate crime law didn't anticipate the nature of the internet.
Although a Quesnel jury convicted Topham, the judge delayed a decision about shutting down his website until sentencing. If the charter challenge fails, Topham may be sentenced as early as later this week. "This is not a matter of Arthur Topham passing out pamphlets," said Paul Fromm, an avowed "white nationalist" who helped fund Topham's defence. "You have to want to read what he has on his website. You have to seek it out and sift through and read it." Fromm, director of the Canadian Association for Free Expression, is a controversial anti-immigration and free speech activist who has been linked to neo-Nazi groups in the past. He sat through Topham's two-week trial last winter and said he would travel from Ontario to be in court today for the charter challenge.
Wants website shut down
Harry Abrams will also be watching closely from Victoria. The businessman and B'nai Brith volunteer launched the complaint that led to the charges against Topham, and he's been trying for years to have the website shut down. Abrams dismisses the argument that people must intentionally search for Topham's web posts to find them. "If you put in the word 'breast' or 'mom' or the word 'Jew' [on an internet search engine], you'll get all kinds of things that come up in a search," said Abrams. "Sometimes it's hard-core pornography, sometimes it's what you're looking for, sometimes it will turn up the stuff Topham [writes], that Jews have no right to exist." "To call for the sterilization of all Jews, that's incitement to genocide," said Abrams. "It wasn't that long ago that people tried to kill us all, so it's not something we take lightly. It's not a joke to us."
Topham's website has posted numerous anti-Jewish articles, including a "satire" urging the forced sterilization of Jews and posts accusing "world Jewry" of starting the Second World War. Topham's posts have linked Jews with the devil and world domination and used phrases like "synagogues of Satan." During Topham's trial, his lawyer conceded the 68-year-old grandfather's views "deviate from the mainstream" but defended Topham's website as free speech. "Under freedom of expression, some people will say some terrible things, some disgusting things," Fromm, the "white nationalist," told CBC. "But the law should stop treating Canadians as pathetic little children. Let Canadians make up their own minds. "They don't grab you by the neck, turn on your computer and force you to watch it." In final arguments at the earlier trial, defence lawyer Barclay Johnson called the hate crime prosecution of his client "an inquisition" by "lobby groups for a foreign government trying to shut down a Canadian website for criticism of Israel and Jews." Quesnel is an inland community of about 10,000 people 640 kilometres north of Vancouver.
© CBC News
By Ezana Sehay
1/10/2016- In his September 25, 2016 address to the United Nations General Assembly, Prime Minister Hailemariam Desalegn reminded the world of the inherent contradictions of social media: how it has become a double-edged sword that can cut the intended object but also cut the user if he is not careful. He is not alone in his observation. In its Aug. 18-25 edition, Time magazine’s cover story was “Why We’re Losing the Internet to the Culture of Hate”. Based on its extensive research, the magazine concluded that online media is full of individuals and groups who are turning the web into “a cesspool of aggression and violence”. Moreover, cyber-media experts admit that social media has become a hotbed of such abuse, and they worry about the consequences. They fear it might eventually lead to stringent control of the internet or, worse, total censorship. Their concern is not unwarranted; a few days ago, Swiss voters [one of the most liberal societies] overwhelmingly approved legislation that allows the country’s intelligence service to conduct surveillance on the online activities of cyber criminals.
As the reader might be aware, as of October 1, 2016 the US is relinquishing oversight over key parts of the internet, ending the contract between the US Department of Commerce and the Internet Corporation for Assigned Names and Numbers [ICANN], which regulates domain name registration for websites, manages the Domain Name System [DNS] Root Zone to ensure internet users are directed to the websites they intend to visit, and also handles Internet Protocol addresses. The question frequently being raised is: who is going to fill the void? Many UN member nations have proposed that control of the internet be assumed by the International Telecommunication Union [ITU]. The ITU, in a recent, somewhat clandestine meeting, has in turn proposed a law that “could give governments… the ability to sift through all of the internet users’ traffic… without adequate privacy safeguards”.
But the most likely successor to US control of the internet is China. In fact, China is so confident it will be that it is on the verge of introducing a new system of governing the internet. As you can imagine, for advocates of online freedom, Chinese control of cyberspace is the worst-case scenario. But ordinary people are not worried, not so much anyway. Considering the destructive consequences of social media for individuals and for society in general, most people view some kind of regulation as a necessary evil. The social media menace: The rise of digital crime in general, and hate crime in particular, is proving to be painful, especially for developing countries like Ethiopia. Although bigoted ideology has been rampant in the extremist Ethiopian Diaspora community, the people inside the country have remained, for the most part, immune to such propensity.
But as the country’s connectivity expands and social media use mushrooms, fanatics across the spectrum are gaining access to a potential audience of millions of Ethiopians – especially the impressionable youth. Evidently these bigots across the oceans are managing to spread their message of hate, with dire consequences for the nation’s stability, as proven by the recent wave of protests in some parts of the country. The protests were spearheaded by Oromo Ethiopians opposing the master plan for Addis Ababa and the surrounding Oromia towns, as well as general maladministration in the state. Both are legitimate issues, and both the state and federal governments have acknowledged them and consequently made concessions. Unfortunately, soon after, extremist Oromos began to exploit the temporary public discontent and marshaled their cyber warriors through social media, sending out whole spectrums of hate messages. Before long, the peaceful protest turned violent, causing tragic loss of life, destruction of public and private property, and the burning of churches and schools.
As casualties mounted, social media channels were, of course, united in emotional reaction. As the days passed, however, divides started to appear among those caught up in posting about the events, giving different versions of what had transpired as well as what it actually meant: where was it leading? What was the end game? For the extremist social media handlers, the violence was exactly what they had been looking for; it was harvest time. They started making up stories of atrocities and posting them. Doctored pictures of alleged victims were all over the net. Images of mass killings committed elsewhere were posted as if they had taken place in Oromia. In some cases, photos of security officers engaged in crowd control during a concert were presented as officers beating up protesters. Eventually the peace-loving Oromo people realized they were being taken for a ride by the extremists, and relative calm returned to the state, but I am afraid the damage was done. Sure, slowly but surely, peace has come, but the lives lost are gone forever.
Rally of hate: As the situation in Oromia began to return to normalcy, another protest flared up in another corner of the country; this time it was in Gondar in the Amhara region. The Gondar protest was shocking in many respects. To begin with, unlike the Oromo protesters, the Gondar protesters had no justifiable grievance worthy of protest. What’s more, it was vile, full of obscenity, uncivilized and un-Ethiopian. Bear in mind, the Gondar protest [riot] was hatched and directed through social media from abroad. Evidently the manner in which the protesters acted was reminiscent of the Diaspora hooligans. In addition to running amok, they burned the Ethiopian flag bearing a star, which symbolizes unity in diversity, and replaced it with the old version, which to most Ethiopians is a symbol of subjugation, oppression and atrocity. Among the signs displayed was one that read “one language, one religion”. Racist chants turned out to be ubiquitous as well.
It was profoundly disturbing to witness people espousing such hatred in the open air in Ethiopia. Such vile attitudes are rife in the Diaspora extremist body politic, but never did I imagine seeing them inside the country. The event in Gondar is an anomaly, but as many will attest, events like it occur far too frequently in many cities of Europe and North America. Simultaneously the flat-earthers’ social media flame warriors went on the attack, and the outrage-prone macro-universe kept on giving – the equal of ISIS in their ability to provoke pique on a moment-to-moment basis. There was an abundance of easy targets for their predictable temper tantrums.
In the days following that riot, as clashes with the security forces ensued, social media backers of the rampage began to post make-believe stories. Stories of imaginary wars, heroics and victories by phantom anti-government forces were pervasive on the net. In tandem we were bombarded with graphic images purporting to show bloodied protesters being abused by security forces, and massacres and cruelty allegedly committed by government troops. Of course, if you take the time to do research online, you are likely to discover that all their stories are tall tales and that the images are from other conflicts unrelated to Ethiopia. But the great casualty of such an abhorrent campaign on social media is civility. As we speak, the Gondar philistines are violating the very basic, centuries-old principles of civilized Ethiopian cohabitation.
The perpetrators: Most of the hate campaigns on social media directed at Ethiopians are the work of the Diaspora cyber-wasteland – deeply sectarian and ethno-centric outlets that attract a core of like-minded individuals and, more often than not, devolve into vitriolic screeds or sophomoric insults. The news you see and hear, and the opinions you read, on these online media outlets are straight conspiracy theories, conjectures, fabrications and outright lies. Besides the obvious, who should be faulted for poisoning the Ethiopian politico-social ecosystem? Should we blame the erudite enablers of the extremists, like the aging opportunist professors who see in chaos their last chance at power and can’t be bothered to worry about the consequences to the nation? Or maybe we need to go back further. Does it lie in the Ethiopian intellectual chaos of the time, the easy cynicism that claims all truth is relative, and the nihilists’ pose that choices are without risk, that nothing matters because it is all a joke anyway?
Is the Voice of America [VOA] Amharic division to blame for sowing the seeds of tribalism – for having played for years with the coded appeals to ethnicity that most consider open bigotry? Should we blame the excess of identity politics in Ethiopian society in and outside the country, the obsession with ethnicity to the exclusion of individual rights or common human values, the assertion that society is a zero-sum conflict? Or the youth of Ethiopia, for succumbing to the fanatics’ war of attrition on human reason – the insults, the craziness, the errors, the literally thousands of lies by which they [the fanatics] manifest their disdain for any of the usual standards of behavior? No question, the root cause of the deterioration of comity in the Ethiopian political and social ambiance is the coarsening of the culture, the dramatization of everything, the degradation of knowledge in the age of social media, where everyone with access to a computer thinks he knows all there is to know about anything.
A word to wise social media consumers: A social media platform like Facebook is a great way to keep your finger on the pulse of what is going on in the world. Yes, it is an ever-updating source of information, which makes it easy to rely on the site as a highlight reel of events. There is, of course, a huge benefit to sharing and reading news stories in our social media feeds, especially for compatriots back home, where the social media culture is relatively new. For most of you this might be the first time you are regularly clicking on [or at least scanning the headlines of] links to news articles or YouTube video clips. In other words, it has become easy to depend on your social media circles to keep informed. But think carefully about the information you are getting. The sheer volume of stories can provide the illusion of comprehensiveness, but that doesn’t mean your networks are a reliable source of facts.
In many cases, social media platforms can be an echo chamber, reflecting versions of our own views back to us from friends or like-minded people. Moreover, they also have a tendency to rapidly amplify information that is skewed or untrue, as was the case in the recent events in Ethiopia. The source of information is just as important as the information itself. So be prudent, and do not become a victim of the trolls who peddle gloom and doom. Be particularly wary of the perpetrators of online hate among you. Most of those engaged in such crimes wear digital masks – a handle, screen name or other alias. Anonymizing yourself online feels liberating. It is like the fourth drink [or however much booze it takes to get you tipsy] at a party, when you muster the courage to approach the cute girl in the house. You say and do things you never would if you were yourself – or if you had to put your name to your opinions or postings.
It is worth noting that social media also allows direct access to people much more knowledgeable than we are – experts and media outlets who regularly provide credible updates and analysis of relevant issues. Thus, it is the responsibility of each of us to navigate through it all to get a clear picture. The point is, when it comes to important issues, such as what goes on in our country, the stakes are too high to rely on social media alone for information, and this goes beyond well-meaning people misconstruing media coverage of events. When information is uncritically consumed, we can end up with people advocating civil war or pogroms, or calling for “one religion, one language”, despite the fact that the nation happens to be multicultural, multilingual and at least bi-religious.
And while it is tempting to dismiss this sort of thing as “people being wrong on the internet”, those ideas can spill dangerously into real life in the form of physical attacks and damage to property, as happened in Gondar and in parts of Oromia state. To sum up: we have a certain responsibility to each other to know what is actually going on in our country – or at least to make sure we are neither spreading nor consuming misinformation. It is easy to feel that something is true, but it is better to know that it is.
© Gambella Media
Ezekiel Mutua has gained notoriety for banning music and films he feels ‘promotes homosexuality’ in Kenya, where homosexuality is illegal
30/9/2016- Google has invited a Kenyan government official and anti-gay activist to its Web Rangers conference in Mountain View, California, even sponsoring his visa. Ezekiel Mutua, who is the head of the Kenya Film Classification Board (KFCB), gained notoriety this year for banning from the country’s servers local band Art Attack’s cover of the Macklemore gay marriage anthem Same Love, saying it “promotes homosexuality” in Kenya, where homosexuality is illegal. “Kenya must not allow people to become the Sodom and Gomorrah through psychological drive from such content,” said Mutua. In 2014, Mutua banned Stories of Our Lives, a film about Kenya’s gay community, for “obscenity, explicit scenes of sexual activities and [for promoting] homosexuality, which is contrary to [Kenya’s] national norms and values”.
Another KFCB representative said in January that Netflix represented a threat to the country’s national security because it would make the nation a “passive recipient of foreign content that could corrupt the moral values of our children”. Mutua is not invited to speak at the Web Rangers conference, which promotes internet safety and takes place on Friday. A person familiar with the matter suggested that discussions about bullying, especially as it affects teens struggling with nascent sexual identities, could prove instructive. Google told the KFCB it would not remove Same Love. In May the government and the tech company compromised: the video stayed up with a warning of “imagery and a message that may be unnecessarily offensive to some”. “Because of my stand on moral values, including the banning of content promoting LGBT and atheists culture in Kenya, someone wrote in a local daily that I will never get a visa to the US,” Mutua wrote in a post, now deleted, on his Facebook page.
“Well, I not only got it but it came on a diplomatic passport and I didn’t even have to go to the embassy for biometrics or pay the visa application fee. It was delivered to my office free of charge thanks to our efficient ministry of foreign affairs and highly courteous US embassy officials. America here we come ... TO GOD BE THE GLORY!” The invitation has caused consternation within Google, which promotes itself as a bastion of diversity and support for the LGBTQ community. Google sponsors gay pride events across the world and was one of the largest corporations to back same-sex marriage at the US supreme court. The person familiar with the matter said there was internal conflict over Mutua’s invitation, and Google was working to determine how to better avoid apparent conflict with its stated values.
According to a report on abuses of LGBT people from the International Lesbian, Gay, Bisexual, Trans and Intersex Association (ILGA) linked from Google’s Pride landing page, in Kenya sexual contact between consenting adults of the same sex is criminalized by four statutes, the most recent from 2003. Prison terms for breaking anti-gay laws can stretch to 14 years.
© The Guardian.
‘What the hell is a violation of your rules then?’
30/9/2016- Telling a Jewish woman she is “dirty” and “ready for the oven” is not a breach of Twitter’s rules prohibiting abuse, the social media company has said. Labour MP Cat Smith is among those condemning Twitter for not acting on abuse sent to Rhea Wolfson, who sits on the party’s governing body the NEC. Wolfson posted a screengrab of the abuse and Twitter’s response, where it said it “could not determine a clear violation of the Twitter Rules around abusive behaviour”. She asked: “What the hell is a violation of your rules then?” Smith tweeted: “Really Twitter!?!? I think you probably want to look at this again. No space for anti-Semitism.” Left-wing journalist Owen Jones described this and another example of abuse someone else shared as “fucking nauseating”.
Twitter’s rules on abuse say: “In order to ensure that people feel safe expressing diverse opinions and beliefs, we do not tolerate behavior that crosses the line into abuse, including behavior that harasses, intimidates, or uses fear to silence another user’s voice.” It cites “violent threats”, “harassment” and “hateful conduct” as examples of abuse it does not tolerate. Twitter did not comment on how the abuse sent to Wolfson did not breach the rules. “We don’t comment on individual accounts for privacy and security reasons,” a spokesman said. Wolfson said she did not believe the person who sent her abuse had anything to do with Labour, which is currently in the grip of a row over whether anti-Semitism is on the rise within its ranks.
© The Huffington Post - UK
Something ‘Cyber-bullies’ need to learn
By Aneesa Tajammal
29/9/2016- The internet has given a whole new dimension to bullying. Sitting in the comfort of their homes, hidden behind screens, people can conveniently bash others with hate and negativity. Posting offensive comments, attempting to demean others, passing terrible remarks on physical appearances and discrediting people for human errors is easy, but appalling to say the least. So here are my humble suggestions to all bullies of the internet who feel the urge to insult people on the web.
Have you ever pondered the idea that each person you pass by in real life or stalk over the internet is struggling in ways you do not know, fighting battles you are not aware of, or perhaps trying to get back up from falls you have never experienced? The last thing you should be doing is feeding your negativity and hate into their lives.
A wise man once said that if you don’t have anything nice to say, don’t say anything at all. So here’s a thought, if you don’t like someone’s work, do not degrade them. You must realise that each piece of work takes a lot of effort and guts to present to the public, and not all of us like the same things, so if you don’t like it and you have no constructive feedback, please ignore and move on.
If you don’t find someone’s physical appearance appealing, please don’t go about shaming them for how God created them. You need to know that every person’s definition of beauty is different, and your opinion on someone’s appearance holds no value. Don’t make the effort to pinpoint flaws in people, because we all are uniquely flawed.
If you don’t approve of the fame and love a celebrity – or any person, for that matter – is receiving, don’t attempt to balance it out with your hate. Not everyone appreciates people according to your standards. Learn more polite ways of voicing your disapproval.
The list of what you shouldn’t be doing is rather long, but I’ll stop here assuming you got the point.
Now, let me share with you some things you should be doing. You should work on yourself, become a better person, work harder and influence people with your positivity, because trust me, positivity is contagious. Water your side of the grass. Self-hatred is already a problem for many; instead of fuelling it, help people appreciate themselves and tell them that they are worth more than they think. Build people up, don’t tear them down. I understand that your urge to be mean may be strong, but try being nice for a change. Make someone’s day with your positivity and encouragement, and watch as positivity makes its way back into your life.
© The Daily Times
The complaint alleges that Facebook broke strict national laws against hate speech.
30/9/2016- German prosecutors are again considering whether to press charges against Mark Zuckerberg and other Facebook executives for failing to staunch a tide of racist and threatening posts on the social network during an influx of migrants into Europe. Munich prosecutors said they had received a complaint filed by a German technology law firm two weeks ago alleging that Facebook broke strict national laws against hate speech, sedition, and support for terrorist organizations. Attorney Chan-jo Jun, who filed a similar complaint in Hamburg a year ago, is demanding that Facebook executives be compelled to comply with anti-hate speech laws by removing racist or violent postings from their site. Jun is principal partner of the law firm Jun Lawyers of Wuerzburg in Bavaria.
Facebook said the complaint had no merit. “Mr Jun’s complaints have repeatedly been rejected and there is no merit to this (latest) one either,” a Facebook spokeswoman said. “There is no place for hate on Facebook. Rather than focusing on these claims we work with partners to fight hate speech and foster counter speech.” Facebook’s rules forbid bullying, harassment and threatening language, but critics say it does not do enough to enforce them. A spokeswoman for the public prosecutor in Munich said a decision would be taken in coming weeks on whether to act on the new complaint, which names Zuckerberg—Facebook’s founder and chief executive—and regional European and German managers. Hamburg prosecutors denied Jun’s earlier complaint on grounds that the regional court lacked jurisdiction because Facebook’s European operations are based in Ireland.
Jun wrote on his website he believed he would get a more favorable hearing in Bavaria because the justice ministry had signaled an openness to hearing racial hate crime cases. Jun has compiled a list of 438 postings over the past year that include what some might consider merely angry political rantings, but also show clear examples of racist hate speech and calls to violence laced with references to Nazi-era genocide. Following a public outcry and pressure by German politicians for failing to delete a rash of racist postings on Facebook, the Silicon Valley social networking giant earlier this year hired Arvato, a business services unit of Bertelsmann, to monitor and delete racist posts. A rash of online abuse and violent attacks against newcomers to Germany occurred amid a migrant influx last year, which led to a rise in the popularity of the anti-immigrant Alternative for Germany (AfD) party and has put pressure on Chancellor Angela Merkel and her Christian Democratic party.
28/9/2016- Facebook has been ordered to stop collecting personal details from its 35 million WhatsApp users in Germany. It has been told by a privacy regulator that it must stop collecting and storing private data from its German users. When the social network giant bought the instant messaging service two years ago, it promised that data would not be shared between the two platforms. But the Hamburg Commissioner for Data Protection and Freedom of Information, Johannes Caspar, said: “The fact that this is now happening is not only misleading their users and the public, but also constitutes an infringement of national data protection law.” Facebook’s German headquarters is based in Hamburg and falls under the regulator’s jurisdiction. The company has vowed to appeal, promising to work with the Hamburg DPA to address any questions or concerns it may have.
© Euro Weekly News
28/9/2016- A new U.S. intelligence report says the Russian government is conducting a wide-ranging and “opportunistic” campaign to expand its political influence in Europe by deploying Internet “trolls and other cyber actors” to challenge pro-Western journalists and spread pro-Kremlin messages in social media forums. Yahoo News obtained a declassified summary of the report, which also describes the role of two state-owned media outlets, RT and Sputnik, in what some experts say is an increasingly aggressive “information warfare” campaign. According to the report, the outlets promote Russia’s political aims with programming targeted to “activist” audiences including “far-right and far-left elements of European society.” It adds that the RT channel gives “disproportionate coverage and airtime to the European Parliament’s more extreme factions.”
The report, by the office of Director of National Intelligence James Clapper, was originally requested by congressional intelligence committees late last year. The panels also asked for a separate report on Russia’s use of political assassination. Classified versions of both documents were delivered by Clapper’s office to Capitol Hill in July. The decision to declassify brief excerpts from the first report coincides with recent disclosures about suspected Russian cyberattacks on the Democratic National Committee and other political groups. Many in the U.S. intelligence community believe that indicates Russia has expanded its cyberwar and disinformation efforts to the United States. “This is the 21st century version of ‘active measures,’” said Heather Conley, director of the Russia program at the Center for Strategic and International Studies (CSIS), a reference to the Cold War term for the Soviet Union’s efforts to manipulate Western opinion by spreading false information, such as the claim that U.S. scientists had manufactured the AIDS virus as part of a biological weapons project at Fort Detrick, Md.
Conley added that the use of “information warfare” techniques to pursue political goals has now been incorporated into official Russian military doctrine. The goal, she said, is not “the annihilation” of the country’s enemies, but to “weaken them from within” by “keeping everybody off balance” and “sowing doubt” about their political leaders and institutions. A report by Conley describing this effort is due to be released by CSIS next month. Russia’s use of trolls on social media would appear to fit that pattern. A report in the Guardian last year identified a St. Petersburg office building where “hundreds of paid bloggers work around the clock” to flood Internet sites and Western social media forums with posts praising Russian President Vladimir Putin and denouncing the “depravity and injustice” of the West.
Letter to Permanent Select Committee on Intelligence
© Yahoo News
27/9/2016- Minister of Justice Bassam Talhouni said Tuesday that those misusing networking websites to incite or spread hate speech will be prosecuted. He told Petra in an interview that those involved in such offenses would be referred to specialized courts to take "deterrent legal action against them", particularly after a media gag order was issued by the State Security Court's prosecutor over the murder of a Jordanian writer. The minister said some acts of hate incitement could amount to the crime of inciting terrorism, and will be dealt with according to the Anti-Terrorism Law, the Penal Code and the Cyber Crime Law.
© Jordan News Agency - Petra
26/9/2016- Walt Disney Co. is working with a financial adviser to evaluate a possible bid for Twitter Inc., according to people familiar with the matter. After receiving interest in discussing a deal, Twitter has started a process to evaluate a potential sale. Salesforce.com Inc. is also considering a bid and is working with Bank of America on the process, according to other people, who asked not to be named because the matter is private. Representatives for Twitter and Disney didn’t respond to requests for comment. Speculation that Twitter will be sold has been gathering steam in recent months, including last week’s news of Salesforce’s interest, given the social-media company’s slumping stock and difficulties in attracting new users and advertising revenue. Disney, the owner of ABC and ESPN, could obtain a new online outlet for entertainment, sports and news. Jack Dorsey, chief executive officer of Twitter, is on the board of Disney.
Twitter rose as much as 2.1 percent to $23.09 after being down earlier. The stock soared 21 percent on Sept. 23 following reports of the talks with Salesforce. Disney fell, dropping as much as 2 percent to $91.40. “It’s a video distribution play,” said James Cakmak, an analyst at Monness Crespi Hardt & Co. “What Disney has to think about is what is its place in a post cord-cutting world. They are investing in technology for distribution -- and this would give them the platform to reach audiences around the world.”
Disney Chairman and Chief Executive Officer Bob Iger has a reputation as a strategic thinker with an appetite for bold bets, such as the $7.4 billion acquisition of animation studio Pixar in 2006, just months after he became CEO. With Disney’s largest business -- cable TV -- losing viewers and facing more competition from online video services, Iger has invested in technology-related media businesses, including the Hulu video streaming service, digital media company Vice and Major League Baseball’s BAMTech, which provides the platform for online video services such as HBO Now. Twitter has also partnered with BAMTech for its live streaming. Iger has sought to increase Disney’s new media expertise, adding Dorsey and Facebook Inc. Chief Operating Officer Sheryl Sandberg to his board in recent years.
Still, the track record of old media businesses investing in technology companies isn’t great, Disney included. The world’s largest entertainment company lost hundreds of millions of dollars in its interactive unit in recent years. This year it decided to exit video-game production almost entirely in favor of a licensing strategy. While Disney’s balance sheet is among the strongest in the media industry, Twitter, with a market value of $16 billion, would be the company’s largest acquisition since the $19 billion merger with Capital Cities/ABC Inc. in 1996. A union with Twitter would give Disney much larger exposure to the ad dollars that are increasingly flowing to social media sites, according to Paul Sweeney, Bloomberg Intelligence analyst. “Twitter may give them an opportunity to communicate directly with their customers in an increasingly fragmented media landscape,” he said.
21st Century Fox Inc., Comcast Corp., Time Warner Inc. and AT&T Inc. don’t want to buy Twitter, according to people familiar with those companies’ strategies. Microsoft Corp. was approached to evaluate a bid, but isn’t interested, people with knowledge of the matter said. Meanwhile, Iger has long been a mentor of Dorsey, and Twitter’s executives are admirers of his strategy. Earlier this year Iger spoke at a meeting of Twitter’s senior management. “He talked about his transformation of Disney,” Dorsey said in a March interview. “They were at the bottom and he brought a strong optimism. He focused on creativity and excellence. And he made some bold moves. It’s resonated very well with folks. It’s what we needed to hear.”
26/9/2016- ‘Your clothes will be removed & fumigated. You will be held down and given a bath!,’ a Twitter troll tweeted at a Huffington Post journalist, complete with a picture of her in a gas chamber. What sounds like an extreme example is only one of many attacks on Jews and Jewish journalists by the “alt-right” in recent months. (To see an interactive time-line of this targeted harassment, click here.) Through statements and policy proposals tinged with racism — such as advocating a ban on Muslims entering the country, and saying many Mexican immigrants are drug dealers and rapists — Trump has become a favorite of white nationalist groups and provided an unprecedented platform for their views. “It’s pretty substantial, what’s out there,” said Todd Gutnick, a spokesman for the Anti-Defamation League (ADL), which in June created a task force to document attacks on journalists and analyze the size of the “alt-right” movement. The ADL is planning to release a detailed report on its findings in October. In the meantime, we have collected some numbers that showcase the scope of “alt-right” activity on social media, including Twitter trolls.
250,000 anti-Semitic posts are made public across social media platforms every year
The United Nations reported this figure during a recent conference dealing with digital anti-Semitism.
63 percent of all anti-Semitic tweets are calls for violence against Jews
Israel’s ambassador to the United Nations, Danny Danon, said this during the same UN conference.
Stephen Pollard, the editor of the Jewish Chronicle, receives 20-30 anti-Semitic Twitter messages per day
“And that’s after I have blocked over 300 different tweeters – a number that increases every day,” he wrote at the end of July.
The Global Forum for Combating Antisemitism (GFCA) tracked 2,000 anti-Semitic posts on Twitter, Facebook and YouTube over a period of 10 months. During that time, only 20 percent were removed by the social media sites.
The report for the Israeli-government-led forum was produced by the Online Hate Prevention Institute (OHPI) in Australia and published in February as “Measuring the Hate: The State of Antisemitism in Social Media”. “This demonstrates a significant gap between what the community understand to be antisemitic, […] and what social media platforms are currently willing to remove,” they wrote regarding the fact that 80 percent of all anti-Semitic posts they reported remained online. OHPI doesn’t have the funds to repeat its data analysis for the ongoing election cycle, but they “have been seeing the increase in online antisemitism, and expect it to increase in the near future.”
“The problem is particularly acute in the pro-Trump camp where the campaign has attacked the legitimacy of Clinton as a candidate,” Andre Oboler, the author of the report, told the Forward. “The rise in hate undermines freedom of the press and the fundamentals of democracy as some are pressured into silence. Journalists, and particularly Jewish journalists, are a significant target of the new political intolerance.”
Israeli group has removed over 40,000 anti-Semitic YouTube videos in two years
Started in 2013, They Can’t is a group of grassroots activists who flag hateful content on sites like YouTube and Facebook. “Most of the content is coming from far right activists in the US,” Eliyahou Roth, the group’s founder, told the Forward. “I believe that there is a few thousands account sharing these content, and probable a few hundred thousand videos like that on Youtube.”
Jewish journalist Bethany Mandel had to block 500+ Twitter accounts
Mandel is a conservative journalist who wrote an op-ed for the Forward about her decision to buy a gun after becoming the victim of Twitter trolls. She told the Forward that she had to block over 500 accounts, and that she suspects many of these accounts are based in Russia. “The volume of tweets I got made it seem coordinated in some fashion. It would come in intense waves,” she said.
© The Forward
By Anna North
26/9/2016- Brittan Heller has a hard job. The Anti-Defamation League’s first director of technology and society, she’ll be working with tech companies to combat online harassment. The magnitude of her task became clear as soon as the A.D.L. announced her hire earlier this month, when she was deluged with anti-Semitic and sexist attacks. In a recent interview, Ms. Heller talked about what companies can do to stop online abuse and how her personal experiences have informed her, and offered advice for others dealing with harassment online.
How did the Anti-Defamation League decide they needed to hire someone to work with tech companies against harassment?
People wanted a focus on tech and combating online hate for years, but recently there’s been an increase in online hate. A good personal example of this is that they put out a press release announcing my position and they made an announcement on Twitter as well. Within minutes of A.D.L. announcing this position, I opened up my Twitter feed and I found hateful symbols, I found echoes and swastikas and green frogs and people discussing my death. Within hours it became enhanced with statements of Holocaust denial, and within days it’s become ad hominem attacks based on Jewish stereotypes and misogyny. At this point it’s not surprising anymore that this occurs, but the speed of it, and the ferocity of it — that I think is shocking.
There were two events that people at A.D.L. really took notice of. There was Julia Ioffe’s piece about Melania Trump, that resulted in an online and offline campaign of hatred directed against her, and there was a coordinated campaign by white supremacist groups which resulted in death threats and really severe online abuse. Additionally, Jonathan Weisman of The New York Times tweeted a piece about the election and he got similar threats and online abuse. A.D.L. was very concerned that this kind of toxic environment would prompt self-censorship by journalists and really impact public discourse long after the election.
How much is the Trump campaign to blame for the recent rise in online harassment?
A.D.L. is a nonprofit organization, therefore we do not support any particular political party and we do not endorse or reject candidates for office. That said, A.D.L.’s work encompasses fighting bigotry of all kinds, and we encourage all candidates to call out hate. We’ve been on record engaging with members of the Trump campaign, trying to encourage them to emphasize that hate has no place in the public sphere.
What are social networks already doing to fight harassment? What could they be doing better?
A.D.L. is actually an inaugural member of the Twitter Trust and Safety Council which looks at issues of cyber hate. A.D.L. issued best practices that were supposed to counter online hate in 2014 and they were endorsed by Facebook and Google and Microsoft and Twitter. I’ve seen an increased emphasis on companies developing technology that helps to identify greater percentages of problematic content proactively, but I think the problem there is the mind-boggling volume. It’s not really realistic to assume that a filter or artificial intelligence would be able to review and eliminate hate in real time.
I think there’s a few things the companies can do when they’re faced with this onslaught. First they need to communicate outrage. They have a corporate voice, and they can use this to say that cyber hate is really contrary to their vision of connecting all people. They can ensure that their terms of service and their community guidelines are clear, and more than this they can really improve enforcement and do it transparently. They can offer simplified and user-friendly mechanisms for flagging this content. Going beyond companies, people in Silicon Valley and beyond can promote counter-speech initiatives, grassroots responses or having public persons who are willing to speak out and be a voice for tolerance.
You experienced online harassment in law school. How did your personal experience shape your thinking on this issue?
It was instrumental in making me realize that this issue should be a priority. The reason I went to law school is that I wanted to focus on accountability for crimes that targeted people based on their race or their ethnicity or their gender. When I became a victim of cyber harassment, I really felt what it was like to be targeted online for my gender and my race and my ethnicity, and more than that I felt how terrifying it can feel to be threatened and how powerless this type of abuse can make you feel, especially when it’s coming from anonymous sources.
What advice would you give to people who are going through online harassment?
First, I’d say you’re not alone. Part of the power that the harassers have is they like to make people feel isolated, and sometimes part of the ongoing harm of these kinds of crimes is that you feel like there’s no meaningful way for you to fight back, no way for you to adequately speak out against what’s happening to you. I would not let the harassment take your voice away. You can talk to family, teachers and friends about what you’re experiencing and what you’ve seen. You can be a support for other people experiencing the same thing, and you can call out people who are trying to incite hate online. Also, educate yourself. Look at the terms of service or community guidelines for the platforms and social media that you’re using, and find out what kind of site that company wants to run. Most say that they don’t wish to host hateful content.
© The New York Times
The group is dispatching an official to Northern California, as anti-Semitic abuse has become a significant problem online.
19/9/2016- The Anti-Defamation League is placing a representative in Silicon Valley to work on cyber hate and harassment issues, BuzzFeed News has learned. The move comes after significant trolling, particularly on Twitter, of Jewish journalists and other public figures, amounting to a wave of anti-Semitic expression not seen in the American conversation for decades — and as tech companies struggle to reckon with their role in regulating abusive speech. “As a leading civil rights advocacy organization, ADL was early to recognize the burgeoning issue of cyberhate and how extremists were exploiting online platforms to spread antisemitism and target Jews as well as other minorities,” said Brittan Heller, who will become the group’s first Director of Technology and Society, in a statement. “From its first report on cyberhate more than 30 years ago to this year’s work tracking the harassment of journalists on social media, ADL has demonstrated its commitment to ensuring our online communities are a safe and just place for all.”
Heller, a former cyber crime and human rights investigator and prosecutor, has also been a high-profile victim of online harassment. While she was at Yale Law School, she was subjected to sexual harassment on a law school messaging board. She and another student sued the board’s administrator as well as anonymous commenters for invasion of privacy and defamation. Heller and the other plaintiff settled with the defendants in 2009. “We’ve really doubled down on the work that we’re doing to deal with this new emerging and metastasizing trend of online harassment and cyber hate,” said ADL director Jonathan Greenblatt in an interview with BuzzFeed News, calling what has been happening on social media “breathtaking and downright scary.”
The 103-year-old ADL has traditionally focused on combating anti-Semitism, an issue that has been in the spotlight this year as Donald Trump’s candidacy has had the effect of empowering online trolls. The organization conducted an online harassment survey of journalists over the summer. “We’ve had some wins with companies,” Greenblatt said, citing its participation in Twitter’s Trust and Safety Council and its work with Google to take down a Chrome extension that enabled users to place parentheses around Jewish names, a common device employed by the alt-right. The ADL declared the parentheses used in this way to be a hate symbol. The group has been vocal during this election cycle about highlighting the issue of online harassment, forming a task force to investigate bigoted harassment of journalists in June and participating in SXSW’s Online Harassment Summit.
The internet is a bastion of free speech—but that’s not always a good thing.
By Susan J. Douglas
23/9/2016- Comedian Leslie Jones’ recent experiences in our digital environment—a barrage of viciously racist tweets and hackers posting her personal information and nude photos allegedly of her—are just the latest in the downward spiral of online hate speech, harassment and menace. And many women and people of color have really had enough. So here’s the thorny question: With the ongoing scourge of trolling and the damage it causes—take Jones’ simple and poignant tweet, “I’m in a personal hell. I didn’t do anything to deserve this … so hurt right now”—are Americans at a crossroads with the First Amendment? Because although 35 percent of respondents to a 2015 poll believed that hate speech was not protected by the First Amendment, it is.
Each new medium, starting with the printing press, has raised such questions. The First Amendment’s freedom of the press protections were a reaction to colonial policies, which required printers to be government licensed and subjected them to pre-publication censorship and libel laws that forbade colonists from criticizing British rule. Yet, we have rarely had totally unbridled freedom of expression. The Federal Communications Act of 1934 forbade “obscene, indecent, or profane” language on the radio — and later, TV — because broadcasts came into people’s homes without their ability to filter them. Harassing phone calls — calling repeatedly, using obscenity, issuing threats — are illegal, although the provisions in each state vary. Despite these exceptions, most speech is protected unless it is designed to cause “imminent lawless action.”
With the Internet, freedom of expression has been more firmly protected from the start. Congress’s effort to restrict “indecent” content (based on a media panic about the web being full of pornography) led to the Communications Decency Act of 1996, which was struck down in 1997. The Supreme Court reasoned that the internet was not as “invasive” as radio and TV, and that its multitude of sites constituted “vast democratic fora.” There are still federal and state laws prohibiting cyberstalking and cyber-harassment, typically focusing on repeated behavior by one person against another, and on threats to “kill, injure, harass and intimidate.” But what constitutes harassment can be vague, and some states only protect those 18 and under. The kind of group swarming Jones experienced is difficult: Which tweets legally count as harassment, and which are protected?
As of now, it’s Internet companies that determine how much hate speech, if any, circulates on their platforms. George Washington University Law professor Jeffrey Rosen, citing American legal tradition, argues that, with the exception of speech promoting imminent violence, no speech should be banned on the internet. Others, especially feminists, have argued that policing online hate speech is important because the majority of victims are women, making such activity discriminatory. Facebook censors posts and pages it deems inappropriate, and does not permit individuals “to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.” But Facebook’s choices can seem arbitrary, and it has come under harsh criticism for censoring nude photos, paintings and images of breastfeeding. Reddit has been the most libertarian, tolerating all kinds of hate and creepy speech, with Twitter, until recently, not far behind.
Susan J. Douglas is a professor of communications at the University of Michigan and an In These Times columnist. Her latest book is Enlightened Sexism: The Seductive Message That Feminism's Work is Done (2010).
© In These Times
L.A. is about to find out
23/9/2016- Can police prevent hate crimes by monitoring racist banter on social media? Researchers will test the concept over the next three years in Los Angeles, marking a new frontier in law enforcement efforts to predict and prevent crimes. During the three-year experiment, British researchers working with the Santa Monica-based Rand Corp. will monitor millions of tweets related to the L.A. area in an effort to identify, in real time, patterns and markers indicating that prejudice-motivated violence is about to occur. The researchers will then compare the data against records of reported violent acts. The U.S. Department of Justice is investing $600,000 in the research by Cardiff University’s Social Data Science Lab, which has been at the forefront of predictive social media models.
Cardiff University professor Matthew Williams said the research is designed to eventually enable authorities to predict when and where hate crime is likely to occur and deploy law enforcement resources to prevent it. “The insights provided by our work will help U.S. localities to design policies to address specific hate crime issues unique to their jurisdiction and allow service providers to tailor their services to the needs of victims, especially if those victims are members of an emerging category of hate crime targets.” His lab’s previous research in the United Kingdom found that Twitter data can be used to identify areas where hate speech is occurring but where no hate crimes have been committed. This can be useful, researchers said, in neighborhoods with many new immigrants, who are unlikely to report the crime because of fear of deportation.
In 2012, an estimated 293,800 nonfatal violent and property hate crimes occurred in the United States, according to the Bureau of Justice Statistics. About 60% of those were not reported, the Justice Department found. Of course, there is a big difference between someone spouting off on Twitter or Snapchat and an actual hate crime. “It is a great idea in the abstract. But it is not the panacea you might think,” said Brian Levin, executive director of Cal State San Bernardino’s Center for the Study of Hate and Extremism. “The problem is the correlation and reliability. … There are many different forms of social media.”
Levin, who has tracked both Middle Eastern terror groups and local neo-Nazi organizations, also noted that some hate groups don’t advertise their work on social media. “Local tensions may arise to fly and be absent from social media,” he said. “Some segments of the community shun social media … so examining social media as a predictor can be a bit like having one screwdriver and sometimes it doesn’t work.” Predictive policing already is in use at the Los Angeles Police Department and other agencies. The LAPD uses a predictive policing algorithm to deploy officers to locations where prior crime patterns strongly suggest similar crimes may occur. As crime during the last two decades has dropped dramatically across the nation and Los Angeles, police commanders are increasingly looking for any edge they can get in cutting crime.
L.A. County is particularly useful because a huge volume of social media produces massive data sets that increase the accuracy of predictive models over traditional crime analysis and trend-chasing, said Pete Burnap, from Cardiff University’s School of Computer Science and Informatics. “Predictive policing is a proactive law enforcement model that has become more common partially due to the advent of advanced analytics such as data mining and machine-learning methods,” he said. Traditional predictive police modeling has paired historical crime records with geographical locations and then made a probable calculation to predict future crimes. But Twitter and social media-based models work in real time using what people are talking about now. The algorithms look for particular language that is likely to indicate the imminent occurrence of a crime.
British researchers began looking at cyber-hate in the aftermath of the killing of British Army soldier Lee Rigby at the hands of Islamic extremists on a London street in 2013. Analysts collected Twitter data and tested a text classifier that distinguished between hateful and antagonistic responses focusing on race, ethnicity and religion. The researchers are now building a completely new hate speech algorithm designed specifically for Los Angeles, which they say is necessary because of the linguistic and cultural differences between L.A. and London. "We will also gain access to 12 months of LAPD recorded hate crime data," Williams said. The idea, he added, is to see whether "an increase in hate speech in a given area is also statistically linked to an increase in recorded hate crimes on the streets in the same area."
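To illustrate the kind of text classifier described above, here is a minimal, purely illustrative sketch of a bag-of-words Naive Bayes model. This is not the Cardiff team's actual code; the training examples, labels and phrasing are invented for demonstration, and a real system would rely on large annotated corpora and far more nuanced features to catch language that, as Burnap notes, "doesn't always have to use derogatory words."

```python
import math
from collections import Counter, defaultdict

# Invented toy training data: short posts labeled by hand.
# "hateful" vs merely "antagonistic" is the distinction the
# researchers' classifier is described as drawing.
TRAIN = [
    ("they had it coming get them out", "hateful"),
    ("burn it all down they deserve it", "hateful"),
    ("this policy is a disgrace and the mayor should resign", "antagonistic"),
    ("i strongly disagree with this terrible decision", "antagonistic"),
]

def train(examples):
    """Count word frequencies per label and label frequencies overall."""
    word_counts = defaultdict(Counter)   # label -> Counter of words
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-probability score."""
    vocab = {w for c in word_counts.values() for w in c}
    total = sum(label_counts.values())
    best, best_score = None, -math.inf
    for label in label_counts:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

wc, lc = train(TRAIN)
print(classify("get them out now they had it coming", wc, lc))  # → hateful
```

The same skeleton extends naturally: add geolocation tags to each post and the per-district volume of "hateful" classifications becomes the signal the project proposes to compare against recorded crime.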
In addition to potentially predicting crimes, the researchers hope their work might shed light on hate crimes that currently go unreported. "We know that official reports of hate crime from police probably underestimate how common hate crime really is — but we don’t really know by how much, or which types of hate crimes are most seriously under-reported," said Meagan Cahill, a senior researcher at Rand Corp. "Using Twitter data from Los Angeles County as a test case, this research will help create better knowledge about hate crime. And, we hope it will ultimately contribute to more hate crime prevention by police and other agencies alike.”
© The Los Angeles Times
Cardiff University project receives $800,000 grant from DoJ for algorithm that identifies cyber-hate
22/9/2016- Police in the US may soon be able to scan social media to predict outbreaks in hate crime, using a computer program being developed at Cardiff University. An $800,000 research project, funded by the US Department of Justice, was announced on the same day that the city of Charlotte, North Carolina declared a state of emergency when violent protests erupted after police shot dead a black man. An algorithm will automatically identify cyber-hate on Twitter in specific regions of the US and look for a relationship between online hate speech and offline hate crime.
Police in cities such as Los Angeles and Charlotte can then use the system to predict where hate crimes may be likely to take place in the wake of triggers, such as the Charlotte shooting, and intervene in a peaceful manner. This is the first time US authorities have turned to social media to try to identify and police real-world hate crimes. According to the US Bureau of Justice Statistics, an estimated 293,800 non-fatal violent and property hate crime victimisations occurred in the US in 2012. That number has been rising, with anti-Muslim crimes alone spiking by 14 per cent in 2015, according to the Federal Bureau of Investigation.
Over the next three years, the new algorithm will analyse the language used in Tweets referring to events such as the US presidential election, map them to city districts and cross-reference this with reported hate crimes on the streets. “Say a black person was killed by police in the US — and that happens a lot more than it should — we will see biased Tweets coming out, using phrases like ‘They had it coming’ or ‘Get them out’,” said computer scientist Peter Burnap, who is co-leading the project at Cardiff University’s Social Data Science Lab. “It doesn’t always have to use derogatory words associated with racism: it could be much more nuanced, which is the major challenge in the project. We are using natural language processing to identify cyber hate in all its forms.”
Previous research from the Social Data Science Lab has already found that Twitter data can be used to identify geographic hotspots of crime in London where hate speech has occurred, but where hate crime has not been reported. Specifically, they studied the spread and reach of hate speech on Twitter following the murder of British soldier Lee Rigby by two British Muslims in 2013. Social scientist Matt Williams, the project’s second lead, said: “The insights provided by our work will help US localities to design policies to address specific hate crime issues unique to their jurisdiction and to tailor their services to the needs of victims, especially if those victims are [in] an emerging category of hate crime targets.” The city of Los Angeles will be the first test case for the project, as the Los Angeles Police Department previously used similar mathematical models to predict other areas of crime including theft, which have been shown to be successful in lowering crime rates.
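The cross-referencing step described above — mapping hate speech to city districts and checking it against reported crime — can be sketched as a toy calculation. All district names and counts below are invented; the actual project works with millions of geolocated tweets and a year of LAPD records.

```python
# Invented per-district counts: volume of tweets flagged as hate
# speech vs hate crimes recorded by police in the same district.
hate_tweets = {"downtown": 120, "harbor": 15, "valley": 60, "westside": 5}
hate_crimes = {"downtown": 9,   "harbor": 1,  "valley": 4,  "westside": 0}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

districts = sorted(hate_tweets)
r = pearson([hate_tweets[d] for d in districts],
            [hate_crimes[d] for d in districts])
print(f"tweet/crime correlation: {r:.2f}")

# Districts with heavy online hate but few recorded crimes are the
# interesting case: they may signal under-reporting rather than an
# absence of hate crime (thresholds here are arbitrary).
flagged = [d for d in districts if hate_tweets[d] > 50 and hate_crimes[d] < 5]
print("possible under-reporting:", flagged)
```

A statistically significant positive correlation would support deploying resources where online hate spikes; districts that break the pattern point to the under-reporting problem the researchers highlight for immigrant communities.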
© The Financial Times
Online Civil Courage Initiative will offer advertising credits, marketing advice to a broader array of groups
21/9/2016- Facebook Inc. plans to broaden a program that gives free advertising to online activists who fight back against online hate speech, the latest expansion of tech-industry efforts to undermine internet propaganda from Islamist terrorists and far-right radicals. The social-networking company said Wednesday that its Berlin-based Online Civil Courage Initiative, founded in January, will expand from a pilot phase focused on Germany, France and the U.K. to offer advertising credits, money and marketing advice to a broader array of groups. Since its creation in January, the program has helped organizations that use Facebook to counteract hateful or extremist messages reach more than two million people with a total of €10,000 ($11,152) in advertising credits, the company said. Facebook has pledged €1 million in credits over two years.
Tech companies, think tanks, activists and governments are pouring resources into new ways to fight back against violent propaganda washing over the internet, including hate speech from groups like Islamist organizations and far-right radicals. The logic is that since such messages can never be blocked entirely, someone must argue against them, an approach called counter-narratives or counter-speech. “Censorship is not effective,” said Erin Saltman, program manager of Facebook’s Online Civil Courage Initiative, who also works with London-based think tank the Institute for Strategic Dialogue. “Conversations would start on mainstream platforms and migrate to less regulated, encrypted platforms.” Content removal is growing. Facebook said it has removed more than 38,000 pieces of content in the European Union in the second half of 2015 because of government requests, with the vast majority from France following the Nov. 13 attacks in Paris.
Twitter Inc. said in August that it had removed 235,000 terrorist-related accounts in the last six months, nearly double the prior period. But tech companies argue that it will always be possible to find similar material elsewhere online. Some Silicon Valley executives say they are also uncomfortable automatically removing posts since that could lead to a chilling effect on free speech. “Silencing a conversation doesn’t help win the argument,” one tech executive said. While government efforts to counteract propaganda have largely sputtered, private groups are taking up the initiative. Facebook later Wednesday is participating in an event in New York to highlight another initiative it has supported, in which college students come up with campaigns to counteract violent extremism. Other groups are running experiments with companies including Alphabet Inc., Twitter and Facebook on ways to use the machinery of online advertising to counteract extremist messages.
The Online Civil Courage Initiative was in part a response to criticism by German politicians that extremists were using Facebook’s website to spread hatred against immigrants. The initiative aims to support nongovernmental organizations that counter hateful comments with democratic views—mounting things such as “like” attacks on pages rather than removing them. In the next year, Facebook plans to create a stand-alone website for the Civil Courage Initiative so that organizations can find information and marketing advice, Ms. Saltman said. The initiative will also publish “trend reports where we can keep our finger on the pulse a little more and keep activists updated with trends that are taking place so they can react more in time,” she said. Simone Rafael of Amadeu Antonio Foundation, a German anti-bigotry group, said the program’s goal is to strengthen communities that “stand up to a vocal minority of people who try to create the impression they were the majority.”
© The Wall Street Journal
23/9/2016- The public prosecution department is unable to cope with all the complaints it has received about discrimination, and some 75% of reports never even reach its offices, according to research by RTL News. The department has pledged to look carefully at ‘all complaints about discrimination’ but most are set aside without being checked by department officials, the broadcaster’s researchers say. RTL found that between 2005 and 2013, police received an average of 416 complaints about discrimination a year. But only an average of 123 were actually passed on to the prosecution department. A spokeswoman for the department admitted the difficulties, saying the arrival of social media had made it easy to insult and threaten people.
‘We have to make choices,’ spokeswoman Gabrielle Hoppenbrouwers told RTL. The department’s guidelines are being amended to reflect the change, Hoppenbrouwers said. The Dutch human rights commission says it considers it worrying that so many complaints go unanswered at a time when young people are becoming less likely to register such issues. ‘Youngsters tend to think… you can’t do anything about racism, particularly on social media like Facebook and Twitter,’ said Adriana van Dooijeweert. ‘That worries me.’ Social affairs minister Lodewijk Asscher told RTL that all cases of discrimination hurt people deeply but that not every case can be prosecuted. ‘We are doing a lot and we are going to look if there is any more we can do,’ Asscher said.
© The Dutch News
19/9/2016- Intelligence chief Rob Bertholee has called on the cabinet to restrict the encryption of messages on chat services such as WhatsApp and Telegram. Bertholee told the Volkskrant that the Dutch security service AIVD needed to be able to ‘see into the communications of those who constitute a threat.’ In response to concerns of privacy campaigners about giving the security services more access to personal data, Bertholee said: ‘I agree that protection of privacy is extremely important, but would people who hold privacy as their highest goal pursue it so enthusiastically if they’d been the victim of an attack?’ But Ronald Prins, of IT firm Fox-IT, warned that giving the security services access to data held by technology firms ran the risk of that data falling into the wrong hands.
‘The internet has gone dark for the AIVD and the service no longer has any overview of what information people are exchanging online,’ he told NOS. ‘American tech firms such as Apple and Facebook would have to build a back door into their security apparatus so that western security firms have the keys to get into those messages. That sounds like a good idea, but there is one very big risk: the keys could get into the wrong hands, such as the Russians and Chinese, who are constantly tapping our information.’ In an interview at the weekend, Bertholee warned that the terrorist threat in the Netherlands has ‘never been so high’, though he declined to give details of attacks that have been prevented or the number of people on the AIVD’s radar. He added that staff from 30 European security services were holding daily meetings in the Netherlands to exchange information about terrorist suspects.
© The Dutch News
Facebook has been accused of "enabling vicious Jewish hatred" after telling users an image depicting human remains on a shovel below the tag line "How to pick up Jewish chicks" did not breach its standards.
16/9/2016- Just days after backflipping on its decision to censor an iconic Vietnam War photo of a naked girl escaping a napalm bombing, the social media giant is again under fire over its handling of posts reported as offensive. In reply to complaints about the shovel image - which was widely shared, "liked" 21,000 times, and received more than 37,000 comments - Facebook said: "We reviewed the post you reported for displaying hate speech and found it doesn't violate our community standards." Under its community standards policy, Facebook says it "removes hate speech" that attacks people based on their race, ethnicity, national origin and religious affiliation. "We allow humour, satire or social commentary related to these topics, and we believe that when people use their authentic identity, they are more responsible when they share this kind of commentary," the policy says.
The image was posted to a page attributed to a Queensland man late last month and shared 2280 times. It has now been removed but tens of thousands of comments, many of which are anti-Semitic, were still visible in the thread as late as Thursday. A Facebook spokesman said the image was removed for breaching community standards and that the company was still investigating. But 24 hours after being asked, the spokesman could still not say when the photo was taken down. Facebook could also not explain why only the photo was initially removed and not the entire thread - which is standard when a post is pulled. The thread itself showed people were still commenting on the image up to seven days after it was first reported.
One person told Fairfax Media that when he checked days after reporting the image to Facebook that it was still visible. "[It] certainly wasn't [removed] for a few days and only after they said there was no issue to begin with," he said. National Jewish human rights body, the B'nai B'rith Anti-Defamation Commission, slammed Facebook's handling of the case. Chairman Dvir Abramovich said the "disturbing" post mocked Holocaust victims and "the vile comments that followed clearly violates its community standards". "Facebook should not be a hot-house and a cesspool of racism, xenophobia and bigotry, and should not allow its platform to be used by bigots to disseminate and propagate their toxic and hateful invective," he said. "By allowing such posts to stay for far too long, Facebook is enabling the flourishing of a vicious and bone-chilling Jewish hatred that is a cause for concern."
Last week Facebook chief executive Mark Zuckerberg was accused by a Norwegian newspaper of "abusing your power" over the censoring and removal of multiple copies of the 1972 image of Kim Phúc fleeing a napalm attack during the Vietnam War. But after a global outcry the social media goliath reversed its decision, saying it recognised the importance of the Pulitzer prize-winning photograph in documenting history. "Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image on Facebook where we are aware it has been removed," it said in a statement.
The controversy over Facebook's handling of posts assessed against its community standards policy follows revelations the company's "trending" news topics were being curated by a team of editors. When Facebook overhauled the section last month, firing its editorial team and building an algorithm to manage trending topics, it promoted a fake story about an American newsreader and links to an article about footage of a man using a McDonald's chicken sandwich to masturbate with.
© The Sydney Morning Herald
11/9/2016- A senior delegation from Facebook is in Israel to “improve cooperation against incitement,” Prime Minister Benjamin Netanyahu said. “The fight against terrorism is also being waged on the social networks, and a senior delegation from Facebook is currently in Israel. The goal here is to improve cooperation against incitement, the incitement to terror and murder, on the social network,” Netanyahu said Sunday during the weekly Cabinet meeting. “The Internet has brought considerable blessing to humanity, but folded within it – to our regret – is also a curse, because terrorists and inciters are using the internet to attack mankind. We are determined to fight these phenomena, and therefore I welcome the cooperation, or at least the desire for cooperation, that Facebook is showing, and we hope that these will lead to better results.” The delegation is scheduled to meet with government officials during its visit. Facebook has been accused by Israeli officials of turning a blind eye to violent messages encouraging attacks by individual Palestinians against Israelis.
© JTA News.
9/9/2016- Facebook has deleted a post by the Norwegian prime minister, Erna Solberg, in a row over the social media giant's earlier decision to remove an iconic photograph from the Vietnam war featuring a naked girl fleeing bombs. Solberg, while commending Facebook's effort to stop violent or abusive content, voiced support in a post for Norway's largest newspaper, Aftenposten, after its editor-in-chief criticised Facebook for removing the Pulitzer-prize winning photograph from one of its posts. The newspaper had published a series of photographs that "changed the history of warfare". The 1972 picture by Nick Ut features nine-year-old Kim Phuc running away, naked, from napalm bombs. Facebook asked the newspaper to remove or "pixelise" it because of her nudity. The newspaper refused and Facebook took down the post.
The newspaper then put the photograph on its front page on Friday (9 September), next to a Facebook logo. Aftenposten's editor-in-chief Espen Egil Hansen wrote an open letter to Facebook boss Mark Zuckerberg, accusing the firm of censorship. While acknowledging Facebook's role in amplifying the newspaper's voice, Hansen wrote: "I think you are abusing your power, and I find it hard to believe that you have thought it through thoroughly." "I have to realise that you are restricting my room for exercising my editorial responsibility. This is what you and your subordinates are doing in this case," he added in the open letter, calling Zuckerberg the world's most powerful editor.
PM Solberg was one of the Norwegian politicians who shared the iconic image. “Facebook gets it wrong when they censor such images,” she wrote in her post, which also included the picture. “I say no to this type of censorship.” "I want my children and other children to grow up in a society where history is taught as it was. Where they can learn from historical events and mistakes," Solberg wrote. A few hours later the post on her profile was taken down. Later, the prime minister urged Facebook to review its editing policy. “While we recognise that this photo is iconic, it is difficult to create a distinction between allowing a photograph of a nude child in one instance and not others,” Facebook said in a statement.
© The EUobserver
8/9/2016- The recent arrest of a suspected neo-Nazi, Sean Creighton, 44, on a terrorism offence contained an interesting footnote. He allegedly possessed a badge with “burn your local mosque” written on it. This idea, to burn a local mosque, has appealed to neo-Nazis and Islamophobes in Europe and North America. The image, however, comes from the artwork of an obscure black metal band named Mogh, who released a live album in 2012. Mogh describes itself as a “Persian/Israeli/German extreme black metal project”. Its influences range from nihilism to the occult and the Orient. The band uses anti-Islamic imagery and symbolism in its album artwork and merchandise, and has marketed itself as “anti-Islamic black metal” on t-shirts bearing the “burn your local mosque” design. In spite of the above, the band were ‘shocked’ to learn that their artwork had been used to incite racial hatred.
In a statement, Mogh said: “It shocks us because of many reasons. Mogh is an international conceptual art and band which includes members from Germany, Syria, Iran, Bulgaria and Peru. Mogh philosophy believes in every person as a star regardless of its race and believes religion in any form steals that identical essence and makes you an systematic slave.” Mogh state they have lost family members in the aftermath of the Iranian Revolution. Later variations of the “burn your local mosque” image removed Mogh’s satanic logo. Social media accounts have used it as an Islamophobic call to arms. In New York, a venue closed after hosting a neo-Nazi music festival last May. A Twitter user posted photos from outside the venue, which included a neo-Nazi-owned van covered in hate stickers. One such sticker read “burn your local mosque”.
Mark Bennett, 48, was jailed last July following a racially aggravated public order offence at a mosque in Bristol. Bennett and others had placed rashers of bacon on the door handles of the mosque. They had shouted racial abuse at a member of the mosque, thrown bacon sandwiches at the mosque, and tied a St George’s flag to the railings with the words “No Mosque”. A Facebook page, linked to Bennett, had posted the “burn your local mosque” image, with the caption “Fire in the hole..!!!” Two Instagram users in the United States have promoted “burn your local mosque” patches in recent months. Both posts encouraged individuals to message for further details. The user ‘houndsnhogs88’ promoted the patch a day after the terrorist attacks in Brussels. He captioned the post: “After much deliberation, I’m finally putting this on my vest #burnyourlocalmosque #JeSuisBruxelles #fuckIslam #stopIslam”.
On November 14, 2015, Patrick Keogan allegedly made online threats against two Islamic centres. In one alleged Facebook post, Keogan included an image of a mosque in flames, captioned with the text “burn your local mosque”. His attorney argued that the allegations do not constitute “crimes of violence.” The judge, however, found probable cause to charge Keogan. Tell MAMA staff became aware of the image last year. On February 27, 2015, the Facebook page of the Sunderland North East Infidels uploaded the image. In early 2016, Tell MAMA received numerous reports of social media accounts sharing the image. A Twitter account linked to the notorious troll John Nimmo targeted Tell MAMA staff with this image in 2015. By April 2016, Tell MAMA reported that an individual had been arrested for posting this image online.
The “burn your local mosque” meme has built a European audience since at least 2014. On November 6, 2014, an online post in German promoted the “burn your local mosque” patches atop bullets. A reverse image search revealed the use of the image as an avatar on a Polish-language forum that same year. Nor does this idea exist in a vacuum: it is possible to buy patches which read ‘burn your local church’. Despite the band’s obscurity and niche genre, its imagery has, by accident, become a means for racists to target Muslim communities.
© Tell Mama
Relentless sexist attacks are having a serious effect on women and their freedom.
By Gillian Schutte
4/9/2016- It seems the public turns a blind eye to the rampant sexist cyberbullying that invades social media. There is a limited response to the misogynistic assault to which women are subjected, although it is patently clear this cyberbullying is intrinsically bound up in sexism, racism and transphobia, and that many times it also takes on sexually abusive dimensions. Cyber hate speech is often written off as innocuous - especially by people who aren’t subjected to its full force and who do not know how exhausting it is to deal with every day, nor how damaging it is to your psychological and emotional welfare.
I, along with thousands of other women, have been subjected to cyber abuse for many years. There is nothing new about it, although it is remarkable to me how many men are let off the hook for blatant abuse of women in full view of the public - human rights activists and feminists included. I have had openly violent and misogynistic commentary directed at me by well-known public figures, DA MPs and column writers, and yet seldom have Chapter Nine institutions or fellow activists jumped in and reprimanded the perpetrators, despite many of these remarks being nothing short of hate speech.
Reporting these cases to the SA Human Rights Commission does not really help. Beyond compiling a report, the commission is unable to tackle this menace. Legal representation and investigation cost a fortune and could bear little fruit. It is an added abuse that the victim has to pay to protect herself from a syndrome she did not incite and over which she has no control. Two years back an article calling for people to hang me in the streets was published on a site created using WordPress - along with a picture showing me with a hangman and the word “Traitor” scrawled across my forehead.
I was appalled at this violence. I wrote to WordPress demanding that the death threat be removed as it put me and my family in danger. WordPress said it did not go against its code of conduct, although it clearly called for my murder, and displayed a picture of my face. At about the same time a man with a Voortrekker-style beard had been parked outside my house for a few days - the entire day - for no reason other than to write in a notebook every time one of us left the property. I reported the hate speech and the lurker to the police. They opened a case, but did not have the resources to do a full online investigation to track down the author of the website. The police patrolled past my house for a week and the man soon moved on.
We paid for private security for the next two months after a slew of death and rape threats hit my inbox, Twitter and Facebook feeds - some from as far afield as Canada, the US and Russia, but most of them from South Africa. Eventually our resources ran out and we had to end our security contract. Friends offered to accompany us on film shoots as security. The hate speech site was taken down only when a Facebook friend was shocked enough to start a campaign aimed at WordPress and mobilised many followers to send it letters of complaint.
After about 500 “take it down” requests, WordPress finally took down the death threat and the pictures of me and my family. The site had been up for 18 months by the time it acted. The person responsible had hidden behind a pseudonym - but in South Africa there seems to be no shame in open abuse and misogyny, even if you are a well-known public figure. I guess it passes as normal to call women whose political ideology you do not agree with “whores”, “ho’s”, “inbred nutters” and “dirty”, even if you have followers who purport to uphold human rights. This is highly offensive and violent anti-woman language.
I am not against women who make a living from selling sex - what I am against is the meaning these misogynists attach to the word “whore” and how they use it to demean women. These are the very same men who shout “xenophobia” loudest, while they practise dehumanising reductionism on women. It is no wonder they have this sense of entitlement when the public allows them to get away with this vile behaviour every time someone challenges their hold over the dominant discourse. The resounding silence after this type of abuse only encourages them. After my recent exposure of Judge Mabel Jansen’s racist utterances on my public Facebook page, I received a host of messages using the same language in ominous threats.
While hate speech and violence against women and girls are not a new syndrome, there has certainly been an upsurge in the use of internet platforms to perpetuate this hate. This is because perpetrators can remain anonymous while expanding their scope and impact. In South Africa, however, these haters do it in the open - they do not have to remain anonymous because they have tacit approval from a silent majority. All of the foregoing abusers know they can get away with hosting conversations in which women are called whores - and with making similar commentary themselves.
I have been battling cyber molestation from well-known figures and anonymous trolls for six years and I cannot fight it on my own. This is a real issue for women with voices - it is not a figment of our imaginations and not a “desperate need for attention”. Who wants daily death and rape threats and sexual or violent intimidation? Who wants their child to be called all manner of hurtful and disgusting things - or their husband’s “black c***” to be referred to as a reason for their being called a whore? Right now Twitter trolls are sending a slew of tweets linking me and black thinkers - and calling them monkeys and other dehumanising insults. It is an attack on blackness and black positive ideology. It is also an attack on women.
The aim of these trolls and bullies is to make sure that the social media space becomes an unpleasant and alienating experience. They engage in a pervasive assault on your psyche to shut down women who speak an anti-hegemonic language. This attack aims to shut down and intimidate voices that do not serve the dominant agenda. The methodology is intended to make sure you feel so violated and so invaded that you will eventually learn your place as a woman and shut up. It is brutal sexism and has a similar effect on the receiver as psychological battery.
Being abused in the open market space has a destabilising effect. Many girls who are victims of this type of abuse are not even sure if they are victims. They internalise the insults and begin to blame themselves. They lose confidence and question their sanity. But it is abuse and it is taken seriously in some countries where people speak about it, organise around it and recognise and name it for what it is. It is also something that victims cannot control alone. The onus of having to police your own social media accounts is like having to avoid going out so you won’t be raped, harassed or assaulted. It places the responsibility on the victim to remedy the situation. It blames the victim and overlooks the perpetrator. It empowers the perpetrators. They can be sure they have distracted you from your work in the hours you have to spend in complicated investigative and legal processes.
Sites such as Facebook, Twitter and WordPress have to start taking this syndrome seriously and track and charge hate speech pushers. Not long ago Facebook allegedly removed multiple photos of women breast-feeding in public, but ignored complaints about racist, sexist and homophobic commentary. This is a war of the discourses and those of us who are seasoned activists will not back down. But the “battered activist” syndrome and cyber misogyny beg exposure. Cyber abuse and the sexual harassment of women in the marketplace are part of the same syndrome. Because they are largely ignored by the public, women are increasingly being alienated and intimidated out of these spaces. This is all part of the war against women and it should not be ignored.
Schutte is a founding member of Media for Justice, a social justice and media activist as well as a documentary film-maker.
© The Sunday Independent
• Annual report shows 2.5% increase in reported incidents of discrimination
• Most significant figure is 18% rise in reported social media instances
6/9/2016- Incidents of discrimination in football are on the increase as abuse moves from the terraces to the internet, according to a report. Statistics released by the anti-discrimination organisation Kick It Out for the 2015-16 season show the number of incidents reported to the group rose 2.5% year on year. The most significant rise concerned social media, with 194 incidents reported, an increase of 18% on 2014-15. Incidents involving supporters at grounds decreased by 16%. “We’ve noticed a shift whereby reported incidents are decreasing in stadiums, especially in the professional game, and social media is the place where supporters can post discriminatory language,” Kick It Out said. “It’s a change whereby abuse isn’t necessarily directed in person to someone’s face but the ease of social media means individuals can post instantly from behind a phone or keyboard.”
Compiled from incidents reported to the organisation, the Kick It Out survey has shown increases in discrimination each year since it was first published in 2012-13. The results of the latest study were released on the same day Kick It Out launched its “Call Full Time On Hate” initiative, which pushes for a unified effort from football bodies to eradicate prejudice and hate from the game. “Football has undoubtedly come a long way and made progress in tackling discrimination and making the game open to all. However, there’s vulnerability at this moment in time,” said the Kick It Out chair, Herman Ouseley. “As cutbacks have taken place across society, football has stepped up and led the way in terms of its community programmes, focusing on diversity, inclusion and equality using the power of football.
“It’s become a leader for this area but young people are vulnerable to the … increases in reported hate crimes and incidents. Education is one of the essential elements of tackling ignorance, bigotry and intolerance. Bringing people of all backgrounds together to play and participate in football activities provides the ideal environment to stimulate learning with and from each other about each other. “Kick It Out is intensifying its education work within football, including the professional sector, with a particular emphasis on football at grassroots.”
A social media incident was deemed to be content related in any way to football, including a post by someone who claimed, in their social media biography, to be a supporter of a particular club. As well as social media, there was also a significant increase in incidents involving players, managers and staff at a professional level, with 13 being reported. However, there was a 16% decrease in reports of incidents involving supporters and also a smaller decrease in incidents at a grassroots level. The rise in social media incidents will be a particular concern, given the situation of the Burnley striker Andre Gray, who has asked for a personal hearing over his Football Association misconduct charge for homophobic posts on Twitter in 2012. Speaking to the Guardian last month, Ouseley urged the game to do more to promote community cohesion in the face of a rising tide of hate speech and intolerance exacerbated by the Brexit debate.
“It has been noticeable for at least two and a half years that there has been a rise in what I would call intolerance,” he said. “That not only happens in the streets and in the playground but in higher levels of society. There is an underlying subliminal message that all came to the fore during the last few weeks with ‘We want our country back’ and so on.”
© The Guardian.
2/9/2016- Victims of crime will soon be able to upload CCTV footage and track investigations on the internet as Essex Police looks to expand its online services. The force is encouraging people to use its website to record non-emergency crime and lost or found property after suffering a series of cutbacks and closure of police stations earlier this year. The number of people reporting crime online in Essex has more than doubled in the last month. In July, 1,045 people used the reporting crime feature on essex.police.uk compared with 428 the previous month – a 144 per cent increase. Amongst the types of crimes reported were cases of shoplifting, cycle theft, criminal damage, hare coursing, fraud and theft.
In July, 620 people reported minor crashes compared with 542 the previous month. Improvements to the website will continue throughout the year, which will mean users will be able to upload files including CCTV footage and photographs. People will also be able to register for an account, enabling them to track the progress of the investigation into their crime. Chief Inspector Justin Smith, Essex Police’s head of demand management, said: “This data shows more people are using the online service and the overwhelming majority of those who do are happy with the service they get. “Reporting non-emergency crime online is proving more effective and convenient for victims and the other services available, like information on where to report non-policing matters, gives us more time to fight crime.
“Eight out of ten adults across the UK have broadband access and two thirds of people use mobile phones and tablets to use the internet, so we have to cater for that demand. “For those people who don’t have access to the internet we are still contactable in person or over the phone. “The move to online reporting, and therefore subsequent reduction in demand on the 101 number, will also hopefully improve the service for those who are still contacting us by phone.” The site has access to online reporting services for non-emergency crime, minor traffic collisions, lost and found property, fraud, hate crime, potholes, abandoned cars, street lighting and noise nuisance issues. It also provides answers to frequently asked policing questions. Visit essex.police.uk for further details.
© The Echo News
More and more cyber crime cases are piling up on investigators, resulting in a growing backlog of such investigations, according to the latest government report.
31/8/2016- While there were 8,032 cyber cases pending at the end of 2014, the number increased by 47%, touching 11,789 in December 2015. The report “Crime in India 2015”, released on Tuesday, showed that investigators had handled 19,423 cases in 2015, which included the pending cases from the previous year. They cleared only 7,634 cases last year, just 39.3% of the total cases investigated in 2015. In July, DH had reported that cyber crimes had witnessed an alarming 20.5% rise, with Uttar Pradesh, Maharashtra and Karnataka topping the list. There were 11,592 cyber crimes reported in 2015 compared with 9,622 the previous year. In 2013, 5,693 cases had been registered. According to the report prepared by the National Crime Records Bureau (NCRB), charge sheets were filed in courts in 3,206 cases. The rate of chargesheeting was 46.8%, while 60.7% of cases were pending.
Greed and financial gain were cited as the main reason behind cyber crime in as many as 3,855 cases. The motive behind another 1,110 cases was fraud and illegal gain. In around 1,200 cases, women were the victims — 606 cases related to using cyber instruments to insult the modesty of women, such as posting defamatory pictures and writings, and 588 related to sexual exploitation. There were 205 cases in which the motive was to incite hate crimes against communities, while there were 293 cases of blackmailing. Of the 8,121 people arrested, including four foreigners, 415 were “sexual freaks”, while 1,195 were neighbours, relatives or friends and 1,594 business competitors of the victims.
© The Deccan Herald
31/8/2016- More needs to be done in terms of the law to get social media companies to assist police in identifying perpetrators of online hate speech, the SA Jewish Board of Deputies (SAJBD) has told Parliament. SAJBD national director Wendy Kahn made a public submission on Wednesday to Parliament's Portfolio Committee on Communications, which is hosting public hearings on the Film and Publications Amendment Bill. Kahn said perpetrators of online hate speech were taking refuge in hidden identities on Facebook and Twitter. Using an example from 2014, she said it had been difficult to lay charges against an individual, Phumza Zondi, who had threatened to "come after Jews" in a post on the board's Facebook page.
"You Jews think you are special just because the ANC keep bowing down to your demands," the post read. "Well wait and see... This time we are prepared and ready for you. We will ambush you in your homes and rape you and your cats and drive you to the sea." Kahn said the name was fake, and while Facebook was willing to assist in identifying the user's true identity, they first needed a court order from police. The board followed up with the SAPS Cyber Unit, which sent an order to Facebook directly. Two years on, they are still no closer to identifying the individual who made the post, and the Deputy Public Prosecutor declined to prosecute.
Protection of identities
"While the country's laws adequately address hate speech, the problem is the medium, in this case online, and the lack of provisions when perpetrators take refuge in hidden identities online," Kahn told the committee. "Facebook for instance will take the post down, but that doesn't help me. I can't take action if I don't know who the person is. "The issue is when there is protection of identities." Kahn told the committee that the officer they had approached at the local police station had asked them, "What is Facebook?", indicating the need for training. She said the bill should contain provisions that mandate international social media companies to assist the police when a user breaks the law of the country on the platform. The amendment bill currently suggests a fine of R150 000 for people found guilty of online discrimination.
'We just want to establish a procedure'
Democratic Alliance MP Phumzile van Damme asked Kahn to clarify the board's stance on criminal cases versus general cases. "When people break the law in the country, foreign social media companies should not protect them," Kahn said. "They are essentially giving them refuge. "In France, they have successfully forced organisations like Facebook, through legal means, to identify perpetrators. "It's complicated due to company law and global freedoms, but other countries have successfully managed to get this information from service providers, and with our current standing on racism laws, South Africa shouldn't be any different." Kahn said the US's Anti-Defamation League was willing to educate police and government officials in matters of online discrimination. "We just want to establish a procedure and a correct route for all South Africans in future situations."
© News 24
30/8/2016- After declining to explain why it initially refused to remove an anti-Semitic post from the comments on an Alberta professor’s page, Facebook said it erred in allowing the screed to stay up and subsequently took it down. On Aug. 26, B’nai Brith Canada was notified about a photograph and adjoining paragraph that a Facebook user named Glen Davidson had posted in the comments section of University of Lethbridge professor Anthony Hall’s profile. The image – which Facebook first told B’nai Brith did not violate the company’s community standards but later removed without explanation – featured a man assaulting another man who appeared to be an Orthodox Jew. Beside the photo was a rant containing anti-Semitic slurs, Holocaust denial and calls to kill “all Jews… Every last one.” The paragraph, which is attributed to “Ben ‘Tel Aviv Terror’ Garrison,” begins: “There was never a ‘Holocaust’ but there should have been and, rest assured, there will be, as you serpentine kikes richly deserve one.” It refers to Jews as “greedy, hook-nosed kikes” and likens the Jewish People to “vermin” and “cockroaches.”
Representatives of the social media giant’s communications department told The CJN on Aug. 30 that it does not comment on specific decisions regarding its moderation of content. After The CJN published the initial version of this story on Aug. 30, a Facebook spokesperson issued an official statement, saying the post in question “was reviewed in error and was taken down as soon as we were able to investigate. Our team processes millions of reports each week and we sometimes get things wrong. We’re very sorry about the mistake.” A spokesperson for the Calgary police told The CJN that someone in Calgary filed a complaint about the Facebook post, but the file is being transferred to Lethbridge police for investigation. B’nai Brith spokesperson Marty York said Amanda Hohmann, national director of B’nai Brith’s League for Human Rights, contacted Facebook after learning about the post.
Two hours later, Facebook sent what York described as “a standard e-mail” saying the graphic did not violate the company’s community standards, the set of policies Facebook uses to regulate what it refers to on its company website as the “type of sharing [that is] allowed on Facebook, and [the] type of content [that] may be reported to us and removed.” The same day, after receiving Facebook’s response, B’nai Brith filed a complaint with Lethbridge police about the anti-Semitic post. It also issued a news release detailing the content of the post and Facebook’s refusal to remove it, in addition to sending out an e-mail blast to some 30,000 B’nai Brith supporters and media outlets, and posting about the incident on its Facebook page. York said it’s incomprehensible that Facebook didn’t immediately regard the anti-Semitic post as a violation of its policies.
“It doesn’t make sense to us whatsoever how it could not be perceived at the outset as pure hate speech. This is probably the clearest, most obvious kind of anti-Semitism that one could possibly create… And yet Facebook allowed it to [remain online] until massive protests happened,” he said. Within hours of B’nai Brith’s campaign, Facebook deleted the inflammatory post from Hall’s page. York said Facebook never explained the apparent reversal of its decision. “Facebook has a reporting system that’s opaque and the mechanisms by which it operates are not clear to the public,” he said. B’nai Brith stressed that Hall himself did not post the graphic on his own wall, but that Hall has been known to use his academic credentials to deny the Holocaust and promote 9/11 conspiracy theories.
Separately, the Centre for Israel and Jewish Affairs (CIJA) said it asked the University of Lethbridge to take disciplinary action against Hall, a tenured professor in the school’s liberal education program, in early August. The request came after the Lethbridge Herald reported that the university was defending Hall’s right to promote conspiracy theories online, including the idea that Jewish Zionists are waging a war on Muslims through control of western media. CIJA’s director of communications, Martin Sampson, said the group has kept tabs on Hall ever since he espoused “rabidly anti-Israel views and advanced a number of anti-Semitic tropes” at a Calgary interfaith dialogue event two years ago. Sampson said the university hasn’t yet responded to CIJA’s request.
A new online form enables users to report threatening or abusive content found on Microsoft's online communities and services.
30/8/2016- Microsoft has launched a new tool that alerts the Redmond, Wash., software giant when its users encounter hate speech on its consumer online services. A dedicated web form now allows users of the company's Skype, Xbox Live and other services to report the offending content. "Without question, the internet is overwhelmingly a force for good. We strive to provide services that are trustworthy, inclusive and used responsibly. Unfortunately, we know these services can also be used to advocate and perpetuate hate, prejudice and abuse," wrote Jacqueline Beauchere, Microsoft's chief online safety officer, in a blog post. "As part of our commitment to human rights, we seek to respect the broad range of users' fundamental rights, including the rights to free expression and access to information, without fear of encountering hate speech or abuse."
Recently, a Microsoft-sponsored survey conducted by the National Cyber Security Alliance found that nearly four in 10 American teens had been subjected to cruel or abusive messages online. Objectionable remarks were often made about a teen's appearance (45 percent), their sexual orientation (27 percent), gender (25 percent) or ethnicity (24 percent). Microsoft isn't the only tech heavyweight that has pledged to combat online abuse. Google provides tools of its own to report threatening content. "Anyone using our Services to single someone out for malicious abuse, to threaten someone with serious harm, to sexualize a person in an unwanted way, or to harass in other ways may have the offending content removed or be permanently banned from using the Services," states the company's User Content and Conduct Policy. "In emergency situations, we may escalate imminent threats of serious harm to law enforcement."
Following a string of high-profile women leaving its platform after enduring harassment from some users, Twitter announced earlier this month that it was turning on its Quality Filter for all of its users. Twitter's Quality Filter technology, formerly reserved for celebrities, government officials and other public figures with "verified" accounts, analyzes various signals to weed out tweets from bots and other low-quality content, preventing them from appearing on users' timelines and other parts of the Twitter experience.
© E Week
30/8/2016- On Tuesday, the Body of European Regulators for Electronic Communications (BEREC) published guidelines ensuring that the region’s internet users receive strong protections for open and non-discriminatory access to the internet. According to the guidelines (available here), internet users have the right to access and distribute information and content, use and provide applications and services, and use access devices of their choosing to connect with any other person, device or service on the network. The guidelines ensure that EU member countries will enact national Net Neutrality rules that are consistent across the region. BEREC published a draft of the guidelines in June, followed by a six-week public consultation period during which more than 500,000 people commented, the vast majority supporting strong Net Neutrality protections. Today’s publication is the final step in a three-year process to adopt a Net Neutrality standard, a process marked by broadband industry efforts to weaken the proposed rules. It supersedes a 2013 legislative proposal by the European Commission that left open loopholes for content discrimination and throttling by access providers.
Free Press Senior Director of Strategy Timothy Karr made the following statement:
“Internet users have fought and won Net Neutrality protections in India, South America and the United States. Europe’s decision today – heeding the advice of internet users who favor robust safeguards for the open internet – is an essential part of this global push to advance the online rights of everyone. “Europeans have good reason to celebrate today. But they must remain vigilant to ensure regulators enforce the rules keeping the best interests of internet users in mind. Online gatekeepers never give up. Despite last year’s Net Neutrality victory in the United States, telecommunications companies have spared no expense on efforts to bend the rules in their favor and weaken enforcement. “This victory is a credit to the sleeves-up outreach and organizing of groups like European Digital Rights, SavetheInternet.eu and Access Now, which helped mobilize the region’s overwhelming public response in support of Net Neutrality.”
© Free Press
1/9/2016- White nationalists and self-identified Nazi sympathizers located mostly in the United States use Twitter with “relative impunity” and often have far more followers than militant Islamists, a study being released on Thursday found. Eighteen prominent white nationalist accounts examined in the study, including the American Nazi Party, have seen a sharp increase in Twitter followers to a total of more than 25,000, up from about 3,500 in 2012, according to the study by George Washington University’s Program on Extremism that was seen by Reuters.
The study’s findings contrast with the declining influence of Islamic State, also known as ISIS, on Twitter Inc’s service amid crackdowns that have targeted the militant group, according to earlier research by report author J.M. Berger and the findings of other counter-extremism experts and government officials. “White nationalists and Nazis outperformed ISIS in average friend and follower counts by a substantial margin,” the report said. “Nazis had a median follower count almost eight times greater than ISIS supporters, and a mean count more than 22 times greater.”
While Twitter has waged an aggressive campaign to suspend Islamic State users - the company said in an August blog post it had shut down 360,000 accounts for threatening or promoting what it defined as terrorist acts since the middle of 2015 - Berger said in his report that “white nationalists and Nazis operate with relative impunity.” Reuters was unable to independently verify the findings. Asked about the study, a Twitter spokesman referred to the company’s terms of service, which prohibit promoting terrorism, threatening abuse and “hateful conduct” such as attacking or threatening a person on the basis of race or ethnicity. The company relies heavily on users to report terms of service violations.
The report comes as Twitter faces scrutiny of its content removal policies. It has long been under pressure to crack down on Islamist fighters and their supporters, and the problem of harassment gained renewed attention in July after actress Leslie Jones briefly quit Twitter in the face of abusive comments. Berger said in an interview that Twitter and other companies such as Facebook Inc faced added difficulties in enforcing standards against white nationalist groups because they are less cohesive than Islamic State networks and present greater free speech complications. The data collected, which included analysis of tweets of selected accounts and their followers, represents a fraction of the white nationalist presence on Twitter and was insufficient to estimate the overall online size of the groups, the report said.
Accounts examined in the study possessed a strong affinity for U.S. Republican presidential nominee Donald Trump, a prolific Twitter user who has been accused of retweeting accounts associated with white nationalism dozens of times. Three of the top 10 hashtags used most frequently by the data set of users studied were related to Trump, according to the report, entitled “Nazis vs. ISIS on Twitter.” Only #whitegenocide was more popular than Trump-related hashtags, the report said. The Trump campaign did not respond to a request for comment.
27/8/2016- Racism had a moment on Vermont Twitter last week when Rep. Kiah Morris, D-Bennington, became the target of harassment on the social media platform. The offending tweet, which featured a caricature of a black person using obscene, racially-loaded language, took issue with Morris — one of only two black members of the Vermont House of Representatives — representing a predominantly white constituency (bit.ly/vtd-morris). The user who issued the tweet — @MaxBMisch, a self-described “#AltRight sh-tlord” whose Twitter feed is a stream of racist, antisemitic and misogynistic content — is emblematic of a segment of the Twitter community that actively seeks out opportunities to harass others, especially women and minority populations.
Typically, these users are angry, white men who use Twitter as a forum to exercise their impotent rage, lashing out against political correctness, multiculturalism and issues of race, gender, religion and sexuality. Even female Ghostbusters and black Human Torches are taken as an affront to their values. Exactly why these men feel compelled to attack others, often without provocation, is open for speculation. One motivator seems to be a perceived loss of identity and anxiety over the decline of white male hegemony. Another seems to be the belief that any gains made by women or minorities somehow diminish their own worth. Certainly, Donald Trump’s campaign of demagoguery has, in no small part, helped to embolden these angry individuals. (Plus, some people just enjoy being jerks.)
To say Twitter has a harassment problem is an understatement. BuzzFeed reporter Charlie Warzel, in a recent piece about Twitter’s long, tortured history of user abuse, characterized it as a “fundamental feature” of the platform (bit.ly/twitter-bf). The lengthy piece, which offers an insider perspective on why Twitter can’t (or won’t) get a handle on abuse, provides some excellent background on the situation. The TL;DR version: lagging growth and frequent personnel changes — as well as an overwhelmingly white, male and heterosexual leadership team that underestimates the impact of abuse — have created an optimal environment for harassment.
From its launch in 2006, Twitter has held itself up as a champion of free speech. In the past, the company has thumbed its nose at dictators who want to censor their citizens. Both the Arab Spring and Black Lives Matter movements used Twitter to boost their message and attain a global, mainstream audience. But such freedom has a dark side. It also has allowed ISIL to post beheading videos and recruit members, white supremacists to spread their hate and trolls to viciously target celebrities, politicians, journalists and any other users who draw their ire. As Warzel notes, when pressed to address these problems, Twitter executives are ambivalent. They may denounce specific instances of harassment, but they have done little to improve conditions overall. The obvious deflection is that they’re not in the business of policing speech. Co-founder Biz Stone has previously brushed off criticisms stating, “Twitter is a communication utility, not a mediator of content.”
They can tell themselves that, but it doesn’t really scan. Verizon is also a communication utility, but if women were called “stupid whores” every time they tried to make a phone call or send a text message, Verizon certainly would do something about it. And that really is how bad it’s gotten. Many women can’t send a single tweet without receiving misogynistic attacks and threats of physical or sexual violence. In July, feminist writer Jessica Valenti took a break from the service after rape and death threats were directed at her five-year-old daughter (bit.ly/valenti-slate). Indeed, a number of recent high-profile incidents have forced Twitter to finally take harassment seriously — as if 2014’s GamerGate scandal wasn’t enough of a wakeup call (bit.ly/gg-primer).
Around the same time as Valenti’s departure, “Ghostbusters” star and “Saturday Night Live” cast member Leslie Jones also deleted her account after she was bombarded by an avalanche of racist and misogynist tweets (bit.ly/bf-jones). In response, Twitter banned conservative troll and Breitbart editor Milo Yiannopoulos, who led the attack on Jones. But banning one well-known bully hardly solves the problem. And, as Warzel points out, Twitter is much more responsive to celebrity complaints than those of regular users. CEO Jack Dorsey personally stepped in to mitigate the Jones incident and coax the actor back online, but I’m doubtful he’d do the same for the rest of us. (Online harassment of Jones, regrettably, has continued outside of Twitter. Hackers attacked her personal website on Wednesday, posting nude photos and racist images as well as photos of her passport and driver’s license.)
Earlier this year, the podcast “Just Not Sports” brought attention to the harassment of women on Twitter with a video of men reading misogynistic tweets to female sports journalists (bit.ly/morethanmean). The video has been viewed more than 3.6 million times on YouTube. (If you’re not familiar with what online harassment looks like, this is a good place to start.) Twitter regards itself as a communication utility, but maybe it should look at itself more like a bar — a privately-owned public space where free speech is welcome, but where there is also an expectation of a certain level of decorum. People can be jerks in bars, but if someone starts shouting the N-word or the C-word, they’re gonna get 86’d real fast because, at the end of the day, it’s in the best interest of the bar owner to create an environment where people can enjoy both freedom and safety.
For his part, Dorsey has admitted Twitter has dropped the ball. “No one deserves to be the target of abuse on Twitter,” he told investors last month. “We haven’t been good enough at ensuring that’s the case, and we need to do better.” To that end, Twitter introduced several new features last week aimed at limiting harassment (bit.ly/twitter-features). Essentially, the update extends existing verified user controls to all users, allowing them to only see notifications from accounts they follow. In addition, a new quality filter will weed out spam and bots. While this gives some control to users, it’s unlikely to do much to curb harassment since it doesn’t actually stop trolls from tweeting abusive content in the first place. Unfortunately, ignoring the problem does nothing to solve it.
In Vermont, we similarly prefer to ignore racism rather than address it. The attack on Rep. Morris is an unsettling reminder that racism exists in our quaint, green little utopia. We fancy ourselves a progressive lot — we were the first state to abolish slavery, right? — but much of that tranquility is the result of our homogeneity. It’s easy to be tolerant and open-minded when everyone around us is white, but when the status quo shifts, those values are put to the test and the results are often shocking and disappointing. We are confronted so infrequently with race issues that when we are, we discover we are ill-equipped to confront them — especially casual racism, which can be more difficult to notice. Racist behavior is often downplayed, brushed off or somehow explained away with justifications like, “boys will be boys,” “telling it like it is,” “saying what we’re all thinking” or “just joking around.”
Past racist behavior inside the Rutland Police Department alleged by former officer Andrew Todd resulted in a $975,000 settlement last year (vpb.co/rh-todd). Over the course of his tenure at the RPD from 2003-11, Todd said he endured racial insults from his co-workers and witnessed instances of racial profiling. It’s worth noting here that the RPD has since taken deliberate steps to improve its culture, so kudos to them.
Last December in St. Albans, Bellows Free Academy students faced a racist response from fellow students and members of the community for holding a rally against racism (bit.ly/bfa-rally). In this case, the mere act of raising awareness of racism was enough to set people off and compel them to wave Confederate flags in opposition.
Opposition to Rutland’s effort to welcome Syrian refugees has been fertile ground for bigoted and xenophobic rhetoric. When confronted with blatant examples of intolerance, some have waved it away, arguing that the real bigots are those calling out the bigotry.
Rutland-area filmmaker and musician Duane Carleton takes on homegrown intolerance in his upcoming documentary, “Divided by Diversity,” which tells the story of several black student athletes from the Bronx who faced backlash from local families in 2010 when they joined the boy’s varsity basketball team at Mount St. Joseph Academy (bit.ly/nyt-msj). The film paints a fair yet troubling portrait of privilege, entitlement and small-town racism not just in Rutland but around the state.
Rep. Morris, in a Facebook post following the Twitter incident, appealed to Vermonters to boldly confront racism and intolerance when we see it, stating:
“When you allow drive-by harassment of locals on social media pages but do not speak out against it, you endorse this kind of behavior and discourse in our communities. Our right to live in a loving community does not end with someone else’s use of their First Amendment rights. Deny them the audience. Decry the hatred. You have an obligation to do so.”
Yes, we do. We cannot ignore racism. We cannot trivialize it, relativize it or accept bigoted attitudes as simply another perspective. On social media and in the real world, we need to be better and we need to call out racism where we see it. To do that, we must able to talk about it honestly; we must acknowledge racism exists here in Vermont and we must come together as a community to combat it.
© The Rutland Herald
US authorities have launched an investigation into the hacking of Leslie Jones' website and iCloud account after intimate photos of the actress were posted online.
27/8/2016- The Department of Homeland Security says it's looking into the breach. The star's personal information including her driving licence and passport were published on the site. An image of the dead Cincinnati Zoo gorilla Harambe appeared in an apparent racist insult to the actress. Personal photos of Ghostbusters actress Leslie Jones posing with stars including Rihanna, Kanye West and Kim Kardashian West were also posted, before her Tumblr page was taken down. A spokesman for Immigration and Customs Enforcement, part of Homeland Security, said: "The investigation is currently ongoing. In order to protect the integrity of the case, no further details are available at this time." The 48-year-old left Twitter briefly last month after she received racist messages. She was sent tweets blaming her for Aids and comparing her to a gorilla. She criticised the social media company for not doing enough to deal with online trolls.
Twitter announced a new "quality filter" earlier this week which is designed to allow users to deal with trolls and abusive posts more easily. Friends and fellow actors have come out in support of the star after the latest online abuse was posted. Ghostbusters director Paul Feig called it "an absolute outrage", while Oscar-winning actress Patricia Arquette warned people sharing explicit photos of Leslie Jones that they could be taken to court. Girls star Lena Dunham tweeted: "Let's turn our anger at trolls into love for Leslie Jones." Star of the 2009 film Precious and Oscar nominee Gabourey Sidibe said she didn't understand how people could hate someone so much. And US presidential hopeful Hillary Clinton also tweeted her support.
Leslie Jones hasn't commented about the cyber-attack on social media. The actress was part of the all-female reboot of Ghostbusters this year. Over the past few weeks she's been working at the Olympic Games in Rio for US TV network NBC. In June, a man in America pleaded guilty to running a phishing campaign to steal private pictures and videos from film and TV stars. Edward Majerczyk, from Chicago, was arrested after police investigated the 2014 cyber-attack. Nude photos of more than 100 celebrities, including Rihanna and Jennifer Lawrence, were leaked online.
© BBC News
26/8/2016- Teenager Ali David Sonboly killed nine people and injured 21 during his rampage in Munich last month before shooting himself. Now an investigation by the Standard provides a disturbing insight into how the underground web enabled him to plot the attacks — and could be used by others to carry out more. Last December, seven months before he carried out the murders, Sonboly, 18, used the codename “Mauracher” to place a specific seven-line message requesting a Glock 17 pistol and 250 rounds of ammunition. He offered a price of 2,500 Euros but ended up paying almost 2,000 Euros more. The full extent of his use of the dark web was exposed in a lucky break for German police, as they investigated two separate attempts to use the underground network to obtain weapons, by a 62-year-old accountant and a student aged 17.
Armed police arrested a 31-year-old unemployed salesman in the town of Marburg after setting up a sting operation. He allegedly incriminated himself and revealed to undercover officers that he had supplied the Glock 17 pistol Sonboly used to cause the carnage. The arms seller allegedly told detectives he handed the Glock and 350 rounds of ammunition to Sonboly at meetings on May 20 and July 18, four days before the attack. There is no evidence Sonboly, a German-Iranian dual national, was linked to Islamic State or other Islamist terror groups. He had written what has been called a “manifesto of murder” after studying the actions of Anders Breivik, the Norwegian neo-Nazi who killed 77 in a bomb and gun attack five years ago. He had also suffered psychological problems and had been bullied at school, factors which the authorities say may have triggered his rampage on July 22.
Sonboly used the internet to carry out research into mass shootings and lure his victims to his chosen killing ground — a McDonald’s in north-west Munich. He shot at his victims in and outside the restaurant and continued the rampage in the nearby Olympia shopping centre, before shooting himself in the head. Seven of the victims were teenagers. Security agencies of Germany’s Western allies have been kept informed about the inquiry which followed — the information feeding into already rising concern about the use of illicit internet sites by terrorists and violent criminals. The dark net is utilised for a range of illegal purchases including drugs, child abuse images and arms. Users need to be adept at navigating its complex avenues while ensuring anonymity.
After the suicide bombings in Brussels last March, French interior minister Bernard Cazeneuve called for greater control of websites “which are not indexed by traditional search engines and which run a large amount of data issued by criminal organisations including jihadists. Those who attack us use the dark net and encrypted messaging to get access to weapons to hit us”. When Sonboly bought his gun he used the encryption system PGP, making his purchase with Bitcoins. There is a strong possibility he bought a second weapon. Police are looking at claims that he was seen with another gun and his internet searches also included requests for .45 calibre ammunition, not needed for a 9mm Glock.
British security analyst Robert Emerson said: “If teenagers can get on it, so can many others involved in terrorism and organised crime. When guns are supplied to terrorists and robbers there is always a chance that it can be traced, networks dismantled. But there are serious obstacles if the deal is done through the dark net because the raison d’être for that market is secrecy. “It is also, of course, an international market on the web, and goods can be shipped anywhere — this is why we are likely to see increasing use of it by terrorists and criminals.” Despite the difficulties, British and German security agencies have successfully carried out a joint operation to uncover firearms trafficking involving the dark net.
In 2014 career criminal Alexander Mullings used a mobile from his Wandsworth jail cell to order Skorpion sub-machineguns from Germany. The supplier, who had been active in the underground internet market, turned out to be a student in the Bavarian city of Schweinfurt. At Luton crown court last year, Mullings was given a life sentence after being found guilty of conspiring to possess firearms with intent to endanger life. The German government maintains it has tight gun control laws. However, after the Munich shooting, interior minister Thomas de Maizière said further regulations could be brought in, and “in Europe, we want to make further progress with a common weapons policy. We have to look very carefully at where to make legal changes”. Mark Mastaglio, a fellow of the Chartered Society of Forensic Sciences and a forensic ballistic adviser to the UN, said: “The UK has the gold standard when it comes to deactivating guns. But although the laws are very strict in the UK, that is not the case in some places elsewhere and the dark net affects all. That is a serious problem.”
© The London Evening Standard
A man has been arrested for allegedly supplying a gun to the teenager behind the recent Munich attack via the dark web - but the use of such websites can make tracking of weapons difficult
By Kim Sengupta
26/8/2016- The killing spree in Munich last month by an 18-year-old student caused shockwaves in Germany. There is now a breakthrough in the case with the arrest of the man who allegedly supplied the gun used to take the lives of nine people. The unfolding saga has provided disturbing insight into how the underground web – the dark net – was used to plan the murders and how it can be used to carry out future attacks. It was a stroke of luck that led detectives to a 31-year-old unemployed salesman in the town of Marburg who had allegedly procured the Glock 17 pistol for Ali David Sonboly, the young German-Iranian gunman. They had been looking at two different attempts to use the dark net to obtain weapons, one by a 62-year-old accountant, the other by a 17-year-old schoolboy – illicit transactions which in themselves illustrate the growing reach of the supposedly secret internet forum.
A “sting” operation was set up and it was while this was under way that the gun seller allegedly incriminated himself over the Munich massacre. He is said to have told undercover officers about handing over the Glock and 350 rounds of ammunition to Sonboly in two meetings: one on 20 May, the other on 18 July, four days before the shooting. The attack resulting from that purchase was one of five acts of killing in Europe over 12 days, several of them claimed by Isis. These have heightened fears of jihadist terror and added to recriminations over the West’s apparent inability to deal with the threat, as well as the supposed security threat posed by the waves of Muslim refugees coming to the Continent. Despite claims and counter-claims, no Islamist terrorist motive has emerged for Sonboly’s attack. He had written what has been described as a “manifesto of murder” after studying the actions of Anders Breivik, the Norwegian neo-Nazi who killed 77 people five years ago. Sonboly had suffered psychological problems and had been regularly bullied in school, factors which the authorities say may have triggered his rampage.
It can be revealed that last December, seven months before he carried out the murders, Sonboly, using the name Mauracher, had placed a seven-line message requesting a Glock 17 pistol and 250 rounds of ammunition, for which he offered €2,500. He eventually ended up paying almost €2,000 more for them. Allied Western security agencies have been kept informed by the Germans about the inquiry which followed, the information augmenting concern about the use of illicit internet sites by terrorists and violent criminals. Following the suicide bombings in Brussels last March, the French interior minister, Bernard Cazeneuve had called for greater control of sites “which are not indexed by traditional search engines and which run a large amount of data issued by criminal organisations including jihadists… Those who attack us use the dark net and encrypted messaging to get access to weapons to hit us”.
Sonboly used the encryption system PGP, making his purchase with Bitcoins. There is a strong possibility that he bought a second weapon which has not been found. Police are looking at claims that he was seen with another gun and his internet searches had also included requests for .45 calibre ammunition, not needed for a 9mm Glock. The dark net is used for a range of highly-illegal purchases including drugs, child pornography and arms. Those using it need to be adept at navigating its complex avenues while ensuring anonymity. Sonboly was not considered to have technical expertise and German police say they do not know how he acquired the necessary skills. Neither can they explain how the teenager, whose sole income was a paper round, was able to get €4,350 for the pistol and ammunition. There are indications that he bought the Bitcoins last year when the price for the crypto-currency was much lower, showing further pre-planning and a degree of financial acumen.
Asked in the days following the Munich killings how Sonboly was able to use the underground market, Robert Heimberger, the head of the Bavarian police force’s criminal investigations branch, responded: “I don’t know, I can’t get on the dark net myself, but I am noticing that many teenagers are actually able to get on it.” Robert Emerson, a British security analyst, said: “If teenagers can get on it, then so can many others involved in terrorism and organised crime. When guns are supplied to terrorists and robbers, there is always a chance that it can be traced, networks dismantled. But there are serious obstacles if the deal is done through the dark net because the raison d’être for that market is secrecy. It is also an international market and goods can be shipped anywhere, this is why we are likely to see increasing use of it by terrorists and criminals.”
The Glock 17 Sonboly used had a certification mark from Slovakia. It had, at one stage, been decommissioned and used as a theatre prop. It was then reactivated before being sold to him. The Kalashnikov AK-47s used in the Charlie Hebdo murders in Paris in January last year were also decommissioned and then converted back to fire live ammunition. That purchase, however, was not made through the dark net, and the supply chain was traced back to a shop, once again in Slovakia, in the west of the country. Despite the difficulties posed by the underground web market, British and German security agencies had successfully carried out a joint operation to uncover firearms trafficking involving the dark net. Two years ago Alexander Mullings, a career criminal, used a mobile phone from his cell in Wandsworth prison to order Skorpion sub-machine guns from Germany. The supplier, who had been active in the underground internet market, turned out to be a student in the Bavarian city of Schweinfurt.
The German government maintains it has tight gun control laws. However, in the aftermath of the shooting, interior minister Thomas de Maizière stated that further regulations may be brought in and that “in Europe, we want to make further progress with a common weapons policy.” “First we have to determine how the Munich perpetrator procured a weapon, then we have to look very carefully at where to make legal changes,” he said. Unlike Germany, private ownership of handguns is banned in Britain. Mark Mastaglio, a Fellow of the Chartered Society of Forensic Sciences in London and a ballistic advisor to the UN, said: “The UK has the gold standard when it comes to deactivating guns. A lot of work has been done to get a common EU policy on this although I am not sure how we are left after Brexit. But, of course, the problem remains that although the laws are very strict in the UK, that is not the case in some places elsewhere and the dark net is something which affects all. That is a problem.”
© The Independent - Voices
Facebook has determined that a graphic explicitly calling for the genocide of all Jews “doesn’t violate [its] Community Standards.”
26/8/2016- The global social media giant made its determination in response to a complaint filed by B’nai Brith Canada. It took Facebook two hours to conclude that the post was acceptable, according to the company’s standards. The image, which was posted as a comment on the Facebook wall of University of Lethbridge Professor Anthony Hall, depicts a white man assaulting an Orthodox Jew, accompanied by a lengthy, violent antisemitic screed beside the photograph. It should be noted that Hall is well-known for using his academic credentials to deny the Holocaust and promote 9/11 conspiracy theories. The image is accompanied by this message: “There never was a ‘Holocaust’, but there should have been and, rest assured, there WILL be, as you serpentine kikes richly deserve one.” The image text ends with the entreaty “KILL ALL JEWS NOW! EVERY LAST ONE!”
B’nai Brith is outraged by the post and by Facebook’s refusal to remove it.
UPDATE: As of 3:15 PM ET on Friday August 26, B'nai Brith Canada has learned the image has been removed from Facebook. A screengrab of the image has been taken before its removal and can be viewed here.
“Antisemitism in all forms is rampant on social media, but this is the clearest, most obvious kind of antisemitism one could possibly create,” said Michael Mostyn, B’nai Brith CEO. “The classification of this as antisemitic cannot be challenged, and the fact that this promotes violence towards Jews is beyond dispute. Regardless, Facebook has deemed it acceptable despite its ‘community standards’ containing clear provisions against hate speech. The Jewish community deserves no less protection or respect than any other when it comes to hate speech and threats of violence.” “Every year, upon publication of our Annual Audit of Antisemitic Incidents, a contingent of detractors accuses us of saying the sky is falling, and that antisemitism does not exist in Canada,” said Amanda Hohmann, National Director of B’nai Brith’s League for Human Rights. “Content like this is proof positive that not only antisemitism of a genocidal nature exists in Canada, but the systems that are supposed to protect us from racist hate speech don’t consider hatred of Jews to be problematic.”
B’nai Brith has reported the post to Lethbridge Police Services.
© B’nai Brith Canada
Blogger and plus-size model forced to take a break from Ireland’s community Twitter account after being told to ‘return to your ancestral lands’ by trolls
23/8/2016- A black British woman who was chosen to tweet from the @ireland account for a week has been subjected to a barrage of racist abuse, forcing her to take a break from Twitter. Michelle Marie took over the account – which is curated by a different Twitter user in Ireland each week – on Monday. She introduced herself as a mother, blogger and plus-size model. Originally from Oxford in England, she wrote she had settled in Ireland and “it has my heart”. However, just hours after taking over the profile – which is followed by nearly 40,000 people – the abuse began. Marie responded by writing that being overweight “doesn’t mean I can’t be beautiful or worthy or happy” and described the impact body shaming had had on her mental health. However, that failed to stop the trolls abusing her because she was black. Marie received a lot of tweets of support, with many users urging her to report the abuse and block the users responsible.
James Hendicott, a Briton who had previously run the @Ireland account, said he hadn’t been trolled at the time and the treatment of Marie was “clearly racism”. By the end of the day the negative comments began to take their toll. She posted a statement saying that while she had expected “trolls, backlash and criticism” she had experienced “racism, sexism, fatphobia and homophobia to a degree I have never known.” After “8hrs of non-stop hate” she said she was hurt, shocked and appalled but promised she would try again tomorrow. Marie told the Guardian that the experience had been upsetting. “I’m saddened that such extreme racism and vitriol is still rife. I am fortunate that experiencing this level of hate is a rarity, but for too many it’s a daily reality,” she said. The @Ireland account was opened in 2012 and is run by Irish Central. Irish Central’s website says “as the Ireland of today is not confined to the island of Ireland, the varied voices of @Ireland come from Ireland and across the world.”
© The Guardian
A Helsinki district court has rejected a petition by police to shut down the anti-immigrant website MV-Lehti. However the court has sealed the arguments used in arriving at its decision.
19/8/2016- On Friday the Helsinki District Court rejected a petition by Helsinki police to shut down MV-Lehti, an alternative news website that police suspect of disseminating false information and encouraging hate speech. The Helsinki police department had called on the court to terminate online communications coming from a certain IP address owned by OVH Hosting Ltd, Net9 Ltd, and the sole trader NP Networking, which is responsible for publishing MV-Lehti and Uber Uutiset, a sister site to MV-Lehti with similar content. The court did not disclose the arguments behind its decision.
Inaccuracies, distortions, suspected copyright infringements
Police had previously received dozens of criminal complaints about MV-Lehti. They determined that several of the site’s articles may have been inaccurate, distorted or fulfilled the criteria for copyright infringement. The inflammatory website was founded in 2014 by Spain-based Ilja Janitskin, who also owns a number of other websites. MV stands for “Mitä vittua” (in English, “What the f***?”) and the website became a talking point after publishing a series of vitriolic articles about migration and other subjects. The site gained a wider following in Finland after large numbers of asylum seekers began arriving in Europe and media began reporting on crimes committed by some of the new arrivals. The website’s articles were published without attribution, so none of the contributors were known. In July Finnish media reported that both the MV-Lehti and Uber Uutiset websites were no longer available. At the time Janitskin had posted a notification on his Facebook page indicating that the site’s Finnish servers had been taken down and would be reinstated elsewhere in due course.
© YLE News
They’re turning the web into a cesspool of aggression and violence. What watching them is doing to the rest of us may be even worse
By Joel Stein
18/8/2016- This story is not a good idea. Not for society and certainly not for me. Because what trolls feed on is attention. And this little bit–these several thousand words–is like leaving bears a pan of baklava. It would be smarter to be cautious, because the Internet’s personality has changed. Once it was a geek with lofty ideals about the free flow of information. Now, if you need help improving your upload speeds the web is eager to help with technical details, but if you tell it you’re struggling with depression it will try to goad you into killing yourself. Psychologists call this the online disinhibition effect, in which factors like anonymity, invisibility, a lack of authority and not communicating in real time strip away the mores society spent millennia building. And it’s seeping from our smartphones into every aspect of our lives.
The people who relish this online freedom are called trolls, a term that originally came from a fishing method online thieves use to find victims. It quickly morphed to refer to the monsters who hide in darkness and threaten people. Internet trolls have a manifesto of sorts, which states they are doing it for the “lulz,” or laughs. What trolls do for the lulz ranges from clever pranks to harassment to violent threats. There’s also doxxing–publishing personal data, such as Social Security numbers and bank accounts–and swatting, calling in an emergency to a victim’s house so the SWAT team busts in. When victims do not experience lulz, trolls tell them they have no sense of humor. Trolls are turning social media and comment boards into a giant locker room in a teen movie, with towel-snapping racial epithets and misogyny.
They’ve been steadily upping their game. In 2011, trolls descended on Facebook memorial pages of recently deceased users to mock their deaths. In 2012, after feminist Anita Sarkeesian started a Kickstarter campaign to fund a series of YouTube videos chronicling misogyny in video games, she received bomb threats at speaking engagements, doxxing threats, rape threats and an unwanted starring role in a video game called Beat Up Anita Sarkeesian. In June of this year, Jonathan Weisman, the deputy Washington editor of the New York Times, quit Twitter, on which he had nearly 35,000 followers, after a barrage of anti-Semitic messages. At the end of July, feminist writer Jessica Valenti said she was leaving social media after receiving a rape threat against her daughter, who is 5 years old.
A Pew Research Center survey published two years ago found that 70% of 18-to-24-year-olds who use the Internet had experienced harassment, and 26% of women that age said they’d been stalked online. This is exactly what trolls want. A 2014 study published in the psychology journal Personality and Individual Differences found that the approximately 5% of Internet users who self-identified as trolls scored extremely high in the dark tetrad of personality traits: narcissism, psychopathy, Machiavellianism and, especially, sadism. But maybe that’s just people who call themselves trolls. And maybe they do only a small percentage of the actual trolling. “Trolls are portrayed as aberrational and antithetical to how normal people converse with each other. And that could not be further from the truth,” says Whitney Phillips, a literature professor at Mercer University and the author of This Is Why We Can’t Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture. “These are mostly normal people who do things that seem fun at the time that have huge implications. You want to say this is the bad guys, but it’s a problem of us.”
A lot of people enjoy the kind of trolling that illuminates the gullibility of the powerful and their willingness to respond. One of the best is Congressman Steve Smith, a Tea Party Republican representing Georgia’s 15th District, which doesn’t exist. For nearly three years Smith has spewed over-the-top conservative blather on Twitter, luring Senator Claire McCaskill, Christiane Amanpour and Rosie O’Donnell into arguments. Surprisingly, the guy behind the GOP-mocking prank, Jeffrey Marty, isn’t a liberal but a Donald Trump supporter angry at the Republican elite, furious at Hillary Clinton and unhappy with Black Lives Matter. A 40-year-old dad and lawyer who lives outside Tampa, he says he has become addicted to the attention. “I was totally ruined when I started this. My ex-wife and I had just separated. She decided to start a new, more exciting life without me,” he says. Then his best friend, who he used to do pranks with as a kid, killed himself. Now he’s got an illness that’s keeping him home.
Marty says his trolling has been empowering. “Let’s say I wrote a letter to the New York Times saying I didn’t like your article about Trump. They throw it in the shredder. On Twitter I communicate directly with the writers. It’s a breakdown of all the institutions,” he says. “I really do think this stuff matters in the election. I have 1.5 million views of my tweets every 28 days. It’s a much bigger audience than I would have gotten if I called people up and said, ‘Did you ever consider Trump for President?'” Trolling is, overtly, a political fight. Liberals do indeed troll–sex-advice columnist Dan Savage used his followers to make Googling former Pennsylvania Senator Rick Santorum’s last name a blunt lesson in the hygienic challenges of anal sex; the hunter who killed Cecil the lion got it really bad.
But trolling has become the main tool of the alt-right, an Internet-grown reactionary movement that works for men’s rights and against immigration and may have used the computer from Weird Science to fabricate Donald Trump. Not only does Trump share their attitudes, but he’s got mad trolling skills: he doxxed Republican primary opponent Senator Lindsey Graham by giving out his cell-phone number on TV and indirectly got his Twitter followers to attack GOP political strategist Cheri Jacobus so severely that her lawyers sent him a cease-and-desist order.
The alt-right’s favorite insult is to call men who don’t hate feminism “cucks,” as in “cuckold.” Republicans who don’t like Trump are “cuckservatives.” Men who don’t see how feminists are secretly controlling them haven’t “taken the red pill,” a reference to the truth-revealing drug in The Matrix. They derisively call their adversaries “social-justice warriors” and believe that liberal interest groups purposely exploit their weakness to gain pity, which allows them to control the levers of power. Trolling is the alt-right’s version of political activism, and its ranks view any attempt to take it away as a denial of democracy.
In this new culture war, the battle isn’t just over homosexuality, abortion, rap lyrics, drugs or how to greet people at Christmastime. It’s expanded to anything and everything: video games, clothing ads, even remaking a mediocre comedy from the 1980s. In July, trolls who had long been furious that the 2016 reboot of Ghostbusters starred four women instead of men harassed the film’s black co-star Leslie Jones so badly on Twitter with racist and sexist threats–including a widely copied photo of her at the film’s premiere that someone splattered semen on–that she considered quitting the service. “I was in my apartment by myself, and I felt trapped,” Jones says. “When you’re reading all these gay and racial slurs, it was like, I can’t fight y’all. I didn’t know what to do. Do you call the police? Then they got my email, and they started sending me threats that they were going to cut off my head and stuff they do to ‘N words.’ It’s not done to express an opinion, it’s done to scare you.”
Because of Jones’ harassment, alt-right leader Milo Yiannopoulos was permanently banned from Twitter. (He is also an editor at Breitbart News, the conservative website whose executive chairman, Stephen Bannon, was hired Aug. 17 to run the Trump campaign.) The service said Yiannopoulos, a critic of the new Ghostbusters who called Jones a “black dude” in a tweet, marshaled many of his more than 300,000 followers to harass her. He not only denies this but says being responsible for your fans is a ridiculous standard. He also thinks Jones is faking hurt for political purposes. “She is one of the stars of a Hollywood blockbuster,” he says. “It takes a certain personality to get there. It’s a politically aware, highly intelligent star using this to get ahead. I think it’s very sad that feminism has turned very successful women into professional victims.”
A gay, 31-year-old Brit with frosted hair, Yiannopoulos has been speaking at college campuses on his Dangerous Faggot tour. He says trolling is a direct response to being told by the left what not to say and what kinds of video games not to play. “Human nature has a need for mischief. We want to thumb our nose at authority and be individuals,” he says. “Trump might not win this election. I might not turn into the media figure I want to. But the space we’re making for others to be bolder in their speech is some of the most important work being done today. The trolls are the only people telling the truth.”
The alt-right was galvanized by Gamergate, a 2014 controversy in which trolls tried to drive critics of misogyny in video games away from their virtual man cave. “In the mid-2000s, Internet culture felt very separate from pop culture,” says Katie Notopoulos, who reports on the web as an editor at BuzzFeed and co-host of the Internet Explorer podcast. “This small group of people are trying to stand their ground that the Internet is dark and scary, and they’re trying to scare people off. There’s such a culture of viciously making fun of each other on their message boards that they have this very thick skin. They’re all trained up.”
Andrew Auernheimer, who calls himself Weev online, is probably the biggest troll in history. He served just over a year in prison for identity fraud and conspiracy. When he was released in 2014, he left the U.S., mostly bouncing around Eastern Europe and the Middle East. Since then he has worked to post anti–Planned Parenthood videos and flooded thousands of university printers in America with instructions to print swastikas–a symbol tattooed on his chest. When I asked if I could fly out and interview him, he agreed, though he warned that he “might not be coming ashore for a while, but we can probably pass close enough to land to have you meet us somewhere in the Adriatic or Ionian.” His email signature: “Eternally your servant in the escalation of entropy and eschaton.”
While we planned my trip to “a pretty remote location,” he told me that he no longer does interviews for free and that his rate was two bitcoins (about $1,100) per hour. That’s when one of us started trolling the other, though I’m not sure which:
From: Joel Stein
To: Andrew Auernheimer
I totally understand your position. But TIME, and all the major media outlets, won’t pay people who we interview. There’s a bunch of reasons for that, but I’m sure you know them.
From: Andrew Auernheimer
To: Joel Stein
I find it hilarious that after your people have stolen years of my life at gunpoint and bulldozed my home, you still expect me to work for free in your interests.
You people belong in a f-cking oven.
From: Joel Stein
To: Andrew Auernheimer
For a guy who doesn’t want to be interviewed for free, you’re giving me a lot of good quotes!
In a later blog post about our emails, Weev clarified that TIME is “trying to destroy white civilization” and that we should “open up your Jew wallets and dump out some of the f-cking geld you’ve stolen from us goys, because what other incentive could I possibly have to work with your poisonous publication?” I found it comforting that the rate for a neo-Nazi to compromise his ideology is just two bitcoins. Expressing socially unacceptable views like Weev’s is becoming more socially acceptable. Sure, just like there are tiny, weird bookstores where you can buy neo-Nazi pamphlets, there are also tiny, weird white-supremacist sites on the web. But some of the contributors on those sites now go to places like 8chan or 4chan, which have a more diverse crowd of meme creators, gamers, anime lovers and porn enthusiasts. Once accepted there, they move on to Reddit, the ninth most visited site in the U.S., on which users can post links to online articles and comment on them anonymously. Reddit believes in unalloyed free speech; the site only eliminated the comment boards “jailbait,” “creepshots” and “beatingwomen” for legal reasons.
But last summer, Reddit banned five more discussion groups for being distasteful. The one with the largest user base, more than 150,000 subscribers, was “fatpeoplehate.” It was a particularly active community that reveled in finding photos of overweight people looking happy, almost all women, and adding mean captions. Reddit users would then post these images all over the targets’ Facebook pages along with anywhere else on the Internet they could. “What you see on Reddit that is visible is at least 10 times worse behind the scenes,” says Dan McComas, a former Reddit employee. “Imagine two users posting about incest and taking that conversation to their private messages, and that’s where the really terrible things happen. That’s where we saw child porn and abuse and had to do all of our work with law enforcement.”
Jessica Moreno, McComas’ wife, pushed for getting rid of “fatpeoplehate” when she was the company’s head of community. This was not a popular decision with users who really dislike people with a high body mass index. She and her husband had their home address posted online along with suggestions on how to attack them. Eventually they had a police watch on their house. They’ve since moved. Moreno has blurred their house on Google maps and expunged nearly all photos of herself online.
During her time at Reddit, some users who were part of a group that mails secret Santa gifts to one another complained to Moreno that they didn’t want to participate because the person assigned to them made racist or sexist comments on the site. Since these people posted their real names, addresses, ages, jobs and other details for the gifting program, Moreno learned a good deal about them. “The idea of the basement dweller drinking Mountain Dew and eating Doritos isn’t accurate,” she says. “They would be a doctor, a lawyer, an inspirational speaker, a kindergarten teacher. They’d send lovely gifts and be a normal person.” These are real people you might know, Moreno says. There’s no real-life indicator. “It’s more complex than just being good or bad. It’s not all men either; women do take part in it.” The couple quit their jobs and started Imzy, a cruelty-free Reddit. They believe that saving a community is nearly impossible once mores have been established, and that sites like Reddit are permanently lost to the trolls.
When sites are overrun by trolls, they drown out the voices of women, ethnic and religious minorities, gays–anyone who might feel vulnerable. Young people in these groups assume trolling is a normal part of life online and therefore self-censor. An anonymous poll of the writers at TIME found that 80% had avoided discussing a particular topic because they feared the online response. The same percentage consider online harassment a regular part of their jobs. Nearly half the women on staff have considered quitting journalism because of hatred they’ve faced online, although none of the men had. Their comments included “I’ve been raged at with religious slurs, had people track down my parents and call them at home, had my body parts inquired about.” Another wrote, “I’ve had the usual online trolls call me horrible names and say I am biased and stupid and deserve to be raped. I don’t think men realize how normal that is for women on the Internet.”
The alt-right argues that if you can’t handle opprobrium, you should just turn off your computer. But that’s arguing against self-expression, something antithetical to the original values of the Internet. “The question is: How do you stop people from being a–holes not to their face?” says Sam Altman, a venture capitalist who invested early in Reddit and ran the company for eight days in 2014 after one of its many PR crises. “This is exactly what happened when people talked badly about public figures. Now everyone on the Internet is a public figure. The problem is that not everyone can deal with that.” Altman declared on June 15 that he would quit Twitter and his 171,000 followers, saying, “I feel worse after using Twitter … my brain gets polluted here.”
Twitter’s head of trust and safety, Del Harvey, struggles with how to allow criticism but curb abuse. “Categorically to say that all content you don’t like receiving is harassment would be such a broad brush it wouldn’t leave us much content,” she says. Harvey is not her real name, which she gave up long ago when she became a professional troll, posing as underage girls (and occasionally boys) to entrap pedophiles as an administrator for the website Perverted-Justice and later for NBC’s To Catch a Predator. Citing the role of Twitter during the Arab Spring, she says that anonymity has given voice to the oppressed, but that women and minorities are more vulnerable to attacks by the anonymous.
But even those in the alt-right who claim they are “unf-ckwithable” aren’t really. At some point, everyone, no matter how desensitized by their online experience, is liable to get freaked out by a big enough or cruel enough threat. Still, people have vastly different levels of sensitivity. A white male journalist who covers the Middle East might blow off death threats, but a teenage blogger might not be prepared to be told to kill herself because of her “disgusting acne.”
Which are exactly the kinds of messages Em Ford, 27, was receiving en masse last year on her YouTube tutorials on how to cover pimples with makeup. Men claimed to be furious about her physical “trickery,” forcing her to block hundreds of users each week. This year, Ford made a documentary for the BBC called Troll Hunters in which she interviewed online abusers and victims, including a soccer referee who had rape threats posted next to photos of his young daughter on her way home from school. What Ford learned was that the trolls didn’t really hate their victims. “It’s not about the target. If they get blocked, they say, ‘That’s cool,’ and move on to the next person,” she says. Trolls don’t hate people as much as they love the game of hating people.
Troll culture might be affecting the way nontrolls treat one another. A yet-to-be-published study by University of California, Irvine, professor Zeev Kain showed that when people were exposed to reports of good deeds on Facebook, they were 10% more likely to report doing good deeds that day. But the opposite is likely occurring as well. “One can see discourse norms shifting online, and they’re probably linked to behavior norms,” says Susan Benesch, founder of the Dangerous Speech Project and faculty associate at Harvard’s Internet and Society center. “When people think it’s increasingly O.K. to describe a group of people as subhuman or vermin, those same people are likely to think that it’s O.K. to hurt those people.”
As more trolling occurs, many victims are finding laws insufficient and local police untrained. “Where we run into the problem is the social-media platforms are very hesitant to step on someone’s First Amendment rights,” says Mike Bires, a senior police officer in Southern California who co-founded LawEnforcement.social, a tool for cops to fight on-line crime and use social media to work with their communities. “If they feel like someone’s life is in danger, Twitter and Snapchat are very receptive. But when it comes to someone harassing you online, getting the social-media companies to act can be very frustrating.” Until police are fully caught up, he recommends that victims go to the officer who runs the force’s social-media department.
One counter-trolling strategy now being employed on social media is to flood the victims of abuse with kindness. That’s how many Twitter users have tried to blunt racist and body-shaming attacks on U.S. women’s gymnastics star Gabby Douglas and Mexican gymnast Alexa Moreno during the Summer Olympics in Rio. In 2005, after Emily May co-founded Hollaback!, which posts photos of men who harass women on the street in order to shame them (some might call this trolling), she got a torrent of misogynistic messages. “At first, I thought it was funny. We were making enough impact that these losers were spending their time calling us ‘cunts’ and ‘whores’ and ‘carpet munchers,'” she says. “Long-term exposure to it, though, I found myself not being so active on Twitter and being cautious about what I was saying online. It’s still harassment in public space. It’s just the Internet instead of the street.” This summer May created Heartmob, an app to let people report trolling and receive messages of support from others.
Though everyone knows not to feed the trolls, that can be challenging to the type of people used to expressing their opinions. Writer Lindy West has written about her abortion, hatred of rape jokes and her body image–all of which generated a flood of angry messages. When her father Paul died, a troll quickly started a fake Twitter account called PawWestDonezo (“donezo” is slang for “done”) with a photo of her dad and the bio “embarrassed father of an idiot.” West reacted by writing about it. Then she heard from her troll, who apologized, explaining that he wasn’t happy with his life and was angry at her for being so pleased with hers.
West says that even though she’s been toughened by all the abuse, she is thinking of writing for TV, where she’s more insulated from online feedback. “I feel genuine fear a lot. Someone threw a rock through my car window the other day, and my immediate thought was it’s someone from the Internet,” she says. “Finally we have a platform that’s democratizing and we can make ourselves heard, and then you’re harassed for advocating for yourself, and that shuts you down again.”
I’ve been a columnist long enough that I got calloused to abuse via threats sent over the U.S. mail. I’m a straight white male, so the trolling is pretty tame, my vulnerabilities less obvious. My only repeat troll is Megan Koester, who has been attacking me on Twitter for a little over two years. Mostly, she just tells me how bad my writing is, always calling me “disgraced former journalist Joel Stein.” Last year, while I was at a restaurant opening, she tweeted that she was there too and that she wanted to take “my one-sided feud with him to the next level.” She followed this immediately with a tweet that said, “Meet me outside Clifton’s in 15 minutes. I wanna kick your ass.” Which shook me a tiny bit. A month later, she tweeted that I should meet her outside a supermarket I often go to: “I’m gonna buy some Ahi poke with EBT and then kick your ass.”
I sent a tweet to Koester asking if I could buy her lunch, figuring she’d say no or, far worse, say yes and bring a switchblade or brass knuckles, since I have no knowledge of feuding outside of West Side Story. Her email back agreeing to meet me was warm and funny. Though she also sent me the script of a short movie she had written. I saw Koester standing outside the restaurant. She was tiny–5 ft. 2 in., with dark hair, wearing black jeans and a Spy magazine T-shirt. She ordered a seitan sandwich, and after I asked the waiter about his life, she looked at me in horror. “Are you a people person?” she asked. As a 32-year-old freelance writer for Vice.com who has never had a full-time job, she lives on a combination of sporadic paychecks and food stamps. To her, my career success seemed, quite correctly, unjust. And I was constantly bragging about it in my column and on Twitter. “You just extruded smarminess that I found off-putting. It’s clear I’m just projecting. The things I hate about you are the things I hate about myself,” she said.
As a feminist stand-up comic with more than 26,000 Twitter followers, Koester has been trolled more than I have. One guy was so furious that she made fun of a 1970s celebrity at an autograph session that he tweeted he was going to rape her and wanted her to die afterward. “So you’d think I’d have some sympathy,” she said about trolling me. “But I never felt bad. I found that column so vile that I thought you didn’t deserve sympathy.” When I suggested we order wine, she told me she’s a recently recovered alcoholic who was drunk at the restaurant opening when she threatened to beat me up. I asked why she didn’t actually walk up to me that afternoon and, even if she didn’t punch me, at least tell me off. She looked at me like I was an idiot. “Why would I do that?” she said. “The Internet is the realm of the coward. These are people who are all sound and no fury.”
Maybe. But maybe, in the information age, sound is as destructive as fury.
Editor’s Note: An earlier version of this story included a reference to Asperger’s Syndrome in an inappropriate context. It has been removed. Additionally, an incorrect description of Megan Koester has been removed.
The changes come one week after a BuzzFeed News investigation into Twitter’s decade-long failure to stop abuse.
18/8/2016- Today, Twitter announced two product features that seem intended to help users handle abuse on the platform. The features come one week after BuzzFeed News reported on Twitter’s decade-long problem with harassment, which company insiders past and present attribute to inaction and organizational disarray. In a company blog post, Twitter revealed it will begin rolling out a setting that will allow users to limit notifications on desktop and mobile to only the accounts they follow. Alongside this feature, the company is also introducing a quality filter. Here’s how Twitter describes it:
The filter can improve the quality of Tweets you see by using a variety of signals, such as account origin and behavior. Turning it on filters lower-quality content, like duplicate Tweets or content that appears to be automated, from your notifications and other parts of your Twitter experience. It does not filter content from people you follow or accounts you’ve recently interacted with – and depending on your preferences, you can turn it on or off in your notifications settings.
Both of these features are similar to the quality filter and notifications settings that have been available to verified users for a while now. The update is an attempt to standardize the experience between verified and non-verified accounts. While the quality filter seems to be designed to stop spammers and pop-up troll accounts, it is unclear how effective the filter will be at ending targeted harassment at an individual by non-spam actors. The features also only seem to address harassment by limiting what users will see in their feeds when they’re logged on. The settings don’t appear to prevent someone from tweeting abusive things. As of now, there appear to be no changes to Twitter’s abuse reporting system or any plans to address how Twitter responds to abuse.
The social network has suspended 235,000 accounts in the last six months alone, with the rate of daily suspensions up 80%
18/8/2016- Twitter continues to fight to keep terrorist groups and sympathizers from using its service. The social network announced today that in the last six months it has suspended 235,000 accounts for violating its policies related to the promotion of terrorism. In February, Twitter reported that it had suspended 125,000 accounts since mid-2015 for terrorist-related reasons. That means Twitter has suspended 360,000 accounts since the middle of last year. "Since that [February] announcement, the world has witnessed a further wave of deadly, abhorrent terror attacks across the globe," the company wrote in a blog post. "We strongly condemn these acts and remain committed to eliminating the promotion of violence or terrorism on our platform."
Twitter also reported that daily suspensions are up more than 80% since last year, with spikes in suspensions immediately following terrorist attacks. "Our response time for suspending reported accounts, the amount of time these accounts are on Twitter, and the number of followers they accumulate have all decreased dramatically," the company said. "As noted by numerous third parties, our efforts continue to drive meaningful results, including a significant shift in this type of activity off of Twitter." There has been increasing focus on trying to keep terrorist groups, whether it's ISIS or homegrown white supremacists, from using social networks like Twitter and Facebook to communicate, call for attacks and to recruit new members. Democratic presidential nominee Hillary Clinton even raised the issue during her acceptance speech at the Democratic National Convention last month. "We will disrupt their efforts online to reach and radicalize young people in our country. It won't be easy or quick, but make no mistake - we will prevail," Clinton said.
Social media platforms, including YouTube and the instant messaging service Telegram, have been used by such groups for years. Those sites are fighting back, too. Facebook previously reported that it has suspended accounts it found were associated with radicalized groups. Today, Twitter noted that it is not only suspending accounts but also making it harder for those suspended to return to the platform. "We have expanded the teams that review reports around the clock, along with their tools and language capabilities," Twitter said. "We also collaborate with other social platforms, sharing information and best practices for identifying terrorist content... Finally, we continue to work with law enforcement entities seeking assistance with investigations to prevent or prosecute terror attacks."
© Computer World
16/8/2016- The EU wants to extend privacy rules to cover calls and messages sent over the internet, subjecting services such as WhatsApp and Skype to much greater regulation. Tech and telecom industries last month called for the EU to scrap the rules, contained in the Directive on Privacy and Electronic Communications, known as the e-privacy directive. Telecom companies have long complained that web-based competitors such as Google, Microsoft and Facebook - which offer communications services Skype, WhatsApp and Hangouts - enjoy an advantage because they are allowed to make money from traffic and location data, which telecoms operators are not allowed to keep. Scrapping the rules would encourage innovation and drive growth and social opportunities, telecoms lobby group GSM Association had said. Instead, the European Commission intends to bring in everyone under the same rules.
According to UK newspaper the Financial Times, the EU executive’s move is an attempt to rein in American companies that dominate the sector, undercutting EU telecoms providers. Whether the rules will strengthen consumers’ privacy is open for debate. Some internet companies offer end-to-end encryption on their services. Facebook, which uses full-scale encryption on WhatsApp, said in its response to the Commission's public consultation that extending the rules to online messaging services would mean they could in effect "no longer be able to guarantee the security and confidentiality of the communication through encryption". They said the new regime would allow governments the option of restricting the confidentiality right for national security purposes. The commission is due to make an initial announcement in September and present detailed plans for legislative review later this year.
© The EUobserver
In total 215,246 Islamophobic tweets were sent from English-speaking accounts in July
18/8/2016- The number of times anti-Islamic insults are used on Twitter is rising month-by-month, a new report reveals. Analysis of the social media site found 215,246 Islamophobic tweets were sent in July this year – a staggering 289 every hour. Spikes in offensive language correlated with acts of terrorism, with the largest number of abusive tweets sent the day after the devastating Nice attack, the research says. Researchers at the Centre for the Analysis of Social Media at the Demos think tank, said identifying tweets that were hateful, derogatory and anti-Islamic was “a formidable challenge”. They first collected all tweets that contained one of a list of terms that could be used in an anti-Islamic way, including ‘Jihadi’ and ‘Terrorist.’ Most are too offensive to be published. Between 29 February and 2 August, 34 million tweets meeting the criteria were collected, but most were not anti-Islamic or hateful.
Algorithms were built and used to identify Islamophobic context within a tweet. For example, classifiers were built to separate tweets referring to Islamist terrorism from other forms of terrorism, and then to distinguish messages attacking Muslim communities in the context of terrorism from those defending those communities. The researchers found many of the tweets identified as derogatory and anti-Islamic included specific references to recent acts of violence and attacked entire Muslim communities in the context of terrorism. The largest of the spikes within July was the day following the Nice terrorist attack, with 21,190 tweets on 15 July. Not far behind was the day after the shooting of police officers in Dallas on 8 July, when 11,320 Islamophobic tweets were sent. 17 July was the next worst date, with 10,610 Islamophobic tweets sent the day after the attempted military coup in Turkey, followed by the end of Ramadan on 5 July, with 9,220 tweets.
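The two-stage approach the researchers describe – a broad keyword collection pass followed by classifiers that narrow the set down to genuinely derogatory messages – can be sketched in miniature as follows. The keyword lists and scoring rules here are invented placeholders for illustration only, not Demos's actual terms or models:

```python
# Illustrative collect-then-classify pipeline. Stage 1 casts a wide net
# with collection terms; stage 2 applies a crude marker-count classifier.
# All word lists below are made-up stand-ins, not the study's real ones.
COLLECTION_TERMS = {"jihadi", "terrorist"}          # stage 1: broad net
ATTACK_MARKERS = {"all", "every", "ban", "deport"}  # stage 2: crude cues
DEFENCE_MARKERS = {"not", "stop blaming", "solidarity"}

def stage1_collect(tweets):
    """Keep only tweets containing at least one collection term."""
    return [t for t in tweets
            if any(term in t.lower() for term in COLLECTION_TERMS)]

def stage2_classify(tweet):
    """Toy classifier: label a collected tweet 'attack' or 'other'
    by counting marker substrings on each side."""
    text = tweet.lower()
    attack = sum(m in text for m in ATTACK_MARKERS)
    defence = sum(m in text for m in DEFENCE_MARKERS)
    return "attack" if attack > defence else "other"

tweets = [
    "Deport every terrorist sympathiser now",
    "Stop blaming terrorist attacks on entire communities",
    "Lovely weather today",
]
collected = stage1_collect(tweets)           # drops the off-topic tweet
labels = [stage2_classify(t) for t in collected]
```

A real system would replace the hand-written marker lists with classifiers trained on labelled examples, as the Demos team did, but the collect-then-classify shape stays the same.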
On the day of the IS attack on a church in Normandy, 26 July, 8,950 such tweets were posted, according to the study. The think tank has been monitoring Islamophobic activity on the social network since March and said July recorded the highest volume of derogatory tweets of any month yet. It found an average of 4,972 Islamophobic tweets were sent a day since March. Demos geo-located many of the tweets collected and found Islamophobic tweets originating in every EU member state. As only tweets in English were recorded, the majority were traced to English-speaking countries. However, outside the UK significant concentrations were identified in the Netherlands, France and Germany. In December 2015, Twitter updated its policies to explicitly ban "hateful conduct" for the first time. The move has been followed by agreements with officials in the EU – alongside Facebook and YouTube – to remove hate speech from their networks.
"Our rules prohibit inciting or engaging in the targeted abuse or harassment of others," a Twitter spokesperson told the BBC. "We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it's happening and prevent repeat offenders."
© Wired UK
Originally posted on the Independent website; the article is no longer available there
By Adam Lusher
14/8/2016- Scotland Yard is to recruit civilian volunteers to help police social media in a new £1.7 million online hate crime unit. The volunteers – already dubbed a “thought police” by critics – will seek out and challenge social media abuse and report it to a new police “online hate crime hub”. Documents outlining how the scheme will work appear to suggest that the use of social media savvy volunteers will help address the problem that: “The police response to online hate crime is inconsistent, primarily because police officers are not equipped to tackle it.” A report by the London Mayor Sadiq Khan’s Office for Policing and Crime (MOPAC), which will help fund the scheme, has said: “A key element is the community hub, which will work with and support community volunteers to identify, report and challenge online hate material. “This requires full time capacity to recruit, train and manage a group of community volunteers, who are skilled in the use of social media and able to both identify and appropriately respond to inappropriate content to build the counter-narrative.”
The report suggests using the anti-racist organisation Stop Hate UK to provide the volunteers, because of its previous experience and ability to “effect speedy mobilisation in London.” The two-year pilot scheme will cost a total of £1,730,000, with the bulk of the funding coming from MOPAC and the Metropolitan Police, supported by £453,756 from the Home Office in the form of a Police Innovation Fund Grant. The initiative comes after a spike in racism following the EU referendum that saw a 57 per cent increase in hate crime reported to the police and included social media users receiving such messages as “go home black b*tch – we voted leave, time to make Britain great again by getting rid of u blacks, Asians and immigrants.”
Prominent figures also received abuse, including the Remain-supporting black London MP David Lammy who called police after reportedly receiving a death threat via social media. In one message he was reportedly told “I hope your kids get cancer and die” and “I wish you the same fate as that b*tch got stab” – a reference to the Labour MP Jo Cox who was killed during the referendum campaign. The online hate crime hub also comes after John Nimmo, 28, from South Shields, Tyne and Wear, was told last month that he faces jail for sending threatening emails to the MP Luciana Berger showing a picture of a large knife and telling her “watch your back Jewish scum”. The scheme is also being piloted after a report by the Tell Mama organisation – which had been due to be unveiled by Ms Cox before she was killed – found that social media was being used as a platform for calls for violence against Muslims.
Tell Mama said it had received reports of 364 “flagrant” incidents of online hate speech, harassment and threats in 2015 and said these amounted to “only a small fraction of the anti-Muslim hate on social media platforms.” But the online hate crime hub, which will be led by a Detective Inspector with the help of four other Scotland Yard detectives, has already been criticised by freedom of speech campaigners as a form of “thought police.” The Liberal Democrat leader Tim Farron told the Mail on Sunday: “We want more police on the street, not thought police. “Online bullying is an increasingly serious problem, but police should not be proactively seeking cases like these and turning themselves into chatroom moderators. “With such measures, even if well intentioned, there is a real danger of undermining our very precious freedom of speech.”
Andrew Allison, from the Freedom Association libertarian group, added: “There’s a risk of online vigilantism, where people who are offended by the least thing will have a licence to report it to the police.” Critics also pointed to cases where the police appear to have been heavy-handed in dealing with online comments. In one of the more spectacular examples, in 2010 Paul Chambers was arrested under the Terrorism Act and convicted of sending a menacing message after joking on Twitter that he would blow an airport “sky high” if it remained closed by heavy snowfall and stopped him travelling to see his girlfriend. It took Mr Chambers two years and an appeal to the High Court before his conviction was quashed.
© The Truth Seeker
Several reputable studies have concluded that the ethnic group that suffers the highest rates of unreported racist hate crime in Britain is East Asians. When the butt of the joke is dehumanised in this way, it’s only a matter of time before that butt gets kicked
By Daniel York
12/8/2016- Snapchat is being defensive about its “anime” filter which is (rightly, in my opinion) being called out as an example of “yellowface”. Yellowface is of course nothing new and neither is the defensiveness around it. People tend to dig their heels in about yellowface a lot. Indeed, I’ve argued previously that yellowface is the last acceptable bastion of racist caricature and racial appropriation. Like blackface and brownface, there are two basic forms of yellowface. There is the type that enables actors (nearly always of Caucasian descent) to portray characters who are supposed to be East Asian. Some of these actors have even been nominated for awards for dressing up in exotic costumes and perfecting stilted hybrid accents. This type of “performance yellowface” completely perpetuates the notion that actors of Caucasian descent are inherently more talented, more intelligent, more nuanced and more skilful practitioners of the thespian arts – an utterly ludicrous premise which has had to be (and continues to be) fought very hard.
After all, let us not forget that once upon a time women were not allowed on the stage either and were portrayed by young men. If anyone seriously wants to posit the argument that men playing women is somehow preferable to watching the likes of Judi Dench, Halle Berry or Juliette Binoche in action, then good luck with that one. The other type of yellowface – the Snapchat variety – is obviously meant to be fun but also points up and exaggerates certain perceived ethnic “traits” which enforce stereotypes and are used to ridicule and dehumanise. It encourages people to pull back their eyes into thin slants, pronounce their l’s as r’s and force their teeth to protrude in the guise of the “comedy oriental” a la Mickey Rooney in the film version of Breakfast at Tiffany’s.
It is of course entirely false. Many, many East Asians have very large eyes; there is no greater occurrence of buckteeth in certain racial groups; and, as for the r’s and l’s, let’s face it, there are sounds in all “foreign” languages that the majority of English speakers will struggle with hopelessly. But the whole point of yellowface is it reinforces a certain perceived cultural superiority: you can’t speak our language perfectly so you’re obviously a bit strange (even though you probably speak our language with far more command and dexterity than most of us would ever have yours). Both types of yellowface render people of East Asian descent as invisible ciphers with no personality or individual characteristics. Like blackface or brownface, they reinforce White Western Caucasian as the supreme “norm”; the default setting to which every other type of ethnicity is at best a quirky exotic counterpoint and, at worst, some form of hateful deviation, to be scorned, dominated and kept in its place lest it claim some form of parity in the wider “Caucasian” world.
If anyone reading feels this in any way over-sensitive it might be worth googling some Nazi caricatures of Jews in the 1930s. I’m sure that was all good fun back in the day but we all know how that ended up. It’s also worth remembering that several reputable studies have concluded that the ethnic group that suffers the highest rates of unreported racist hate crime in Britain is East Asians. Traditionally the most unassertive and disparate racial group, lacking any kind of media voice or presence, this is really no coincidence. When the butt of the joke is dehumanised in this way, it’s only a matter of time before that butt gets kicked. It’s sometimes argued that this kind of ridicule cuts both ways and is a basic component of humour that goes on in all cultures – but a recent Chinese detergent advert featuring a black man being “washed” into a Chinese man rightly attracted mass social media disapproval. Interestingly, the one East Asian country where you can find regular racist caricatures of white people is...North Korea.
Any other ways we want to emulate the Democratic People’s Republic? Then start caring about racial caricatures in Snapchat filters.
© The Independent - Voices
The Prevention of Electronic Crime Bill 2015 was passed in the national assembly with majority vote on Thursday.
11/8/2016- The senate had already approved the cyber crime bill with 50 amendments on July 29 this year. Minister of State for Information Technology and Telecommunication Anusha Rehman had presented the bill earlier this year. The law envisages 14-year imprisonment and a Rs5 million fine for cyber terrorism, and seven-year imprisonment each for campaigning against innocent people on the internet, spreading hate material on the basis of ethnicity, religion and sect, or taking part in child pornography. The bill awaits the signature of President Mamnoon Hussain, after which it will become law. The bill has been criticised by civil society members and rights groups for putting curbs on freedom of expression.
14-year jail, Rs5m fine for cyber terrorism
The Prevention of Electronic Crimes Bill 2016 envisages 14-year imprisonment and a Rs5 million fine for cyber terrorism, and seven-year imprisonment each for campaigning against innocent people on the internet, spreading hate material on the basis of ethnicity, religion and sect, or taking part in child pornography, which can also entail a Rs500,000 fine. A special court will be formed for the investigation of cyber crimes in consultation with the high court. The law will also apply to expatriates, and electronic gadgets will be accepted as evidence in the special court. The bill criminalises cyber terrorism with punishment of up to 14 years in prison and Rs5 million in penalties. Similarly, child pornography will carry sentences of up to seven years in jail and Rs5 million in fines, with the crimes being non-bailable offences. The bill also aims to criminalise terrorism on the internet, and the raising of funds for terrorist acts online, with sentences of up to seven years in prison.
Under the law, cyber terrorism, electronic fraud, electronic forgery, hate speech, child pornography, illegal access to data (hacking) and interference with data and information systems (DoS and DDoS attacks) would be punishable acts. The law will also apply to people engaged in anti-state activities online from safe havens in other countries. Illegal use of internet data will carry a three-year jail term and a Rs1 million fine; the same penalties are proposed for tampering with mobile phones. Data held by internet providers will not be shared without court orders. The cyber crime law will not apply to the print and electronic media. Foreign countries will be asked to arrest those engaged in anti-state activities from their territory.
© Geo TV
'They choose the victims,' says head of Rio's cybercrime unit — and the motive may surprise you
7/8/2016- You can hear the open-air gymnasium long before you see it; the thwacks of bodies in white judo gis hitting the mat. The gym, nestled between downtown and a nearby favela, is Brazil. The competitors of all ages are black, white, brown. Everyone is equal. At least that's what Brazilian world judo champion Rafaela Silva thought until she competed in the London Olympics. Silva was favoured to win a medal, but lost unexpectedly. And if that wasn't bad enough, as soon as she went online, she got another punch in the gut. On Twitter, on Facebook, hundreds of people on social media were hurling racist abuse at her. "I was very sad because I had lost the fight," Silva says. "So I walked to my room, I found all those insults on social media, they were criticizing me, calling me monkey, so I got really, really upset. I thought about leaving judo." Brazilian police say racist cyberattacks — especially against high-profile black women — are becoming more common.
'They want to become famous'
Alessandro Thiers, the head of Rio's cybercrime unit, recently announced his officers caught those behind a racist cyberattack against a famous black journalist. It's not random racists at work here, he says. The attacks are co-ordinated by groups led by so-called administrators. "They choose the victims and they tell those in the group to act," Thiers said. "So they organize themselves in several states, choose a target ... then people from various states attack the victim." Police say most of the perpetrators are young and middle-class, and their motive often has little to do with white supremacy. "They want to become famous," Thiers says. "In fact, they are just spoiled kids." Saying shocking things about well-known figures is an easy and often risk-free way to get the notoriety they seek, says Jurema Werneck, one of the founders of the Rio-based NGO Criola. And with the Olympics in their backyard, she fears they will now get a bigger platform. "We are not talking about fake profiles," she says. "Their profiles on the Internet are true ... they're not disguising themselves."
Werneck helps organize campaigns to stop the attacks, like a recent one in which Criola activists would find the perpetrators online and shame them by putting up billboards with their pictures near their home or work. She says if her small NGO can find the attackers, why can't police? "We find these racists easily. The police can do it too; they have more tools," Werneck says. "They're not doing a good job yet." For Silva, preparing for the Games now involves more than just practising holds and throws. She went to see a psychologist to help her deal with the hate she's bound to get online. "It has helped make me stronger and want to keep going," she says. This time, she knows what to expect. But being prepared, she says, doesn't make it any easier.
© CBC News
Sickening racist abuse regularly posted by warped trolls on Facebook — worsened since the Brexit vote — depicts ethnic minorities as "scum", "rapists" and "terrorists" and orders them to leave the UK, Daily Star Online has found.
7/8/2016- The UK voted to leave the EU in June. But last year net migration to the UK rose to around 333,000 — the second-highest figure ever. Since the vote, a Daily Star Online investigation has found migrants and specific religious groups — notably Muslims — have suffered shocking abuse online. And social media has represented these minorities as threats to national security and criminals for years, it has emerged. It comes days after a nationwide #BlackAugust campaign raising awareness of racism blocked roads, including access to Heathrow Airport. Sick trolls and far right groups — including the English Defence League and Britain First — disseminate online hate and hostility, particularly after major global terror attacks like Brussels 2016. The investigation found hundreds of instances of anti-Muslim hate alone on Facebook, calling Muslims "terrorists", "rapists", claiming Muslim women are national security threats, ordering Muslims to be deported and posts referring to a "war" between Muslims and "us".
The offenders included far right groups, such as the English Brotherhood, but also twisted fantasists determined to spread hate. Shockingly, even councillors' posts were found to contain slurs. Cllr Tim Paul Hicks, who represents UKIP on Shepshed Town Council in Leicestershire, is under investigation after allegedly making a series of anti-Muslim Facebook posts. He is accused of putting up a spate of racist images between July 10 and July 20, before the account was taken down five days later. One chilling picture allegedly showed a grenade with the caption: "Hotline to Allah. Pull pin, hold to ear, then wait for dial code." Accompanying the picture was a message saying: "ISIS HQ want to chat to you about Suicide Bomber Training School. Apparently, you missed a lesson." Another post showed a tiara placed on top of a full burka and read: "Miss Saudi Arabia". There was also an image of a dog wearing a towel as a veil. Cllr Hicks refused to comment on the allegations.
A spokesman from Progressive Leicestershire, a liberal political group, said of the posts: "They don't belong in 21st century Britain. They never have. I find it appalling." Birmingham City University carried out the harrowing research. Dr Imran Awan, associate professor in criminology at Birmingham City University, said: "The types of abuse and hate speech against Muslim communities on Facebook uncovered real problematic associations with Muslims being deemed as terrorists and rapists. "Muslim women wearing the veil are used as an example of a security threat. Muslims are viewed in the lens of security and war. This is particularly relevant for the far-right who are using English history and patriotism as a means to stoke up anti-Islamic hate with the use of a war analogy. "For example, after posting an image about eating British breakfast, a comment posted by one of the users, was: ‘For every sausage eaten or rasher of bacon we should chop of a Muslims head’. "The worry is that such comments could lead to actual offline violence and clearly groups such as this, are using Facebook to promote racial tensions and disharmony."
A spokesman from Facebook said the social media site does not tolerate direct attacks on race, ethnicity or religion. He added the site allows users to report any comment they feel is offensive and that Facebook does remove any content which is inappropriate. A spokesman from The Association of Chief Police Officers said: "We understand that hate material can damage community cohesion and create fear, so the police want to work alongside communities and the internet industry to reduce the harm caused by hate on the internet."
© The Daily Star
Far-right politics in Germany, France, and the U.K. flourished amid ongoing fears over migrants, terrorism and economic instability
3/8/2016- Anxious citizens across Europe are continuing to flock to their countries’ far-right fringes, posing an unprecedented challenge to established political parties throughout the region. Amid ongoing fears of migrants, terrorism and weakening job markets, support for radical right-wing parties in Europe is growing rapidly, a social media analysis by Vocativ shows. Long banished to the obscure corners of political life, resurgent populist groups in Germany, France, and the United Kingdom now boast more Facebook fans than their mainstream counterparts and have grown at a faster pace. For our analysis, we looked exclusively at the number of Facebook fans who identify as hailing from the home country of each party examined. Vocativ then tracked the growth of these online communities over the course of a year where immigrants, ISIS-inspired massacres, and national referendums dominated the consciousness of the continent.
In Germany, Europe’s leading destination for asylum seekers, fans of the ultra-right Alternative for Germany (AfD) party more than doubled to 240,000 between July 19, 2015 and July 31, 2016. By contrast, the country’s Christian Democratic Union, led by Chancellor Angela Merkel, and Social Democratic Party grew by only 17 percent (to 84,000) and 29 percent (to 87,600), respectively. The Facebook page of France’s National Front, which is led by Marine Le Pen, saw an uptick of 57 percent in the last year, to more than 290,000 fans—four times as many as the 70,000 on the page of President Francois Hollande’s Socialist Party. And in the United Kingdom, Britain First grew its Facebook community by 45 percent, topping the left-leaning Labour Party and the Eurosceptic UK Independence Party.
Events in each of these countries over the last year—coupled with looming concerns over the political stability of the E.U.—have helped to further fuel the populist, anti-immigrant, and anti-Muslim sentiment that underpins Europe’s rightward tilt. German anger over migrants and refugees reached a fever pitch in January when foreigners were accused of carrying out a string of sexual assaults in Cologne on New Year’s Eve. Terror-weary France has been battered by a series of Islamist-inspired attacks, including the deadly truck rampage in Nice last month that left more than 80 people dead. Meanwhile, Britain’s referendum on whether to break from the E.U., which passed narrowly in June, pushed nationalism and economic fears to the forefront of public life. Just how well some of these groups fare politically will soon be tested. Germany holds regional elections next month. France’s presidential election will take place in April and May of 2017.
For the second time in a year, neo-Nazi hacker Andrew "weev" Auernheimer appears to have targeted flaws in printer networks to distribute racist fliers. This time, he's calling for the killing of children.
3/8/2016- Andrew Auernheimer, the notorious neo-Nazi black hat computer hacker better known as “weev,” claims to have targeted 50,000 printers across the country to distribute hate-filled fliers that call for the killing of black and Jewish children. “I unequivocally support the killing of children,” Auernheimer wrote in the flier. “I believe that our enemies need such a level of atrocity inflicted upon them and their homes that they are afraid to ever threaten the white race with genocide again.” He continued: “We will not relent until far after their daughters are raped in front of them. We will not relent until far after the eyes of their sons are gouged out before them. We will not relent until the cries of their infants are silenced by boots stomping their brains out onto the pavement.”
It is unclear what prompted the flier, though Dylann Roof, who will soon face trial for allegedly murdering nine black people during a church service in Charleston, S.C., in June 2015, seems to have been a motivation. “I am thankful for his personal sacrifice of his life and future for white children,” Auernheimer wrote. “In honor of Dylann Roof, I will be growing out a bowl cut in solidarity for his trial.”
Auernheimer also praises Anders Breivik, who killed 77 people in separate attacks in Oslo, Norway, and at a nearby children’s summer camp as a political statement against immigration in 2011. Auernheimer describes Breivik as a Nordic warrior, comparing him to the protagonist of the poem Volundarkvida, in which the main character kills the sons of his captor and rapes his daughter after being imprisoned. Like the protagonist of the poem, Auernheimer served a brief stint in prison, after he was convicted of one count of identity fraud and one count of conspiracy to access a computer without authorization for exposing a flaw in AT&T security which allowed the e-mail addresses of iPad users to be revealed. Breivik seems to be a fascination of Auernheimer’s. Responding to Breivik’s appeal to receive internet access while he’s incarcerated, Auernheimer created the hashtag “#BreivikOnline” to draw attention to Breivik’s inability to go online.
Andrew Anglin, the founder of The Daily Stormer website that refers to non-whites as “hordes" and Jewish people as “Bloodthirsty Jew Pigs,” also published a blog yesterday mirroring Auernheimer’s demand for Breivik to have internet access. This is not the first time Auernheimer has used printers to distribute violent, hate-filled fliers. Earlier this year, he blasted unprotected printers at colleges, universities, and unprotected office networks across the country with swastika-adorned fliers promoting an anti-Semitic message.
© The Southern Poverty Law Center
Instagram is said to be working on a hate filter to stop harassment on the social networking platform. This means that soon users will be able to filter their comment streams and turn off comments on their posts. The tool should provide new ways to stop cyber bullying.
1/8/2016- Online harassment is a recurring issue in our day and age. Anyone with access to the internet for longer than a week has undoubtedly personally experienced or seen how others get bullied over the web. Instagram should be a fun, friendly environment, but problems like these do arise as much as anyone tries to combat them. Instagram already has general policies created to flag specific offensive words or phrases. However, the new feature will allow users to take matters into their own hands and control their account content as they wish. The hate filter to stop harassment works in a simple way. Instagram account holders will be able to change their settings so that they can filter the comments they receive and, if they prefer, completely turn off others’ ability to post comments on their account. This way, all users can individually set up their account in such a way that personally offensive content gets ignored.
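The per-account mechanism described above – filter incoming comments against a personal word list, or switch comments off entirely – can be sketched roughly like this. The class and function names are invented for illustration; Instagram's actual implementation is not public:

```python
# Toy sketch of a per-account comment filter of the kind described.
# Each account can disable comments outright or hide any comment
# containing one of its personally chosen blocked words.
class AccountSettings:
    def __init__(self, comments_enabled=True, blocked_words=None):
        self.comments_enabled = comments_enabled
        self.blocked_words = {w.lower() for w in (blocked_words or [])}

def moderate(comment, settings):
    """Return the comment if it passes the account's filter, else None."""
    if not settings.comments_enabled:
        return None                      # comments turned off entirely
    text = comment.lower()
    if any(word in text for word in settings.blocked_words):
        return None                      # hide personally offensive content
    return comment

# One account's settings hide the abusive comment but keep the friendly one.
settings = AccountSettings(blocked_words=["idiot"])
visible = [c for c in ["Great photo!", "You idiot"]
           if moderate(c, settings) is not None]
```

The design point is that moderation decisions live in each account's settings rather than in a single site-wide policy, which is what lets every user draw the line differently.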
The new feature is set to arrive on high-profile accounts first, but all users will see the changes in the upcoming months. High-volume accounts can bring the social networking service a great deal of valuable feedback in a shorter period of time. The post-by-post comment filter should roll out to all accounts soon enough. According to the Pew Research Center, about 60 percent of internet users have seen someone being called offensive names. Another 53 percent of users have witnessed efforts made by some individuals to embarrass someone else. Around 25 percent of web users have seen someone being physically threatened, and some 24 percent have seen someone being harassed for a prolonged period of time. Furthermore, approximately 27 percent of internet users have personally been called offensive names, and 8 percent of them have been physically threatened or even stalked.
These worrying statistics call for more policies and efforts to put an end to online harassment. Moves like Instagram’s, and those of other networking websites, raise awareness of a serious issue that must be addressed further.
© The Next Digit
No-one does anything to stop it
By Stephen Pollard
31/7/2016- Not so long ago, the likes of John Nimmo would be living in well-deserved obscurity. Nimmo is a misogynist racist who has a penchant for sending threatening messages to women. Before the internet and the advent of social media, he would doubtless have festered alone in his South Shields bedroom and his hate would have been shared only with whichever other losers he happened to speak to. But in our digital age, Nimmo can make contact with pretty much anyone at the touch of a button. Two years ago he did exactly that to Labour MP Stella Creasy and feminist campaigner Caroline Criado-Perez, sending them abusive tweets and getting an eight-week prison sentence for doing so. Now he is at it again, this time sending anti-Semitic death threats to the Liverpool Labour MP Luciana Berger. She would, he told her, “get it like Jo Cox”. He warned her: “watch your back Jewish scum, regards your friend the Nazi”, along with a picture of a large knife.
Ms Berger told the court where Nimmo is being tried that his words caused her “great fear and anguish”. She said the tweets left her in a state of “huge distress” and “caused me to feel physically sick being threatened in such a way.” I imagine that you are shocked to read about such behaviour. No decent person could fail to be. But Ms Berger won’t have been. I certainly wasn’t. Nor will any prominent Jew. Not because the behaviour is in any way acceptable. Rather, because it is so run-of-the-mill. Ms Berger receives anti-Semitic abuse every day. In spades. Indeed, you will not find a single prominent Jew with a Twitter or Facebook account who does not regularly receive anti-Semitic abuse. When I wake up and check my Twitter feed it rarely contains fewer than ten anti-Semitic messages. More often than not it’s far more. Another 20 or so come during an average day. And that’s after I have blocked over 300 different tweeters – a number that increases every day.
Some even amuse me, such as the recent claim that I “lead British Zionists with their propaganda to enable them to control UK.” Another tweet informed the world: “Pollard is the chief protagonist of Zionist supremacism in UK. He controls MSM.” MSM is an acronym for mainstream media – which means I apparently control all British media. Which would be really useful, if it were true. Sadly, I can’t even control my own kids. Some are threatening. One notorious anti-Semite that I had previously blocked started informing her followers that I was in the habit of ringing her voicemail and had left abusive messages threatening to rape her. She also posted a tweet suggesting that someone “pop” me off. In my experience, the police have been entirely useless. Last year I had to explain what Twitter was to two PCs from the Met who had been sent to talk to me about a threat I had reported. Though they had heard of it, they had no real idea what it was.
This is an epidemic of hate. And with the odd exception, such as the clear death threat to Ms Berger, nothing is done about it. Certainly not by Twitter. I have given up reporting the culprits, since not once has Twitter taken any action against them. Free speech, innit? But one thing puzzles me. Have the likes of Nimmo always been with us, and has social media simply given them a tool and a voice they didn’t have before? Or has social media itself raised the temperature and itself caused much of the epidemic? For most of my 51 years, anti-Semitism was something I encountered only fitfully; the odd unthinking throwaway remark or “joke”. Certainly nothing that would give me pause for thought. But the past few years have been different. I have not gone a day without encountering it. As a journalist, I have reported the spate of such comments from Labour members with astonishment that anti-Semitism can have entered the language of a mainstream party, however marginally.
My hunch is that it has always been there, but we simply never heard it. In the years after the Second World War, no one voiced anti-Semitism, even if it lay buried deep within their psyche. Even Jewish jokes were rarely told in polite company. But as memories faded and the Holocaust grew further away, social wariness of Jew-hate dissipated. History then reasserted itself. It’s not called the longest hatred for nothing. And the kind of anti-Semitism that once remained private, behind closed doors, now has the megaphone of social media. And that, we surely know, is not going anywhere.
Stephen Pollard is editor of the Jewish Chronicle
© The Telegraph
In hundreds of postings Islamophobes spread hate speech to foster violence against UK's Muslims.
28/7/2016- Islamophobes are targeting Muslim women in online hate campaigns, according to a new study. A Birmingham City University study examined hundreds of Facebook pages, posts and comments as part of an extensive survey of the spread of anti-Islam hate speech online, including those associated with far right groups Britain First and the English Defence League. They found 500 instances of Islamophobic abuse, in which Muslims were branded terrorists and rapists, alleged to be waging "war" on non-Muslims, and in which calls were made for Muslims to be deported, as part of a campaign to "incite violence and prejudicial action." Women wearing Islamic dress are branded a "security threat." There is evidence of the hatred spilling into attacks and real life abuse, with a 326% surge in Islamophobic incidents recorded last year, and more than half of the victims women.
Researcher Imran Awan said that the recent murder of MP Jo Cox and the surge of racist attacks in the wake of the Brexit vote showed the urgency of tackling online hate speech. "What it has shown is that the far right and those with links and sympathies with the far right were using Facebook and social media to in effect portray Muslims in a very bad and negative fashion," Awan said. "After Brexit people have felt much more empowered and confident to come and target Muslims and others in racist hate attacks. This was all playing out on social media but no one looked at it. If Facebook had been monitoring this racism, then I'm not saying they could have stopped the racist attacks, but it certainly could have given them an insight into the racist people using their platforms." Online abuse surged after events such as the murder of soldier Lee Rigby by two Islamic extremists in 2013, or the sex abuse cases in Rotherham, according to the study.
It found that 80% of the abuse was carried out by men, who singled out Muslim women for attacks, with 76 posts portraying women wearing the niqab or hijab as a "security threat." The next most frequent form of abuse called for Muslims to be deported, with 62 instances recorded. The study identifies five kinds of online Islamophobe, from the 'producers' and 'distributors' seeking to create "a climate of fear, anti-Muslim hate and online hostility," to the 'opportunists' who spread anti-Muslim hate speech in response to a specific incident, such as atrocities committed by terrorist group Isis. Also responsible are 'deceptives', who concoct rumours and false stories to whip up Islamophobic hatred, such as the rumour that Muslims wanted to ban cartoon character Peppa Pig, and 'fantasists', who fantasise about Islamophobic violence and make direct threats against Muslim communities.
On Tuesday, Home Secretary Amber Rudd announced the launch of a campaign to combat hate crime in the UK, with Her Majesty's Inspectorate of Constabulary to review the way hate crimes are reported and investigated by police in England and Wales. It comes with more than 6,000 hate crimes recorded by police in the wake of the 23 June EU Referendum. The Muslim Council of Britain recorded 100 crimes in the weekend after the referendum. Islamophobia monitoring group Tell MAMA found a 326% increase in Islamophobic incidents last year, with Muslim women "disproportionately targeted by cowardly hatemongers." "We have known that visible Muslim women are the ones targeted at a street level, but what we also have seen in Tell MAMA, is the way that Muslim women who are using social media platforms, are targeted for misogynistic and anti-Muslim speech.
In particular, there is a mix of sexualisation and anti-Muslim abuse that is intertwined which also hints at perceptions and attitudes towards women in our society," said Tell MAMA director Fiyaz Mughal. "We are also aware from our work in Tell MAMA, that the perpetrators' age range has dropped significantly from 15-35 to 13-18, showing that anti-Muslim hate in particular is drawing in and building a younger audience, which is daunting for the future. We need to redouble our efforts if we are to have social cohesion in our society and we also need to ensure that women feel protected and confident enough to report such hate incidents."
Facebook needs to do more to tackle race hatred
Facebook recently signed up to a new European Union code of conduct obliging it to remove hate speech from its European sites within 24 hours. Awan said that UK authorities and Facebook needed to do more to combat the problem. "I think police have a really tough job in the sense that in my understanding it is like finding a needle in a virtual haystack, and they are not clued up enough. I don't think they have enough training to look at social media posts, police need to be trained on what to look at," he said. A College of Policing spokesman said: "We are working with the Crown Prosecution Service, partners and police forces to raise awareness and improve the policing response to hate crime. This will ensure offenders can be brought to justice and evidence of their hostility can be used to support enhanced sentencing. "The College has developed training for police forces to issue to officers and staff and published Authorised Professional Practice, which is national guidance, for those responding to hate crime.
"In addition, more than £500,000 has been awarded to the University of Sussex and the Metropolitan Police through the Police Knowledge Fund to pilot a study that will examine the relationship between discussions of hate crime on social media and data relating to hate crime that has been recorded by police. The fund allows officers to develop their skills, build their knowledge and expertise about what works in policing and crime reduction, and put it into practice." Facebook says that it will not tolerate content that directly attacks others based on race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition, and its policies try to strike the right balance between giving people the freedom to express themselves and maintaining a safe and trusted environment. It said it has rules and tools people can use to report content that they find offensive. IBTimes UK has contacted Facebook for comment.
© The International Business Times - UK
If any one form of discriminatory social media expression has been on the rise in recent months, it’s been anti-Semitism.
24/7/2016- The Donald Trump presidential campaign’s well-documented white nationalist and Neo-Nazi following continues to bring such hatred to the forefront. Trump himself had even retweeted things from members of the “white genocide” movement, and in June, the campaign tweeted out an anti-Semitic meme that originated from the alt-right fever swamps of social media. On Saturday, a completely different organization seemed to dip its toes in those waters, too. Wikileaks started tweeting about (((echoes))), and it’s generated a great amount of controversy. It’s one of the increasingly well-known methods of harassment used by anti-Jewish racists on Twitter, which has exploded into wider visibility in recent months – tweeting at Jews, and bracketing their names with two or three parentheses on either side.
It’s intended both as a signal to other anti-Semites and neo-Nazis, highlighting the target’s Jewish heritage (or perceived Jewish heritage, since racists aren’t always the sharpest or most concerned with accuracy), and as a way to track targets on social media, making it even easier for other anti-Semites to join in on the abuse. After the phenomenon became more widely discussed in the media, many Jews and non-Jews alike began self-applying the parentheses on Twitter names, in a show of anti-racist solidarity. That’s where Wikileaks comes in. On Saturday, amid the group’s high-profile dump of thousands and thousands of emails from the Democratic National Committee, its Twitter account said something very suggestive about its critics. The tweet has since been deleted, at odds with Wikileaks’ professed ethos of radical transparency. Nevertheless, screenshotters never forget.
It’s not exactly the most coherent tweet, but the thrust is nonetheless pretty clear: Wikileaks accused most of its critics of having the (((echoes))) brackets around their names, as well as “black-rimmed glasses,” statements that many interpreted, plainly enough, as “most of our critics are Jews.” The Wikileaks account subsequently tweeted some explanations of what the offending tweet meant, suggesting that “neo-liberal castle creepers” had appropriated the racist-turned-anti-racist solidarity gesture, turning it into “a tribalist designator for establishment climbers.” A clarifying tweet also misspelled “gesture” as “jesture,” which further stoked accusations of witting anti-Semitism. Wikileaks ultimately defended the decision to delete the tweets, saying they’d been intentionally misconstrued by “pro-Clinton hacks and neo-Nazis.” It’s also been maintaining a pretty aggressive public relations posture regarding these latest leaks. It threatened MSNBC host Joy Reid for tweeting that she planned to discuss an “affinity” between the group and the Russian government on her show, saying “our lawyers will monitor your program.”
So, again, not the best tone for a group dedicated to prying open closed organizations, regardless of their desires. It also responded to an article by Talking Points Memo’s Josh Marshall investigating alleged ties between the Trump campaign and Vladimir Putin, accusing him of “weird priority” for focusing on the method of the correspondences’ release rather than the data dump itself. Wikileaks has also accused Twitter as well as Facebook of censoring information about the DNC emails, highlighting DNC email-related posts that were flagged as “unsafe.” Facebook CSO Alex Stamos subsequently stated on Twitter that the problem had been “fixed,” however, and there’s no shortage of Facebook links out there directing people straight to the leaked materials. Twitter similarly denied the allegations in a tweet from its public relations account.
The Wikileaks brouhaha wasn’t the only instance this weekend of a controversial, perceived piece of anti-Semitism on Twitter getting immediately rolled back and explained away. The Trump campaign landed in yet another such situation on Sunday morning, when General Mike Flynn – once considered by Trump for his vice presidential selection – retweeted someone who accused “Jews” of misleading people about the origins of the DNC email leak. Flynn has since apologized, saying he only meant to retweet a link to an embedded CNN article about the leak.
© The Daily Dot