
Slovakia: Committee denounces racism on the internet

This impulse stems from the enormous increase in hateful comments in response to the shooting of a Syrian refugee at the Slovak-Hungarian border.

24/6/2016- The Slovak Committee for the Prevention and Elimination of Racism, Xenophobia, Anti-Semitism and Other Forms of Intolerance (VRAX) unanimously denounced statements against minorities found in online discussions. This impulse stems from the enormous increase in hateful comments in response to the shooting of a Syrian refugee at the Slovak-Hungarian border in early May. These comments were collected and presented to the committee by the Islamic Foundation in Slovakia, VRAX said in a press release. Customs officials close to Veľký Meder flagged down four cars full of migrants that had entered Slovakia from Hungary in May 2016. One of the vehicles refused to stop, prompting the authorities to open fire at its tyres; the Syrian woman was hit in the shooting.

Moreover, the Slovak Catholic Charity and the NGO Human Rights League informed the committee of repeated physical assaults on a young refugee from Somalia, including one witnessed by her young son. “Over the last year, we have registered a significant rise in hate speech against refugees, foreigners and other minorities, which has boiled over into physical attacks against the most vulnerable persons,” said Zuzana Števulová, head of the Human Rights League. “Such a situation is not acceptable and requires not only the activities of the police, prosecution and civil society, but also major political indicators and conviction from the highest authorities of the country,” said Števulová in the press release.

VRAX vice-president Irena Biháriová added that criminal sanctions cannot be the only, universal solution to the problem, even as she backs more active use of legal instruments in the fight against this phenomenon. VRAX is asking for perpetrators to be investigated and punished, and plans to address the topic of hateful online statements in detail through a special working group. The committee delegated Biháriová to discuss matters with the Interior Minister and the general prosecutor to enhance cooperation in preventing and combating extremism and radicalism, the press release reported.
© The Slovak Spectator


New Zealand: Cyberbullying: The media should practise what they preach (commentary)

As so often happens with the rapid uptake of technology, we’re quickly forced to confront new ethical dilemmas.

23/6/2016- As so often happens with the rapid uptake of technology, we’re quickly forced to confront new ethical dilemmas. Cyber-bullying is proving one of the great unforeseen challenges of our time. It’s admirable that our media are now showing leadership with an It’s Not Okay–style campaign to discourage bullying and abuse on social media. But this will only resonate if the media take responsibility for their own contributions to this pernicious social problem.

The internet has been the biggest democratic boon to human communication since the printing press. Yet the online revolution has also put the media in a spiral of financial vulnerability and many outlets have ramped up their salacious, celebrity and “click-bait” content in a bid for survival. This creates the optimal environment for social bullying. The more lurid the story, the nastier the comments and the wilder the “social media storm” that acts to justify the news judgment. The media need not actively encourage commenters to “hate on” subjects, but too often they fail to provide a handbrake by adequately moderating abuse from comments threads.

Even state-funded Radio New Zealand, which need not stoop to click-bait for survival, has been inadvertently caught out by not dealing in a timely manner with a slew of ugly and racist comments about the Prime Minister on its site earlier this year. It would be a useful display of anti-bullying leadership for media outlets to provide no comments function at all unless it is adequately moderated. Likewise, they might take note that some of the worst online offenders are journalists and columnists. The Press Council recently upheld a complaint against a New Zealand Herald journalist who went on Twitter to bait a public figure who was a subject of his pending news story.

Social media are public information outlets no less than newspapers or radio stations. What members of the media do there reflects on the standards and ethics of their employers. It’s beyond ironic that the media have run innumerable stories about employees being disciplined and even fired for questionable or distasteful online posts, while media managers seem blind to some of their own writers’ online aggro in forums such as Twitter. This has included obscenity and vilification – not just of politicians or newsmakers but of ordinary New Zealanders, including young women, just doing their jobs, who have dared to displease. If the media and their corporate sponsors aren’t aware of the damage being done to their brands, they should be. Enforcing professional standards of conduct is critical.

Beyond that, most of us, armed with common sense and empathy, can tell the difference between gratuitous bullying – which may meet the definition of harmful digital communication – and fair comment and criticism, which is healthy and necessary. However, this is an area in which we should tread carefully. No matter how well-intentioned the desire to repress abuse and hate speech, we risk crimping freedom of expression. In critical respects, the internet has changed nothing. The most effective remedy for objectionable speech remains, as always, not to silence or gag those with whom we disagree but to provide more opportunities for free speech. Yet our new ability to access a like-minded cyber-community can make us feel entitled to shut down those whose opinions we dislike. This is a variant of bullying, distinguishable from the standard kind only because of its self-righteousness: someone says something others find sexist, racist or unscientific and keyboard warriors propose a boycott and lead a massed online beat-up.

Sustained abuse, silencing and threat of income loss – bullying doesn’t come much worse. Yet, too often, people feel virtuous in such pile-ons because they’re only trying to silence views inimical to their personal community. It’s righteous for us to “un-person” them, but Stalinist or fascist if they try to stifle us. Amid the torrent of comments, the job of the media is clear. It is to facilitate the expression of facts and opinions as a civilised and informed function of a healthy society. It is never to simply stand back and watch bullying work its repressive harm on our freedom.
© The New Zealand Listener


Auschwitz Game Highlights Serious Holes in Google’s Review Process

Controversy raged this week over news that the Google Play store had allowed a free mobile game that promised players could “live like a real Jew” at Auschwitz.

23/6/2016- For the second time in a month, Google’s review process was brought into serious question. But now, the game’s creators have come forward to say that was the point of the game. TRINIT, a vocational school teaching video game design in Zaragoza, Spain, asked its students to design games that would test the strength of Google’s policy on hateful speech and inappropriate imagery during the review process, the institute told The Forward in an email. “Surprisingly, Google denied almost all of the test apps, but [the Auschwitz game] was approved,” the institute said. TRINIT said it pulled the game, which it said was nonfunctional and only included a start page, on Sunday night after realizing it had sparked media controversy. The institute said it received a notice from Google later that night notifying it that the app had been reported several times. Google confirmed to the Forward that the app was pulled from its store on Monday.

In addition to its Auschwitz game, TRINIT said it chose to pull other test apps from Google Play, including apps named “Gay Buttons” and “Kamasutra Dices.” The school said it instructed students to test Google’s app policy by specifically testing themes corresponding to questions on a Google survey used in the app approval process. One question on the survey, shared with the Forward by TRINIT, asks whether the app under review contains symbols or references to Nazis. Although the school said it replied yes to the survey question, Google still approved the submission. A Google spokesperson said, “While we don’t comment on specific apps, we can confirm that our policies are designed to provide a great experience for users and developers.” “This clearly indicates that Google needs to be more vigilant about its review process,” said Jonathan Vick, assistant director of the Anti-Defamation League’s cyberhate response team.

However, Vick also finds fault with the way TRINIT conducted its experiment and remains skeptical of the app’s true purpose. Vick told the Forward it concerns him that the school felt it was sufficient to take down the offensive app without issuing a statement, and he called on the school to explain itself in public. “Review is a human process and any time people are injected into the equation, the margin for error increases,” Vick said. “Since the Google review process isn’t transparent, we don’t know where in the review chain someone approved the app, but it means more training might be needed for Google employees,” he said. “If real, the experiment speaks for itself,” Vick said.

Google launched a new app review process last year with the goal of catching apps that violate its policies on hateful speech before they reach the Google Play store, including both machine and human review elements. However, the company is still in the process of fine-tuning the process and relies heavily upon community reporting to review the millions of game submissions it receives.
© The Forward


Nigeria: Bill to protect social media users against hate speech passes first reading

22/6/2016- The House of Representatives on Wednesday passed for first reading a Bill for an Act to Provide for the Protection of Human Rights Online. The bill, which was sponsored by Rep. Chukwuemeka Ujam (PDP-Enugu), is titled “Digital Rights and Freedom Bill”. Presenting the bill, Ujam said that it sought to guard and guide Nigerian internet users on their rights and to protect those rights. According to him, Section 20(3) provides against hate speech online, while Section 12 of the bill outlines the process to be followed before governmental agencies and others are granted access to the personal data of citizens. He said that the bill also provided for the protection of citizens’ rights to the Internet and its free use without undue monitoring. He added that it was targeted at ensuring openness, Internet access and affordability as well as the freedom of information online.

The lawmaker said that Nigeria lacked a legal framework for the protection of internet users, in spite of being a subscriber to international charters which recognised freedom and access to the Internet as a human right. One such charter, he said, was the African Union Convention on Cyber-Security and Personal Data Protection of 2014. Contributing to the debate, Rep. Aminu Shagari (APC-Sokoto) said that the bill was aptly designed for the protection of persons online. On his part, Rep. Sani Zoro (APC-Jigawa) stressed the need to create awareness of the details of the bill, to prevent the public from misconstruing it as legislation that would restrict the freedom of internet users in the country. The bill was unanimously passed through a voice vote by the lawmakers. The Speaker of the House, Yakubu Dogara, referred the bill to the Committees on Telecommunications and Human Rights for further legislative action.
© The Daily Trust


UK: Far-right groups incite social media hate in wake of Jo Cox’s murder

20/6/2016- Police are being urged to investigate extreme right-wing groups in Britain and their incitement activities after a series of hateful messages were published on social media in the wake of Jo Cox’s murder. Nationalist groups have been accused of glorifying Thomas Mair, Mrs Cox’s accused killer, crowing about the attack and making excuses for it. It comes amid concern about the rise of the far right in pockets of the UK, notably in Yorkshire, with violence at anti-immigration marches and increasing anti-Muslim hate crimes. In the days since Mrs Cox’s death, scores of members of far-right organisations have taken to social media to make threats against other MPs and to crow about the fate of the 41-year-old mother, who was a prominent campaigner for remaining in the EU.

The northeast unit of National Action, which has campaigned for Britain to leave the EU, tweeted: “#VoteLeave, don’t let this man’s sacrifice go in vain. #JoCox would have filled Yorkshire with more subhumans.”

The police northeast counter-terrorism unit confirmed it was probing a number of “offensive messages on social media and extreme social media content”. A spokesman said: “We are conducting checks on this material to establish whether or not any criminal offences have been committed.” There have been numerous other disturbing messages from far-right supporters in other areas of the country, resulting in calls for police to monitor and investigate online hatred. A member of the English Defence League, another far-right group, posted on Facebook: “Many of us have been saying for years that sooner or later ‘SOMEONE’ was going to get killed. No one thought it was going to be one of ‘them’ (left-wing) who was going to be the first victim of the coming civil unrest heading towards Europe ... BUT he had reached his breaking point (like many of us) and snapped.”

One Twitter user described Mrs Cox as a “traitor” while another said she was a “threat to the UK” and described Mr Mair as an “Aryan warrior”. Another group, calling itself the Notts Casual Infidels, linked to a news story of Mrs Cox’s murder and posted on Facebook: “We knew it was only a matter of time before we take it to the next level. We have been mugged off for too long.” A man associated with Pegida UK, an anti-Islam group, posted on Facebook: “From today the game changed as a good friend said have a look at today’s date 16/06/2016. Next time the government must listen to its people.”

Matthew Collins, head of research at Hope not Hate, a charity that seeks to defeat the politics of extremism within British communities, said he was concerned that “there are a number of tiny, right-wing organisations that are taking great glory and satisfaction from Jo’s death”. He added: “I think the police should look at the motives behind some of those people that are continuing to speak so much hatred and division.” Mr Collins said that although there were many people who did not agree with or vote for Mrs Cox, “they had the decency to recognise the contribution she made to wider society”. Referring to hateful messages posted on social media, he said: “These people are so on the margins of society that they no longer have any sense of moral decency or moral codes. I think the police should look at the motives behind some of those people that are continuing to speak so much hatred and division and are well aware of what such words have led to. These people are engaged in a whole network of tearing down the moral fabric of society.”

Stephen Kinnock, the MP who shared an office with Mrs Cox, was subjected to “particularly venomous” online abuse last week after an article about his family’s support for the Remain campaign. One email threatened violence and has been reported to the police, he said. Mr Kinnock said the far right were a “shady bunch” who had many of their “views legitimised by the referendum and the choice of the Leave campaign to go hard on immigration”. “I get the sense that a lot of rhetoric around the Leave campaign would have been classified as far right only five years ago but now it’s more mainstream. “There seems to have been a drum beat over the years for venomous rhetoric. A lot of this referendum would have been classified as pretty extreme. “Many MPs have a siege mentality because of the abuse, so I do think something needs to be done about it, but the question is what. You’ve got to get a balance between free speech and protecting people’s security. The last thing we’d want to do is never hold surgeries, then the bad guys have won.”
© The Times


India: To counter hate messages online, Bareilly cops seek 2k ‘digital volunteers’

In order to keep an eye on "online rumour-mongering", police in Bareilly division is planning to rope in over 2,000 'digital volunteers' for the task.

20/6/2016- In poll-bound UP, these volunteers will keep a close eye on "communally-sensitive messages and polarization propaganda" that have the potential to disturb peace in the region. Deputy inspector general of police (DIG) Bareilly Range, Ashutosh Kumar, said, "We need at least 2,000 digital volunteers to tackle rumours and wrong information posted on social media sites. The director general of police has instructed every district to engage digital volunteers. As of now, the response from our range has been cold because of a lack of awareness about the initiative, but we are working towards it." Explaining the importance of engaging these volunteers, the DIG said, "If any objectionable content is posted on any social media platform, the first step for us is to lodge an FIR. Police then contact cyber police stations in Agra and Lucknow, from where officials write to the headquarters of these sites in foreign countries and the process of removing the content is initiated. It's a long process and much damage is done by the time this procedure is completed." He added, "We know that there are other ways of getting such content removed instantly, like on Facebook if the post receives a certain number of 'dislikes'. For such situations digital volunteers will have a huge role." These volunteers will also play an important part in informing the public through social media about what actually happened, he said.

According to the police, anyone who is a regular social media user can become a digital volunteer. A person who is interested in maintaining peace in their neighbourhood and is well-versed with social media can volunteer for it. To become a member, a person can follow the official accounts of the police on social media and inform them about it. "A few of the digital volunteers can reach the scene and help in making people aware of the truth. A riot-like situation takes place at many locations due to false rumours spread on WhatsApp, Facebook, Twitter, Instagram and other such sites," he said. "As UP is gearing up for state assembly elections, scheduled for next year, there are chances that a few persons will try to mislead people for their communal agenda, creating a law and order problem. To thwart their attempts, we need such initiatives," he said. "In fact, we also have software through which we can know how many persons are talking about a certain issue by typing a few keywords. We can also know how many of them are spreading wrong information, and trace their IP addresses," said the DIG.
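The article gives no detail about the software the DIG mentions, but the basic keyword-monitoring idea he describes, counting how many people are talking about an issue given a few keywords, can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual police tool:

```python
from collections import Counter

def count_keyword_mentions(posts, keywords):
    """Count, per keyword, how many posts mention it (case-insensitive)."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        for kw in keywords:
            if kw.lower() in text:
                counts[kw] += 1
    return counts

posts = [
    "Rumour: trouble near the market today",
    "All calm at the market, ignore the rumours",
    "Festival preparations under way",
]
print(count_keyword_mentions(posts, ["market", "rumour"]))
# -> Counter({'market': 2, 'rumour': 2})
```

Tracing who is spreading the misinformation, as the DIG also claims to be able to do, would additionally require platform data such as account details and IP addresses, which is what the FIR-and-request process described above is meant to obtain.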
© The Times of India


New Zealand: Cyberbullying: Retiring judge leads new centre to assess laws

Hub at Auckland University to provide research and development into technology’s effects on legislation.

17/6/2016- A new national cyber-law centre is being set up and its first project is putting the Harmful Digital Communications Act under the microscope. The New Zealand Centre for ICT Law, which opens next month at Auckland University, aims to provide an expanded legal education for students and provide research and development into the impact electronic technology has on the law. The centre's new director, retiring district court Judge David Harvey, said he regarded the centre as a vital hub for both the legal fraternity and the public. "More and more IT is becoming pervasive throughout our community and it's providing particular challenges and interesting developments as far as the law is concerned." Research was already underway on the effectiveness of the Harmful Digital Communications Act. Future projects would include digital aspects of the Search and Surveillance Act, Telecommunications Act and Copyright Act.

Mr Harvey, who consulted with the Law Commission on the legislation, said significant trends were already emerging in prosecutions taken under the Harmful Digital Communications Act. In its first year 38 cases had come before the courts, which he described as surprisingly high for such a recent law. "It's quite a few for a relatively new piece of legislation that's dealing with not a new phenomenon but a new technology, and it seems that the prosecution people with the police have been able to grapple with some of the aspects of this." Researchers had already noticed that a significant number of cases involved revenge porn and a broad swathe of electronic media used to harm others. "[The act] catches any information that's communicated electronically. If you're making a nasty telephone call using voice on your smartphone, that amounts to an electronic communication. So it's the scope of the legislation and who's being picked up that becomes very, very interesting."

He said it remained troubling to see the level of harm inflicted through technology. "It's a matter of concern that people seem to lack the inhibition that you would normally expect in what they say and what they do. "A number of cases have involved posting intimate photographs and intimate videos online with the intention of harming somebody else. The number of occasions on which that has occurred is surprising. "I think the level of anger that is expressed or at least the intensity of the language - hate speech - is also a matter of concern." Mr Harvey expected the second component to the act, the civil agency to be headed by NetSafe, would have an enormous impact. "It will be interesting to observe how many applications are made to the approved agency in the first place and subsequently how many are settled or resolved or go on to the court. I imagine there will be quite a bit of activity coming up once the civil enforcement regime is in place."

While it was still early days, he was confident the act was providing help to people being cyberbullied. "It won't solve the problem in the same way that making murder a crime doesn't stop murder but at least it will provide people with a remedy, with a place to go which they haven't had before."

© The New Zealand Herald


Britain First: The far-right group with a massive Facebook following

16/6/2016- The leader of Britain First has distanced the far-right group from the murder of Labour MP Jo Cox, despite several witnesses confirming that the killer shouted "Britain First" three times during the attack in Birstall, near Leeds, on Thursday. "At the moment that claim hasn't been confirmed - it's all hearsay," Paul Golding said. "Jo Cox is obviously an MP campaigning to keep Britain in the EU so if it was shouted by the attacker it could have been a slogan rather than a reference to our party - we just don't know. "Obviously an attack on an MP is an attack on British democracy - MPs are sacrosanct. We're just as shocked as everyone else. Britain First obviously is NOT involved and would never encourage behaviour of this sort. "As an MP and a mother, we pray that Jo Cox makes a full recovery." In a video on the party’s website he said the media had “an axe to grind”. He added: “We hope that this person is strung up by the neck on the nearest lamppost, that’s the way we view justice.”

What we know about the group
Formed in 2011 by former members of the British National Party, Britain First has grown rapidly to become the most prominent far-right group in the country. While it insists it is not a racist party, it campaigns on a familiar anti-immigration platform, while calling for the return of “traditional British values” and the end of “Islamisation”. The party says on its website: “Britain First is opposed to all mass immigration, regardless of where it comes from – the colour of your skin doesn’t come into it – Britain is full up.” Although it claims to have just 6,000 members, Britain First has managed to build an army of online fans, mainly by using social media to campaign for innocuous causes such as stopping animal cruelty, or wearing a poppy on Remembrance Day, and appealing for users to “like” its messages.

It now has more than 1.4 million “likes” on Facebook, more than any other British political party. In a bid to garner newspaper coverage, the group has carried out mosque invasions and so-called “Christian patrols”. A march in January targeted Dewsbury, near Jo Cox’s Batley and Spen constituency, and featured 120 Britain First members carrying crucifixes and Union Jacks through the town. Mrs Cox wrote on Twitter at the time: “Very proud of the people of Dewsbury and Batley today - who faced down the racism and fascism of the extreme right with calm unity.” Britain First’s current leader, Paul Golding, stood against Sadiq Khan in the London mayoral election earlier this year. After Khan’s victory, the group announced that it would take up “militant direct action” against elected Muslim officials. In a chilling warning on its website, the group said: “Our intelligence-led operations will focus on all aspects of their day-to-day lives and official functions, including where they live, work, pray and so on.” The party has a vigilante wing, the Britain First Defence Force, and last weekend held its first “activist training camp” in Snowdonia, at which a dozen members were given “self-defence training”.
© The Telegraph


Austria: Far-right leader caught up in online racism scandal

The leader of Austria’s far-right Freedom Party (FPÖ) was caught up in yet another scandal this week after his supporters posted racist comments about Austria's football team on his Facebook page.

16/6/2016- Many of Heinz-Christian Strache’s Facebook followers started posting anti-immigrant hate speech after Austria lost their first Euro 2016 game 2-0 to Hungary on Tuesday. The comments were published underneath a post from Strache wishing the Austrian team luck in their opening game. After they lost, he suggested that people keep their spirits up and that the referee was partly to blame for Austria’s loss. Some of his followers disagreed, however, arguing that having players from immigrant family backgrounds on the Austrian team might be why it lost. One poster described the Austrian team as “the amazing national team with two coal sacks”, likely referring to David Alaba and Rubin Okotie, who have a Nigerian-Filipino and Nigerian background respectively. Another user said he “could puke” when he sees “what is sold as Austria”.

Germans writing online had similar complaints about their own team. One commentator said that his team should no longer be called the German team but just “the team”, suggesting that because some of the German players' parents have immigrant backgrounds they are not true Germans. A member of the far-right Alternative for Germany (AfD) party recently also faced criticism for saying that the German team was “no longer German”, The Local Germany reported. It is not the first time that Strache has been caught up in a scandal involving comments left on his Facebook page. Only a few days ago, his followers posted death threats against Chancellor Christian Kern of the Social Democratic Party (SPÖ). The Freedom Party leader has had to ask his followers to be more moderate with their postings. The FPÖ have deemed these comments unacceptable but have also often said that they could not check each one, as there were so many posted every day.
© The Local - Austria


Imagine CYBERSPACE without HATE

By Deborah J. Levine, Award-winning author/Editor, American Diversity Report

14/6/2016- As a former target of Cyber Hate, I sat spellbound with various movers and shakers of Chattanooga’s Jewish community as we listened to Jonathan Vick, Assistant Director of the Cyber Safety Center of the Anti-Defamation League. Founded in 1913 “to stop the defamation of the Jewish people and to secure justice and fair treatment to all,” ADL’s tag line is “Imagine a World Without Hate®.” ADL began reporting on digital hate groups in 1985, exposing and monitoring groups such as Stormfront, created by former KKK leader Don Black. Stormfront was popular with white supremacists, neo-Nazis, bigots, and anti-Semites. In recent years, Stormfront has moderated its language somewhat to appear more mainstream. Its membership has grown to almost 300,000, despite reports documenting one hundred homicides committed by Stormfront members (Southern Poverty Law Center).

Hate groups like Stormfront pick up speed on the internet with new technologies, create global communities, raise funds, and convert the unwary into believers with sophisticated techniques. According to Vick, these groups can also intimidate into silence, disarm by hacking, encourage hate crimes, and punish by hijacking. The good news is that known groups can be better monitored on the internet and exposed, where once they operated under the radar. The not-so-good news is that the mask of anonymity of Cyber Hate can pose a huge challenge. In his 2011 address, Hate on the Internet: A Call for Transparency and Leadership, Abraham Foxman, ADL National Director, described the problem, which has only become worse with time. “Today, we have a paradigm shift, where Internet users can spew hatred while hiding behind a mask of anonymity. The Internet provides a new kind of mask - a virtual mask, if you will - that not only enables bigots to vent their hatred anonymously, but also to create a new identity overnight... Like a game of “whack-a-mole,” it is difficult in the current online environment to expose or shame anonymous haters.”

The major Internet companies wrestle with these issues, as do we. How should they define what is hateful and what violates their terms of service? How do they police the incredible number of posts, blogs, and videos posted online every minute? As companies like Facebook, Twitter, and YouTube grapple with privacy issues, the public needs to voice its concerns. Organizations like the ADL can and do influence what is uploaded and posted.

Vick discussed the Anti-Cyber Hate Working Group in Silicon Valley that ADL convened to explore these issues with tech companies. Given the current political cycle, this discussion is vital as religiously and politically motivated hacking increases. For example, Vick cited a “brag sheet” listing 35,000 websites that were hacked and left with anti-Semitic messages. The messages include “memes” that perpetuate stereotypes of Jews. Some are anti-Israel, others depict Jews as money lenders. They can be almost impossible to monitor, such as the (((Hugs))) graphic that identifies Jews on Twitter. On Google Chrome, there was an app that identified Jews on any given page; ADL contacted Google, which removed it, but this is an example of how the technology is evolving.

Technology adds to the aggressive presence of hate groups, as anyone who has been hacked can confirm. When my online magazine was hacked and hijacked, the FBI traced the perpetrators to a terrorist group in Iran. The American Diversity Report was erased and replaced by a single screen claiming responsibility and threatening my life with unrepeatable epithets topped off by “Death to mother-f***** Zionists!” All the sites on my webmaster’s server were similarly wiped out and replaced, whether shopping pages or golf tournaments. I was invited to leave the group, become my own webmaster, and implement my own security. All of which I’ve done in a highly motivated learning mode. Anything and anyone can be hacked. In the case of the Target stores, the hackers went through one of the company’s service providers, an air-conditioning company. Vick cited a case on Facebook where a user named Roman Kaplan had a weak password; his account was taken over by ISIS, which then had access to all his contacts and apps. The goal is to make you feel targeted, vulnerable, and isolated.

Vick offered advice for protecting yourself against digital terrorism.
Be Aware: Google yourself and know where your name appears. How do you identify yourself and what personal information do you give? Be aware of how your information is shared on the internet by organizations, including your synagogue.

Protect yourself: Passwords are your best protection. Don’t use your name, religion, location, or personal information. Instead, pick a favorite song lyric and use caps, numbers, and symbols with it (a sketch of this idea follows the list below). Don’t keep an online password vault with all your passwords in it. Write down the passwords in a notebook. Old-fashioned pen and paper will keep them safe.

Protect your website: Don’t host your own website. Use a reputable company and make sure that it has a phone number to contact in emergencies. Know when people visit and why. If you start getting friend requests from strangers in strange places, or see unusual traffic spikes, be suspicious.

Protect your e-mails: Have multiple email accounts for various audiences. Do not use the same password on all accounts. Watch for phishing emails and robot calls. They may appear to come from companies that look real, but the actual email is bogus. Advise staff never to open an attachment from an unknown source. Don’t click, don’t open. When in doubt, delete. Err on the side of caution.

Protect your social media presence: Know who is posting and tagging your pictures. Segregate your personal, community and professional life on separate pages. Limit the amount of personal information that you post. Know where your posts and blogs are going. Who is likely to target you? Know where your problem people are and, if you’re enraged, take time before you respond. Know the terms of service, what crosses the line of acceptability, and how to report an incident.

Protect your devices: Understand the inter-connectedness of devices and apps. Your mobile provider knows what you are doing. Apps know what you’re doing. When you are logged into a service, it knows what you’re doing. If you have various sites open, they can all see what you’re doing. Make no presumption of privacy on mobile devices.
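As flagged in the “Protect yourself” item above, the song-lyric password advice can be illustrated with a short sketch. This is a hypothetical Python scheme, not something from Vick’s talk; the function and the lyric are invented for illustration, and long random passwords from a password manager remain a stronger option:

```python
def lyric_password(lyric):
    """Turn a memorable lyric into a password seed: the initials of its
    words in alternating case, plus a digit and a symbol to satisfy
    common complexity rules."""
    words = lyric.split()
    initials = [
        w[0].upper() if i % 2 == 0 else w[0].lower()
        for i, w in enumerate(words)
    ]
    return "".join(initials) + str(len(words)) + "!"

print(lyric_password("imagine all the people living life in peace"))
# -> IaTpLlIp8!
```

The lyric itself stays in your head, and nothing in the result exposes your name, religion, or location, which is the point of the advice.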
© The Huffington Post


Australia: Facebook dragged into QUT racism case

Facebook has been dragged into a racial discrimination case involving three Queensland university students.

13/6/2016- Federal Circuit Court Judge Michael Jarrett on Monday ordered that the social media giant be subpoenaed for information on the account details of a Queensland University of Technology student accused of making a racist comment online. He ordered the subpoena be sent to Facebook's international headquarters in Dublin, along with 100 euros to cover international postage fees. Calum Thwaites has denied being responsible for a two-word racist post in a 2013 Facebook thread about three students being asked to leave an indigenous-only computer lab. He claims the post was not written by him and came from a fake account. Mr Thwaites is being sued for $250,000 alongside fellow students Alex Wood and Jackson Powell by Cindy Prior, the indigenous administration officer who asked the students to leave.

Mr Wood has not denied posting "Just got kicked out of the unsigned indigenous computer room. QUT (is) stopping segregation with segregation?" on Facebook after being asked to leave the lab, and Mr Powell has admitted writing "I wonder where the white supremacist lab is?" However, both deny their posts were racist. Barrister Susan Anderson, representing Ms Prior, told the court on Monday that Facebook should be asked to provide details about Mr Thwaites' accounts. Ms Anderson said the information from Facebook, providing it still had it, would probably be able to answer whether Mr Thwaites was behind the post. Tony Morris QC said that although his client, Mr Thwaites, would be "delighted" to be proved right, the application to subpoena the documents was futile and would only "muddy the waters" of the case. Judge Jarrett will publish his reasons for allowing the subpoena in the coming days. Lawyers representing the trio have called for the matter to be dismissed; however, Judge Jarrett is yet to deliver his judgment on that application.
© 9 News


Twitter Can't Figure Out Its Censorship Policy

13/6/2016- New York Times editor Jon Weisman announced he was leaving Twitter last week, thanks “to the racists, the anti-Semites, the Bernie Bros who attacked women reporters yesterday.” Enough was enough. Here’s what happened: in response to a rash of hatred on the site, Weisman’s colleague Ari Isaacman Bevacqua (also a Times editor) reported to Twitter support accounts that used anti-Semitic slurs and threats. Twitter replied that it “could not determine a clear violation of the Twitter Rules,” Weisman told me. It didn’t make sense to him. Weisman isn’t alone. A Human Rights Watch director, a New York Times reporter, and a journalist who wrote about a video game have all reported a similar phenomenon. Still more confirmed the process independently to Motherboard. They each got what they perceived to be a threat on Twitter, reported the tweet to Twitter support, and received a reply that the conduct does not violate Twitter’s rules.

When Twitter made new rules of conduct in January, the company gave itself an impossible task: let 310 million monthly users post freely and without mediation, while also banning harassment, “violent threats (direct or indirect)” and “hateful conduct.” The fault lines are showing. The hands-off response that Bevacqua received fits with the Twitter that CEO Jack Dorsey touts. Censorship does not exist on Twitter, he says. But there’s another side to Twitter, one with a “trust and safety council” of dozens of web activist groups. This side of Twitter developed a product to hunt down abusive users. It’s the one that signed an agreement with the European Union last month to monitor hate speech. It’s joined by Facebook, YouTube, and Microsoft in the agreement, and while it’s not legally binding, it’s the first major attempt to put concrete rules in place about how online platforms should respond to hate speech.

“There is a clear distinction between freedom of expression and conduct that incites violence and hate,” said Karen White, Twitter’s head of public policy for Europe. What’s not entirely clear is how Twitter is going to enact this EU agreement, though it seems the platform will rely on users reporting offensive content. The internet has always been a breeding ground for vitriol, but it has become much more prevalent lately. Neo-Nazis have been putting parentheses, or “echoes,” around the names of Jewish writers. Google Chrome recently removed an extension called Coincidence Detector that added these around writers’ names. The symbol represents “Jewish power,” because anti-Semites just can’t give up on their theory that Jews are behind everything bad in history. From a practical standpoint, policing hate speech on a platform with 310 million monthly users is difficult. The “echoes” don’t show up on a Twitter search or on a Google search.
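Why the parentheses defeat search is easy to demonstrate: indexers typically normalise text by stripping punctuation before matching, so marked and unmarked names collapse into the same token. Here is a toy sketch in Python of that normalisation step; it is an illustrative tokenizer, not any real search engine’s pipeline:

```python
import re

def tokenize(text):
    """Naive indexer-style normalisation: lowercase, keep only word characters."""
    return re.findall(r"\w+", text.lower())

print(tokenize("a post by (((Weisman))) today"))
# -> ['a', 'post', 'by', 'weisman', 'today']
print(tokenize("a post by Weisman today"))
# -> ['a', 'post', 'by', 'weisman', 'today']
```

Both versions reduce to identical tokens, so a query for the parenthesised form surfaces nothing distinctive: the marking is visible to readers but invisible to the index.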

Twitter wants to be a place of open and free expression. But it also, at least according to a statement to the Washington Post, wants to “empower positive voices, to challenge prejudice and to tackle the deeper root causes of intolerance.” “I would say that much of the anti-Semitism that is being spread on Twitter and other platforms is not new in terms of the messaging and content,” Oren Segal, director of the Anti-Defamation League’s Center on Extremism, told Motherboard. “What’s new is for people to be able to deliver their hatred in such public and efficient ways.” The “echoes” symbol has extended to be a sign of racism as well. The symbol is common on Twitter even without the extension—some writers have put the marks on their names voluntarily, to reappropriate the symbol, but others use it for hatred. One user sent Weisman a photo of a trail of dollar bills leading to an oven.

Another user tweeted in reply: “well Mr. (((Weisman))) hop on in!” This user has a red, white, and blue flag with stars, stripes, and a swastika as his cover photo. It’s a flag from the Amazon television series The Man in the High Castle, which depicts an America under Nazi control. Weisman reported these tweets to Twitter. The site didn’t remove them. Some others, though, were removed. “Suddenly I get all these reports back saying this account has been suspended,” Weisman told the Washington Post. “I don’t really know what their decisionmaking is,” he said. “I don’t know what is considered above the line and what isn’t.” “It’s not like this echo or this parentheses meme was in and of itself the most creative and viral anti-Semitic tactic that we’ve seen,” Segal said. “It’s relevant because we’ve come to a time of more anti-Semitism online... It represented one element of a larger trend.”

Twitter has taken action against accounts perceived as offensive in the recent past. Recently it suspended five accounts that parodied the Russian government, although now the most popular of these, @DarthPutinKGB, is back up. Since mid-2015, Twitter has suspended more than 125,000 accounts for promoting terrorism, a practice that picked up in 2013. After the March terrorist attacks in Belgium, the hashtag #StopIslam was trending. Twitter removed it from the trending topics sidebar, although many instances of the hashtag were using it in a critical light. Earlier this year, the platform revoked the “verified” status of Breitbart personality Milo Yiannopoulos, who tweets provocative messages that have been described as misogynistic and as harassment. Yiannopoulos said he reached out to Twitter twice, but he never got an answer about why he was un-verified. The platform was frustratingly unresponsive, as users who reported offensive tweets found as well.

To Twitter’s co-founder and CEO, bigotry is part of life. “It’s disappointing, but it’s reflective of the world,” he said when Matt Lauer asked him about people who use the platform to “express anger and to hurt people and insult people.” He reminded Lauer that users are free to block whomever they’d like, although he’s never blocked anyone on his account.
© Motherboard


Finland: Police ponder probe into Soldiers of Odin secret Facebook group

Police say they are considering a criminal investigation into racist messages exchanged in a secret Facebook group by leaders of the Nazi-linked Soldiers of Odin. Police chief Seppo Kolehmainen confirmed to Yle that police will try to determine whether or not any of the group’s messages are criminal in nature.

11/6/2016- In March this year, Yle obtained screenshots of a secret Facebook group maintained by leaders of the anti-immigrant group Soldiers of Odin, which was founded by Kemi-based neo-Nazi Mika Ranta late last year. Among the regular greetings used by members of the group is the salutation "Morning racists." The posts also feature members showing Nazi salutes and include images of Nazi symbols. As reported earlier this week by Yle, leaders also suggested patrolling without insignia so as to be able to engage in attacks more freely, urging members to have "unmarked patrols and zero tolerance for dark skin" and to "hammer anyone who even leans to the left". Police Commissioner Seppo Kolehmainen told Yle that officers will be looking into the group’s posts to see if they bear the hallmarks of criminal activity. "We are now evaluating the content of the messages to see whether or not they can be considered criminal. The National Bureau of Investigation is now responsible for the evaluation and on that basis we will determine whether or not to begin an investigation into some message or individual," Kolehmainen said.

Fresh assault conviction
Finnish news agency STT first reported on law enforcement’s intention to investigate the group and its messages. Soldiers of Odin founder Mika Ranta was convicted of aggravated assault in May. He had previously been convicted of racially motivated attacks on two immigrants in 2005. Ranta, who was previously a member of the neo-Nazi Finnish Resistance Movement, said he founded Soldiers of Odin ostensibly to protect nationals following the arrival of asylum seekers in the northern town of Kemi.
© YLE News


Google didn’t need to delete the anti-Semitic (((Echo))) app (opinion)

The reaction from social media users was the best two-fingered response that Twitter has ever seen, as Jewish people reclaimed their identity from the trolls who hoped to use it against them
By Jacob Furedi


9/6/2016- Anti-Semitism is all the rage these days. From the emergence of far-right parties across Europe to our very own Labour party, we are constantly warned that life as a Jew is becoming rather unpleasant. Most recently, animosity towards the Jewish people has extended into the cyber sphere. An anti-Semitic app available to download on Google Chrome has made its way into the public sphere after Jonathan Weisman, deputy Washington editor for the New York Times, raised questions about why Twitter trolls were referring to him as (((Weisman))). He had just tweeted about an article criticising GOP candidate Donald Trump titled “This is how fascism comes to America”. It became clear that certain users had downloaded a “Coincidence Detector” which automatically surrounded Jewish names written on the internet in parentheses. ‘Israel’ automatically reads as (((Our Greatest Ally))). Users of the app consequently used the symbol to denote a Jewish subject online.

Having been born into a Jewish family, I’m not particularly surprised. To be honest, the most offensive element of the app is its shameful appropriation of that fantastic grammatical tool: the parenthesis. By highlighting the presence of Jewish names, the app intends to make users aware of Jewish involvement in the media. According to its creators, the chosen people have secretly masterminded a plot to take over the world. Given the apparent ignorance of the schmucks who created the “Coincidence Detector”, it wouldn’t be surprising if their deeply-held fear was correct. Perhaps I’m being harsh. The algorithm used by the app was pretty clever. Its parentheses make the anti-Semites who use them almost untraceable, given that search engines tend to exclude punctuation from their search results. Anyhow, Google decided that it no longer wanted to host the extension and promptly removed it from its store, citing “hate speech”. Given that Google is a private company, it had every right to withdraw an extension that might affect the reputation of its business.

But was it necessary? The Twittersphere’s reaction suggests not. Rather than needing to be shielded from anti-Semitic users, people actively chose to track them down and expose their prejudiced convictions. Jewish users reacted with the best two-fingered response Twitter has ever seen. They promptly edited their usernames to include the symbol that was previously being used against them. Jonathan Weisman became (((Jonathan Weisman))) and Jewish journalists and writers followed suit. Soon our newsfeeds were plastered with comments from (((Jeffrey Goldberg))), (((Yair Rosenberg))), (((Greg Jenner))) and (((Lior Zaltzman))). Instead of appealing to “hate speech”, these people thought it more prudent to reclaim their Jewish identity from a few trolls who hoped to use it against them.

Despite its five-star rating on Google’s store, the app was downloaded by only 2,473 people. And it showed. Their voices were soon drowned out by swathes of users undermining their anti-Semitic cause. Crucially, the counter-movement demonstrated that Jewish users didn’t need Google to protect them from the ‘Coincidence Detector’. They were perfectly capable of doing that themselves. From their enslavement in Egypt to their genocide in Eastern Europe, the Jewish people have never had it easy. But, importantly, they still survived. We shouldn’t be too surprised, therefore, that they managed to deal with a crudely devised anti-Semitic app. ‘Coincidence’? I think not.
© The Independent - Voices


How Jews Are Reclaiming a Hateful neo-Nazi Symbol on Twitter

To combat the online vitriol, Jews and non-Jews alike are adopting a controversial new method which, some critics say, is equivalent to pinning a yellow 'Jude' star to one’s shirt.

7/6/2016- It is not a particularly pleasant time to be a Jew on the Internet. In recent weeks, Jewish journalists, political candidates and others with Jewish-sounding names have endured a torrent of anti-Semitic vitriol online, much of it coming from self-identified supporters of U.S. Republican presidential candidate Donald Trump. Until it was removed last week, a user-generated Google Chrome extension allowed those who installed it to identify Jews and coordinate online attacks against them. It has gotten so bad that the Anti-Defamation League has announced that it is forming a task force to address racism and anti-Semitism on social media.

Last week, Jeffrey Goldberg, a national correspondent for The Atlantic, decided to fight back. He changed his Twitter username to (((Goldberg))), co-opting a symbol that neo-Nazis and others associated with the so-called “alt-right” use to brand Jews on blogs, message boards, and social media. The “echoes,” as they are called, allude to the alleged sins committed by Jews that reverberate through history, according to Mic, a news site geared toward millennials that first explained the origins of the symbol. Then Yair Rosenberg of Tablet Magazine, another popular troll target, encouraged his followers to put parentheses around their names as a way to “raise awareness about anti-Semitism, show solidarity with harassed Jews and mess with the Twitter Nazis.” Several journalists and other Jewish professionals followed suit, and the “thing,” as Internet “things” are wont to do, took off.

Jonathan Weisman, a New York Times editor who changed his username to (((Jon Weisman))) over the weekend, wrote on Twitter that the campaign was a way to show “strength and fearlessness” in the face of bigotry. Weisman was the victim of a barrage of anti-Semitic abuse last month after he tweeted the link to an article in the Washington Post that was critical of Trump. Weisman retweeted much of the filth — including memes of hook-nosed Jews and depictions of Trump in Nazi regalia — that came his way. “Better to have it in the open,” he wrote. “People need to choose sides.” In Israel, where Twitter is less popular than other social media platforms like Facebook and Instagram, a small number of journalists, including Haaretz’s Barak Ravid, joined the cause.

Many non-Jews also added the parentheses to their usernames out of solidarity. Among them was NAACP President Cornell Brooks, who tweeted on Saturday: “Founded by Jews & Blacks, the haters might as well hate mark our name [too]: (((@NAACP))).”  Neera Tanden, president of the Center for American Progress, a left-leaning think tank, told Haaretz that she joined the campaign after being targeted on Twitter. “I don’t know if they thought I was Jewish or that they are just awful,” said Tanden, who is Indian-American and not Jewish. “Anti-Semitism is as hateful as racism and sexism and as a progressive, I stand against it.” Yet the cheeky campaign struck some Jews as unseemly, the virtual equivalent of willingly pinning a yellow “Jude” star to one’s shirt. On Sunday, the journalist Julia Ioffe tweeted that she was “really uncomfortable with people putting their own names in anti-Semitic parentheses.”

Ioffe, who filed a police report in Washington, D.C. last month after receiving threatening messages following the publication of an article she wrote about Melania Trump, told Haaretz that she understood the purpose of the campaign and was not calling for others to abstain from participating. Nevertheless, she said, it only seemed to provoke more harassment. “The second I started tweeting about it, all those bottom dwellers immediately rose to the surface and said things like, ‘You’re doing our work for us,’” Ioffe said. Goldberg explained that his goal was simply to mock neo-Nazis by reclaiming and neutralizing an element of their online culture, such as it is. He said he was inspired by “the way the LGBT community took the word ‘queer’ and made it their own.” (On Sunday, he reversed the parentheses around his last name. Why? “Just because I can.”)

In a statement to Haaretz, ADL CEO Jonathan A. Greenblatt wrote: “There’s no single antidote to anti-Semitism posted on Twitter. An effective response includes investigating and exposing the sources of hate, enforcing relevant terms of service, and promoting counterspeech initiatives. From our perspective, the effort by Jeffrey Goldberg and others to co-opt the echo symbols is one positive example of clever counterspeech.” On Monday, the ADL added the triple parentheses to its online hate symbols database. The parentheses are beginning to disappear from Jewish Twitter usernames as “our little war on #altright,” in Weisman’s words, seems to have reached a stalemate. But the debate about whether or not it was “good for the Jews” to out themselves in such a way is still roiling.

Mordechai Lightstone, a rabbi in Brooklyn who works in the Jewish social media world, said it was dangerous “if we only subvert these hateful acts and use that as the sole basis to define our identities.” A better solution, he said, would be to “channel this into positive actions expressing Jewish pride.” How best to fight back against the anti-Semitic trolls is both a moral and logistical dilemma, according to Ioffe. She noted that it is impossible to determine how many there are and whether or not they are real people or bots. (The "Coincidence Detector" Chrome extension that automatically put parentheses around Jewish-sounding names had been downloaded about 2,500 times before it was removed by Google for violating its policy against harassment.) “It’s hard to figure out how to strike that balance between standing up to them and giving them too much attention, between de-fanging them and giving them more fodder,” she said. “I think it’s something that we Jewish journalists are going to have to continue to grapple with.”
© Haaretz


USA: This Guy’s Simple Google Trick Sums Up Racism In A Nutshell

8/6/2016- One need look no further than current events to see that racism is sadly alive and well in America in 2016. From the fact that George Zimmerman can attempt to auction off a gun he used to kill a black teen, to #OscarsSoWhite, to the very fact that Donald Trump is the Republican presidential candidate, racism continues to dominate headlines in this modern day and age. To give one example and show just how systemic a problem racism in our country is, a guy with the Twitter handle @iBeKabir recorded this video of himself performing a very simple Google trick. First he searches for the images that come up when you google “three black teenagers.” The results are predominately mugshots and photos of inmates. “Now let’s just change the color right quick,” he says, replacing the wording with “three white teenagers.” What this search yields are generic stock photos of smiling white teens palling around, some holding sporting equipment. The post has unsurprisingly accumulated over 45,000 likes and 50,000 retweets in less than 48 hours at the time of this writing, and those numbers will only continue to climb as the tweet goes viral, obviously indicating that he’s struck a chord with far too many people.
© UpRoxx


Israel: Shaked: Facebook, Twitter removing 70% of ‘harmful’ posts

Social media giants clamping down on incitement to violence in Israel, says justice minister

7/6/2016- Facebook, Twitter and Google are removing some 70 percent of harmful content from social media in Israel, Justice Minister Ayelet Shaked said Monday. Speaking at a press conference in Hungary, Shaked said the social media giants were working to remove materials that incite violence or murder, the Ynet news website reported. Shaked was attending a conference in Hungary on combating incitement and anti-Semitism on the Internet. In a post on her Facebook page, she said: “The Hungarian Justice Minister said correctly that verbal incitement can lead to physical harm and that he is committed to the war on incitement. Anti-Semitic internet sites in Hungary have already attacked him for the existence of the conference. “A joining of forces by justice ministers from all over the world against incitement, and our joint work vis-à-vis the internet companies, will lead to change. “Already now, the Israeli Justice Ministry is managing to remove pages, posts and inciteful sites by working with Facebook and Google.”

Social media first came to the fore as a key tool for avoiding state-operated media organs and for communicating, particularly for the young, during the so-called Arab Spring, the wave of protests that swept the Arab world between 2010 and 2012. More recently, and for similar reasons, it has become the preferred medium through which terror groups try to communicate their messages and recruit new members. Palestinian social media has played a major role in the radicalization of young Palestinians during the current wave of violence against Israelis, which began in October. In one recent example of a crackdown on internet incitement, Twitter closed dozens of accounts held by members of the Izz ad-Din al-Qassam Brigades, the military arm of Hamas.

In response, the Brigades’ spokesman, who goes by the nom de guerre Abu Obeida, vowed: “We are going to send our message in a lot of innovative ways, and we will insist on every available means of social media to get to the hearts and minds of millions.” The terror group uses its social media accounts to publish internal news about the organization, such as when its members die in training accidents, and also to call for and praise attacks against Israeli civilians.
© Times of Israel

top

Online anti-Semitism: Difficult to Fight, but Even Harder to Quantify

Amid the Jew-hating, anti-Israel and Holocaust-denying conversations, 12 percent of the anti-Semitic discourse one Israeli company monitors is Trump-related.

7/6/2016- Julia Ioffe, a Jewish journalist, becomes the target of anti-Semitic attacks, and even death threats, from Donald Trump supporters on social media after she publishes a profile of his wife Melania.
Jonathan Weisman, a Jewish editor at The New York Times, finds himself inundated with anti-Semitic epithets from self-identified supporters of the presumptive Republican presidential candidate after the editor tweets an essay on fascist trends in the United States.
Erin Schrode, a young Jewish Democrat running for Congress in California, receives a torrent of Jew-hating messages on Facebook (“Fire up the ovens” was just one of the gems) in what appears to be an orchestrated attack launched by American neo-Nazis.
A Google Chrome extension (removed a day after it was discovered) marks members of the Jewish faith online by placing three sets of parentheses around their names.

Mere coincidence, or is this the dawn of a new and dangerous era in online anti-Semitism? The honest answer, say those in the business of tracking attacks on Jews, is that it’s hard to tell. In the old offline world, life was far less complicated. You counted acts of vandalism, physical assaults and whatever else was quantifiable, compared the total with the previous year, and then determined whether things were getting better or worse for the Jews. With the advent of social media, however, those sorts of calculations have become virtually impossible. Not only is it difficult to know what to count (Tweets? Retweets? Likes? Posts? Shares? Follows? Reports of abuse?), but also, with billions of people posting online, how do you begin searching?

“Back in the days when online anti-Semitism was confined to websites like Stormfront and Jew Watch, we were able to keep statistics,” says Rabbi Abraham Cooper, who runs the Digital Terrorism and Hate Project at the Simon Wiesenthal Center in California. “But in the era of social networking, the numbers have become meaningless. If you get one good shot in and it goes viral, how do you count it? Social networking has changed the whole paradigm.” Jonathan Greenblatt, chief executive of the Anti-Defamation League, has been keeping himself busier than usual this election season, calling out anti-Semites, their supporters and apologists. Yet, even he is reluctant to describe the current level of online attacks as unprecedented. “Back in 2000, when Joe Lieberman was on the presidential ticket, there were anti-Semitic attacks against him, too. So there’s certainly a history of these things,” he notes. “But we didn’t have Twitter back then. What social media has done is offer a platform that circulates some of the most noxious ideas in ways that were never previously possible, allowing bigots and racists, once marginalized by mainstream society, to now come out of the woodwork.”

Even if it were possible to make accurate numerical calculations about online anti-Semitism these days, says Greenblatt, there is no way to know if the situation has become worse, “because we don’t have a sample set from previous elections with which to compare.” Probably the closest thing to hard statistics related to the phenomenon appears in a recent report compiled by Buzzilla, an Israeli company that monitors and researches discussions in various online arenas: responses to articles, blogs, forums and social media. In preparing the report – commissioned by an Israeli nonprofit that promotes Holocaust remembrance – Buzzilla scoured the Internet for key phrases associated with anti-Semitism (“Hitler was right,” “burn the Jews,” “hate the Jews” etc.). “We define anti-Semitism as content that is against Jews, not against Israel per se,” says Merav Borenstein, Buzzilla's vice president for strategy and products. Regardless, she notes, Israel serves as a lightning rod for online anti-Semitism.

Examining anti-Semitic discourse over the course of a 12-month period ending in March 2016, the report found a spike in the last three months of 2015, coinciding with the spate of Palestinian stabbing attacks against Israelis. “We have found that whenever Israel is in the news – and this was true during the Gaza War in the summer of 2014 as well – it translates into a rise in online anti-Semitism,” says Borenstein.


Cooper, of the Simon Wiesenthal Center, confirms this pattern. “You can almost write the script,” he says. “Within an hour of any terror attack against Jews or Israelis, the images of the perpetrators are up online, and they are touted as heroes who should be emulated.” According to the Buzzilla report, roughly 600 anti-Semitic conversations took place in the arenas it monitors in April 2015. By March 2016, that number had almost tripled. (The peak month was December 2015, with 2,500.) At the request of Haaretz, Buzzilla also examined how much of the recent anti-Semitic discourse on the Internet has been fueled by the Trump campaign. It found that since the beginning of this year, 12 percent of the total volume of anti-Semitic discourse in the arenas it monitors has been related to the presumptive Republican presidential candidate, although not posted by him personally.
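Buzzilla has not published its methodology, but the key-phrase approach described above can be sketched in a few lines: scan a stream of dated posts for a fixed watchlist of phrases and tally the matches by month. Everything below, from the phrase list to the post format, is an illustrative assumption rather than Buzzilla's actual system.

```python
# Illustrative sketch only: count posts per month that contain any phrase
# from a watchlist, the kind of tally behind figures like "600 conversations
# in April 2015".
from collections import Counter
from datetime import date
import re

WATCHLIST = ["hitler was right", "burn the jews", "hate the jews"]  # phrases cited in the article
PATTERN = re.compile("|".join(re.escape(p) for p in WATCHLIST), re.IGNORECASE)

def monthly_counts(posts):
    """posts: iterable of (date, text) pairs. Returns a Counter keyed by (year, month)."""
    counts = Counter()
    for posted_on, text in posts:
        if PATTERN.search(text):
            counts[(posted_on.year, posted_on.month)] += 1
    return counts

sample = [
    (date(2015, 4, 2), "some innocuous comment"),
    (date(2015, 4, 9), "... hitler was right ..."),
    (date(2015, 12, 1), "... HATE THE JEWS ..."),
]
print(monthly_counts(sample))  # Counter({(2015, 4): 1, (2015, 12): 1})
```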

Flagging offensive content
They Can’t is the name of a relatively new Israeli nonprofit devoted to fighting online anti-Semitism. Through a network of grass-roots activists, the organization flags anti-Semitic content, mainly on YouTube and Facebook, and demands that it be removed. Its founder, Belgian-born Eliyahou Roth, says their track record is unmatched. “Over the past three years, we’ve managed to remove more than 45,000 accounts, pages, videos, posts and photos with anti-Semitic content from the Internet,” he says. “About 41,000 items were what we call classic anti-Semitic items, another 1,000 dealt with Holocaust denial, and the rest, which were in Arabic, fell into the category of terror incitement.” That was out of a total of 78,500 anti-Semitic items that his organization tracks on an ongoing basis. Over at the Simon Wiesenthal Center, Cooper says that the number of anti-Semitic items his organization has succeeded in removing from the Internet is “probably in multiples of tens of thousands.”

But such success is not the norm, according to a report prepared earlier this year by The Online Hate Prevention Institute. Titled “Measuring the Hate: The State of Anti-Semitism in Social Media,” it found that out of 2,000 anti-Semitic items the Australia-based organization had been tracking over a period of 10 months, only 20 percent had been removed from the Internet. The report did take note, however, of significant variations in the response rates of different social media companies. Facebook was hailed as the company most responsive to demands to remove anti-Semitic content, whereas YouTube was the least responsive. A breakdown provided in the report of anti-Semitic content by category found that 49 percent was “traditional” (defined as containing “conspiracy theories, racial slurs and accusations such as the blood libel”), 12 percent was related to Holocaust denial, 34 percent to Israel, and 5 percent promoted violence against Jews.

Acknowledging the difficulties of quantifying online anti-Semitism, David Matas, a prominent Canadian human rights lawyer, points to a key indicator that social media companies adamantly refuse to divulge, although it could provide a useful benchmark: the number of complaints they receive about anti-Semitic content. Speaking at a recent conference in Jerusalem, Matas, who also serves as senior legal counsel of B’nai Brith Canada, lamented that “unless we have a solution on metrics, we cannot even know the problem.” Danielle Citron, a professor of law at the University of Maryland and an expert on online harassment, is not sure whether online anti-Semitism is spreading or simply drawing more attention. “What I can say is that it’s become more mainstream,” she notes. “It is no longer hidden in the dark corners of the internet like it once was. We are now seeing it on very mainstream platforms like Facebook and Twitter.”

At the same time, Jew-haters are clearly feeling more emboldened – not only by the anonymity provided by social media, says Citron, but also, more recently, by the nod they’ve received from the Republican presidential hopeful. “Trump gives people permission to be hateful, whether that is to women, to the disabled or to Jews,” she explains. How much of what seems like an uptick in online anti-Semitism can be blamed on extreme right-wingers who support Trump and how much on extreme left-wingers who hate Israel? “I see two twin vectors converging here,” says the ADL’s Greenblatt. “One is right-wing anti-Semitism, steeped in white supremacist ideology, and it’s very anti-Jewish. Then there is the left-wing anti-Semitism, steeped in anti-Israel ideology. In my estimation, though, the end result is the same: Jews are being attacked for being Jewish. It’s prejudice plain and simple.”
© Haaretz

top

Canadian content rules for online media have weaker support, survey suggests

Canadians back regulations, but want a more 'hands off' approach online, pollster says

3/6/2016- Canadian content rules need updating, the majority of respondents in a new online poll said — but people had more divided views on whether online media should be subject to the same regulations as traditional media. The online poll conducted by the Angus Reid Institute comes after Federal Heritage Minister Mélanie Joly announced in April a period of public consultation around current broadcasting and content regulations, with the possibility of changes to laws and agencies as soon as 2017. Roughly 56 per cent of the 1,517 Canadians surveyed said online media should not be subject to the same types of CRTC regulation as traditional media, while 44 per cent said all media should be regulated the same.

When asked by pollsters whether existing policies "do a good job of promoting" Canadian cultural content, 40 per cent said yes, 26 per cent said no and the rest were uncertain. However, 60 per cent of those surveyed replied that the current Cancon regulations need to be reviewed and updated. The survey's release coincides with CTV's announcement it would cancel Canada AM after 43 years, a change that could leave a "big hole" in the Canadian content spectrum depending on what replaces it, said Shachi Kurl, executive director of Angus Reid. Overall, Kurl said that Canadians support media regulations, but want a more "hands off" approach online. This is especially true among Canadians aged 18 to 34, she said, who use newer media such as Spotify and Netflix. Young people often see stars, including Justin Bieber, who were discovered on YouTube and perceive it as "doing it on their own," Kurl said. "The argument has yet to be made for these younger Canadians that protection, supports and government regulation is something that will enable Canadian content to thrive," she said.

Protect and promote culture
A majority of respondents, 61 per cent, said Canadian culture is unique and needs government support to survive, while the remaining 39 per cent said Canadian media "will be fine without specific protection policies and support from government." Respondents across the country supported cultural protection, with Quebecers having the most support at 70 per cent and Albertans showing the lowest support at 54 per cent. Kurl said that even though the majority of Canadians still support regulation, it may not stay that way. "Across Canada, two in five [people] or more think that actually it's time to take the reins off," she said. "It's not the majority view, but it's a growing view."
The polls by the Angus Reid Institute were conducted between May 10 and 13, 2016, interviewing 1,517 Canadians via the internet. A probabilistic sample of this size would yield a margin of error of plus or minus 2.5 per cent, 19 times out of 20.
© CBC News

top

USA: Logic isn't needed for Internet (opinion)

By Roger Bluhm, managing editor of the Dodge City Daily Globe.

2/6/2016- People don’t use logic when it comes to the Internet. People consistently create fake online reports of celebrities dying, just to stir things up. Not long ago Gabriel Iglesias, the comedian known as “Fluffy,” was the victim of this hoax. As people were offering good will and prayers to his family, he tweeted out that he was still alive. There is no logic in “killing” someone online, yet it has happened so often that when a real report comes out, we take a while to believe it. Then there are the anonymous posters in chat rooms or on Facebook, people who say mean things for their own benefit or just to start a situation. What’s the purpose? As I’ve said repeatedly in this space, if you have the guts to say something, have the guts to put your name on it. Own it.

How about those who go online looking for love, or lust? Millions of world wide web surfers are looking for the perfect match, the perfect right now or the perfect hook-up for later. I’m in the minority, it seems, as my wife and I have been together almost 25 years (my anniversary is in February). Almost all of my cousins on my mother’s side of the family have been divorced at least once. I have two cousins who have each been married — and divorced — five times. Of course, at least three times their marriages fell apart because they found a new love online.

Terror groups like ISIS recruit our youth online. How? They tell our children we don’t care about them. They preach to the side of teenagers that wants to rebel, but also wants to be wanted. It’s amazing how terror groups have exploited our teenagers, but it doesn’t have to be like that. We can be more present in our children’s lives and make sure they know we love them and that they can tell us anything. I mean anything, because this is also how children get molested and molesters get away with it. Logic would suggest a person doesn’t go looking for hate groups, bomb-making instructions or child porn, yet it happens. Neo-Nazi groups have websites, bomb-building instructions (and recipes for methamphetamine) are available online, and child porn has been shared and collected since the Internet was first introduced.

Logic doesn’t apply at all.

It has always amazed me how the best of advances can also be the worst of advances. The Internet gave people across the world a way to talk to one another. In the early days there were internet chat rooms where a man in England could talk with someone in Idaho. Of course, as is human nature, this created a whole new way for people to connect with others and disconnect from loved ones. We should have guessed then what was coming. As the Internet grew — faster, more reliable — and cell phones turned into smart phones, logic continued to go out the window. Can anyone older than 30 believe people are dying because someone just has to read a text? It’s reading while driving, and no one would have thought that smart or safe 20 years ago, but people do it all the time now.

In fact, people have been hurt in many ways simply by paying attention to their smart phones and not to their surroundings. A huge debate sprang up online recently over the death of a gorilla at a zoo. It seems a toddler got into the moat surrounding the gorilla habitat and the animal grabbed the toddler. Zoo officials killed the gorilla to save the child’s life, yet some believe other measures should have been taken. I noticed someone took video of the entire situation, not once stopping to call for help, on the phone or off. Where were the parents? I never let my children out of my sight when they were toddlers and we were in a public place. Maybe it’s because of my job — and reading many stories of child abductions — but I made sure my children were always safe.

I’m guessing mom or dad or both were buying shoes online or answering an email instead of making sure their son didn’t jump into the moat, creating an overblown online debate. I just hope that not everyone reads this column online. It’ll just prove my point about logic having little place on the Internet.
© The Dodge City Daily Globe

top

USA: Commissioner: 'White Pride World Wide' post ‘not a neo-Nazi thing’

Wade Eisenbeisz said his posting was ‘not a neo-Nazi thing’

3/6/2016- An Edmunds County commissioner who posted a white supremacy symbol on Facebook says he didn’t realize what the symbol represented. Wade Eisenbeisz recently shared a link that included the symbol and the words “White Pride World Wide.” The privacy setting on his Facebook page was public, meaning anyone could see the post. Eisenbeisz tells the American News that he was unaware what the symbol represented. He says he only meant to show that he is proud to be white. He says there’s nothing wrong with being proud of one’s race. The Anti-Defamation League says the symbol posted by Eisenbeisz is used by groups such as neo-Nazis. Eisenbeisz says his posting was “not a neo-Nazi thing.” He has deleted the post.
© The Associated Press

top

Google removes anti-Semitic app used to target Jews online

3/6/2016- Google has removed an app that allowed users to surreptitiously identify Jews online after a tech website brought the tool to widespread media attention and spurred a backlash. Coincidence Detector, an innocuously named Google Chrome browser extension created by a user identified as “altrightmedia,” enclosed names that its algorithm deemed Jewish in triple parentheses. The symbol — called an “(((echo)))” — allows white nationalists and neo-Nazis to more easily aim their anti-Semitic vitriol. The extension was exposed Thursday in an article on the tech website Mic by two reporters who had been targets of anti-Semitic harassment online. Google confirmed that evening that it had removed the extension from the Chrome Web Store, citing violation of its hate speech policy, which forbids “promotions of hate or incitement of violence.”

The Mic reporters traced the triple-parentheses symbol to a right-wing blog called The Right Stuff and its affiliated podcast, The Daily Shoah, starting in 2014. The parentheses are a visual depiction of the echo sound effect the podcast hosts used to announce Jewish names. The echo has now emerged as a weapon in the arsenal of the so-called “alt-right,” an amorphous, primarily online conservative movement that has become more visible and vocal in the midst of Donald Trump’s presidential campaign. “Some use the symbol to mock Jews,” the Mic article explains of the echo. “Others seek to expose supposed Jewish collusion in controlling media or politics. All use it to put a target on their heads.” One neo-Nazi Twitter user provided a succinct explanation to The Atlantic magazine national correspondent Jeffrey Goldberg, who added the parentheses to his Twitter handle to mock the trend.
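Because the echo has a fixed textual shape, it is also straightforward to flag automatically. The snippet below is a hypothetical moderation-side detector, not any platform's actual filter; note that since some Jewish users added the parentheses to their own names as counterspeech, a pattern match alone cannot tell mockery from reclamation.

```python
# Hypothetical detector for the "(((echo)))" symbol: three or more opening
# parentheses, a run of non-parenthesis characters, and a matching close.
import re

ECHO = re.compile(r"\({3,}\s*([^()]+?)\s*\){3,}")

def find_echoes(text):
    """Return the names wrapped in triple (or more) parentheses."""
    return [match.group(1) for match in ECHO.finditer(text)]

print(find_echoes("look at (((Goldberg))) and friends"))  # ['Goldberg']
print(find_echoes("a normal (parenthetical) remark"))     # []
```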

The product description of the now-disappeared Google extension said it would help users identify “who has been involved in certain political movements and media empires.” The use of the term “coincidence” was meant to be ironic. The Coincidence Detector had nearly 2,500 users and a five out of five stars rating. There was a suggestions tab to submit Jewish names to be added to the algorithm. Mic was tipped off to the use of the echo after Jonathan Weisman, an editor at The New York Times, retweeted a Washington Post article called “This is How Fascism Comes to America,” a scathing indictment of Trump. Weisman asked one of his harassers, @CyberTrump, to explain the symbol. “It’s a dog whistle, fool,” the user responded. “Belling the cat for my fellow goyim.” In addition to prompting action by Google, the report drew disbelief and protest across Twitter, with several Jewish users also adding parentheses to their names.

Julia Ioffe, a journalist who became the target of a campaign of anti-Semitic harassment after she wrote a profile of Melania Trump in GQ that Donald Trump supporters didn’t approve of, retweeted the Mic article with bewilderment. The alt-right has joined real-world white supremacists in generally embracing Trump’s candidacy, and the presumptive Republican nominee has been criticized for not doing more to distance himself from such supporters. The Daily Beast reported that Jared Kushner, Trump’s Jewish son-in-law, was among those targeted by the extension. While Coincidence Detector was mostly focused on names, leaving terms like “Jews,” “Jewish” and “Holocaust” unaffected, a notable exception was “Israel,” which Coincidence Detector changed to “(((Our Greatest Ally))).” The extension could be set at various levels of intensity, from 0 to 100 sets of parentheses. Writer Joe Veix dug into the extension’s code and compiled a full list of the 8,771 people targeted by Coincidence Detector.
© JTA News

top

Australia: Racist memes mocking Adam Goodes taken down after AFL demands removal

2/6/2016- Racist internet memes mocking AFL great Adam Goodes have been voluntarily removed from a popular Facebook page, but those responsible maintain they were "just for fun". The AFL had earlier demanded that Facebook take down the posts, which on Wednesday night appeared on a page followed by about 200,000 people. The page's administrators told Fairfax Media they had deleted the images themselves. "I deleted because our page was getting a lot of reports and the best way was to delete them!" an administrator said. "Those posts was just for fun, to make people laugh, that was not racism." A second post was also deleted.

But a new meme appeared about 4.30pm on Thursday, attracting the ire of page followers. "Racism doesn't magically become funny when you repeat it over and over," one man wrote. "So you take down your other two racist posts just to put up another one?" another said. The first meme had been "liked" on Facebook by 5700 people, although most comments were highly critical of the racism. There were 336 people who gave it a laugh-out-loud emoji. The second post had 6800 likes. AFL spokesman Patrick Keane earlier said the league's legal team was in contact with Facebook over the "utterly unacceptable" posts. "We have told our legal team and we are in contact with Facebook to have it removed," Mr Keane said.

Goodes, who retired from his AFL career last year after a campaign of sustained crowd booing at certain grounds, was infamously jeered by a 13-year-old girl at an AFL game in 2013. He pointed the girl out in the crowd and she was ejected. It caused a social storm about racism in sport and in Australia. The girl later apologised to Goodes. Collingwood president Eddie McGuire, days later, apologised after comparing Goodes with King Kong. Mr Keane had said the AFL would use intellectual property rights to try to force Facebook's hand to remove the posts, and possibly the page. "If you are trying to use AFL [intellectual property] in this way it is utterly unacceptable and we will not tolerate it," he said. "As I understand it, copyright gives us the ability to act, but why we want to act is because it is utterly unacceptable," he said. "We are not going to allow people to be vilified." Fairfax Media has chosen not to republish the memes. The creator of the page earlier had responded to the backlash, saying "it's just a joke".
© The Age

top

Anti-Defamation League mobilizes on anti-Semitism, racism against journalists

1/6/2016- To combat the anti-Semitic and racist comments and threats facing journalists on social media these days, the Anti-Defamation League is assembling a task force that is expected to deliver recommendations by summer’s end on this scourge. “We’re seeing a breadth of hostility whose virulence and velocity is new,” said Jonathan A. Greenblatt, the ADL’s chief executive, in a chat with the Erik Wemple Blog. Among the task force participants are Danielle Citron, a University of Maryland law professor and an oft-quoted voice on online harassment, and Steve Coll, dean of the Columbia University Graduate School of Journalism. The group’s mandate is threefold: to determine the “scope and source” of the attacks on social media against journalists and their ilk; to research their impact; and to come up with countermeasures that “can prevent journalists becoming targets for hate speech and harassment on social media in the future.”

Item No. 1 is a very tricky and frustrating matter, as any sentient social media user can attest. Some journalists who have written skeptically of presumptive Republican nominee Donald Trump have been stung by vile anti-Semitic attacks on Twitter. But who are the people sending the tweets? How many people are doing this? Some of them mention Trump in their Twitter IDs — does that mean they’re Trump supporters? Jeffrey Goldberg, a reporter for the Atlantic and a target of the attacks, posed this question on Twitter. When the Erik Wemple Blog passed that question along to Greenblatt, he responded, “I think, honestly, we just don’t know yet.”

One of the task force members is Julia Ioffe, a freelance reporter who found herself targeted by anti-Semitic tweets following a deeply reported GQ profile of Melania Trump. She later filed a police report over death threats that she’d received. The report states that Ioffe claimed that “an unknown person sent her a caricature of a person being shot in the back of the head by another, among other harassing calls and disturbing emails depicting violent scenarios.” In an interview with the Erik Wemple Blog, Ioffe said, “The Trumps have a record of kind of whistling their followers into action.” The ADL is assisting Ioffe with her case.

Anti-Semitic social media trolls are equal-opportunity harassers. Conservative writers such as Ben Shapiro, John Podhoretz and Noah Rothman have all seen the backlash, as have Ioffe, Goldberg, CNN’s Jake Tapper and Jonathan Weisman of the New York Times. It doesn’t take much to provoke, either. In Weisman’s case, he merely tweeted out a Robert Kagan piece in The Post titled “This is how fascism comes to America” — on Trump’s shoulders, that is. Then came the abuse. In announcing its task force, the ADL made no mention of Trump. Greenblatt explains that the ADL is a 501(c)(3) group that neither supports nor rejects politicians. Furthermore, he said, anti-Semitic attacks have arisen both from the right and the left. In the case of the latter, he cites folks who try to “delegitimize the policies of the Israeli govt and oftentimes the speech used can be rather troubling.”

Should the task force approach Trump on this matter, it should brace for the response he gave to CNN’s Wolf Blitzer, who asked about threats against Ioffe. “Oh, I don’t know about that. I don’t know anything about that.”
© The Washington Post - Blog Erik-Wemple

top

Why some free-speech advocates 'stand with hate speech'

The European Commission's announcement Tuesday of a new code of conduct against hate speech has raised concerns about political censorship.

2/6/2016- The European Union's new code of conduct aimed at curbing hate speech has some free speech advocates raising concerns of censorship. Microsoft, Twitter, Facebook, and Google promised on Tuesday to police and remove what the European Commission has deemed a concerning rise in hate speech, but critics are raising ideological, political, and technical objections to the plan. "It seems these companies were given 'an offer they couldn't refuse,' and rather than take a principled stand, they've backed down fearing actual legislation," human rights advocate Jacob Mchangama, the director of Copenhagen-based think tank Justitia, told the Christian Science Monitor's Christina Beck earlier this week. "And of course, how will global tech companies now be able to resist the inevitable demands from authoritarian states that they also remove content that these countries determine to be 'hateful' or 'extremist'?"

The Daily Caller's Scott Greer suggests that government insistence on defining and punishing hate speech threatens the delicate principle of free speech by punishing differences of opinion. "Those who express views in line with the prevailing wind of popular opinion are not the ones who need the comfort of the First Amendment," he wrote in an editorial Thursday. "By instituting hate speech laws, the government declares itself the arbiter of what counts as hate speech, which means they are more likely to go after unwanted opinions." The hashtag #IStandWithHateSpeech became a trending topic on Twitter, as free speech advocates insisted the dangers of censorship exceed those of the hate speech itself. Janice Atkinson, a Member of the European Parliament, told Breitbart London the "Orwellian" policy could be used for political gain as Europe wrestles with difficult immigration problems.

"If an MEP, such as the centre-right Hungarians, the Danish People’s Party, the Finns, the Swedish Democrats, the Austrian FPO, say no to migration quotas because they cannot cope with the cultural and religious requirements of Muslims across the Middle East who are seeking refugee status, is that a hate crime? And what is their punishment?" Ms. Atkinson told Breitbart London. "It's a frightening path to totalitarianism." Others have raised technical concerns, because the companies agree to review and remove hate speech within 24 hours, a process that privatizes the protection of free speech. "The code requires private companies to be the educator of online speech, which shouldn't be their role, and it's also not necessarily the role that they want to play," Estelle Massé, EU policy analyst with the Brussels-based Access Now, told the Monitor.

Daphne Keller, the Stanford Center for Internet and Society's director of intermediary liability, told Buzzfeed that when in doubt of what to remove, these new hate speech police would err on the side of removing controversial – but legal – content for fear of government reprisals. "They take down perfectly legal content out of concern that otherwise they themselves could get in trouble," Ms. Keller told Buzzfeed. "Moving that determination out of the court system, out of the public eye, and into the hands of private companies is pretty much a recipe for legal content getting deleted."
© The Christian Science Monitor

top

Is Combating Online Hate Speech Censorship Or Protection?

31/5/2016- Facebook, Twitter, Google and Microsoft signed a new EU code of conduct agreement to review content flagged as hate speech and remove it within 24 hours of flagging. While all of these companies have long claimed to have zero tolerance for online hate speech, the new code of conduct gives them a time limit of one day within which they must respond to complaints. The response on Facebook and Twitter did not take long to come. Many posters wrote about their apprehension that the new rules will effectively shut down free speech on the Internet. Others kept a more open mind, asking whether this amounts to yet another ineffectual restriction on freedom of speech or a sincere effort to combat online hate speech. Commentator responses ranged from those who believe this constitutes censorship, to those who ask who gets to define hate speech, to those who favor the anti-hate speech code of conduct as long as it is sufficiently well defined.

Targets of Online Hate Speech
An online site, nohatespeechmovement.org, lists the potential targets of online hate speech. These include women, the LGBTQI community, Jews, Muslims, and individuals targeted for cyberbullying by people they know in real life. The FBI publishes an annual report on hate crime statistics. That report concerns actual real-life attacks, but the statistics on offline targets may mirror the targets of online hate speech. The 2014 data are the most recent available.

[Image: FBI hate crime statistics may mirror the targets of online hate speech. Source: fbi.gov/news/stories/2015/november/latest-hate-crime-statistics-available]

Perpetrators of Online Hate Speech
It is difficult to characterize the perpetrators of hate speech on the Internet because of the apparent anonymity afforded by fake profiles. The UNESCO brochure, entitled Countering Online Hate Speech, claims that anonymity is not necessarily easily achieved, since high-level technological knowledge is required to successfully hide a user’s identity; even so, the identities behind anonymous perpetrators can often be uncovered only by legal authorities. This apparent anonymity encourages many people to post hate messages toward the objects of their hate. Middlesex University psychology professors William Jacks and Joanna Adler discuss the effects of anonymity on online hate speech.

"In an online environment, where individuals often perceive themselves as anonymous and insulated from harm, confrontation between those subscribing to differing ideologies was common, especially on open-access sites. Hate postings were often followed by other hate postings expressing a polar opposite extremist view, which only served to increase the ferocity of both arguments and further reduce the validity of either point of view".

They also characterize the perpetrators of online hate speech, dividing them into browsers, commentators, activists and leaders. Browsers are commonly referred to as “lurkers” on social media, those who read but do not interact openly. Commentators actively respond to the posts of others. Jacks and Adler found that 87 percent of online hate speech was perpetrated by this group. Activists engage in real-life hate activities as well as online hate speech. Leaders go even further.
"A Leader will use the Internet to support, organize, and promote his extremist ideology…. They will be at the forefront of developing Websites, storing large amounts of extremist material relating to their ideology, and organizing hate related activities on and offline".
According to Jacks and Adler, perpetrators of online hate speech seek to purposefully insult a given group, to scorn beliefs of others, to rationalize their own beliefs and to support those thinking as they do. The activists and leaders promote offline events, some of which could be classified as hate crimes.

Can the EU Code of Conduct Combat Online Hate Speech?
In the discussion of their study, Jacks and Adler suggest that the EU code is a step in the right direction. They refer to Holocaust expert Deborah Lipstadt to support this idea, as she studiously ignored the Holocaust deniers who tried to spread hate in response to her publications.
"Some would suggest that simply ignoring hate content and pressuring Internet service providers to remove content as soon as possible could be the most effective option… By engaging with those who are purporting hate, no matter how vociferous the debate and ridiculous their views, the fact that the debate is happening at all would cause others to perceive the views as legitimate and allow them to enter mainstream consciousness".

On the other hand, the phenomenon of tailored search results, whereby individuals are presented with materials based upon their online behaviors, may mean that simply ignoring hate speech online would have no effect at all. Alternatively, Jacks and Adler conclude that careful interaction with the haters may eventually bear fruit.
"As search engines ‘learn’ about individuals’ extremist views, they will provide searches that preference hate material, increasing the likelihood of further entrenchment. In order to combat this narrowing of search results and affirmation of beliefs, it may be necessary to safely but actively engage and challenge hatred online… For early intervention, the best hope may be through engaging with users on hate sites, posts, walls, and blogs — although the question remains as to whether an alternative point of view will be able to break into a hate user’s cocooned online experience".

The UNESCO report supports this approach of engaging with hate speech incidents as a means of education toward tolerance. It adds that the social media giants have a major role to play in combating online hate speech.
"Internet intermediaries, on their part, have an interest in maintaining a relative independence and a “clean” image. They have sought to reach this goal by demonstrating their responsiveness to pressures from civil society groups, individuals and governments. The way in which these negotiations have occurred, however, have been so far been ad hoc, and they have not led to the development of collective over-arching principles".

And changing that ad hoc approach into a more systematic and formal method for combating online hate speech is the purpose of the EU code of conduct. Future studies will show whether or not it is effective. In the meantime, Jewish groups have had negative experiences reporting hate content to Facebook. It is questionable whether the new EU code of conduct agreement will help in such instances, because the problem there is more one of defining hate speech than of willingness to remove it. An agreed-upon definition of hate speech, it seems, is the first order of the day.
© The Inquisitr

top

EU tells Facebook and others to stop hate speech -- because terrorism???

Facebook, Twitter, Google, Microsoft agree to pointless EU edict. But Commissioner Vĕra Jourová is ever so proud.
By Richi Jennings


31/5/2016- The European Union has told social platforms such as Facebook to do something about hate speech. And, yes, this is indeed something -- something they're already doing. And does it surprise you to learn that this "code of conduct" is being justified in the name of combating terrorism? In IT Blogwatch, bloggers are ever so glad they won't be subjected to hate speech any longer. Your humble blogwatcher curated these bloggy bits for your entertainment.

What’s the craic? Julia Fioretti and Foo "bar" Yun Chee report—Facebook, Twitter, YouTube, Microsoft back EU hate speech rules:
Facebook, Twitter, Google's YouTube and Microsoft...agreed to an EU code of conduct to tackle online hate speech. [They] will review the majority of valid requests for removal of illegal hate speech in less than 24 hours and remove or disable access to [illegal] content.

They will also...promote "counter-narratives" to hate speech. ... The United States has undertaken similar efforts...focusing on promoting "counter-narratives" to extremist content.


Are you impressed? Alexander J. Martin isn't— EU bureaucrats claim credit for making 'illegal online hate speech' even more illegal:
The European Commission has claimed the credit...despite the companies already following practices demanded by EU bureaucrats. ... Under the code, IT companies will have an obligation to [do what] national laws in the EU already require them to do.

[It's] a particularly difficult area for legislation. ... The European Court of Human Rights has stated that...freedom of expression “is [also] applicable to [words] that offend, shock or disturb.”

Who is responsible for this bureaucratic bungling? Vĕra Jourová is the EU Commissioner for Justice, Consumers and Gender Equality:
The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred. This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected. I welcome the commitment of worldwide IT companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.

Ahhh, I see. Because terrorism! Romain Dillet crunches the background—Facebook, Twitter, YouTube and Microsoft agree to remove hate speech across the EU:
ISIS has been successfully using social media to recruit fighters. [And] the European economic recession has fostered far-right parties, leading to more online antisemitism and xenophobia. [So] now, four tech companies are making a formal pledge at the European level.

[They] will have to find the right balance between freedom of expression and hateful content. ... They’ll have dedicated teams [of] poor employees who will have to review awful things every day.

It’s encouraging to see tech companies working together on a sensitive issue like this.


Good grief. Liam Deacon and Raheem Kassam wax multi-cultural—Pledge To Suppress Loosely Defined ‘Hate Speech’:
[It's] been branded “Orwellian” by Members of the European Parliament, and digital freedom groups have already pulled out of any further discussions...calling the new policy “lamentable”. ... The platforms have also promised to engage in...the re-education of supposedly hateful users.

[Independent] Janice Atkinson MEP [said] “Anyone who has read 1984 sees its very re-enactment live. ... The Commission has been itching to shut down free speech. ... It’s a frightening path to totalitarianism.”

European Digital Rights (EDRi) announced its decision to pull out of future discussions...stating it does not have confidence in the “ill-considered code”.


You have been reading IT Blogwatch by Richi Jennings, who curates the best bloggy bits, finest forums, and weirdest websites… so you don’t have to. Catch the key commentary from around the Web every morning. Hatemail may be directed to @RiCHi or itbw@richi.uk.  Opinions expressed may not represent those of Computerworld. Ask your doctor before reading. Your mileage may vary. E&OE.

Your humble blogwatcher is an independent analyst/consultant, specializing in blogging, email, spam, and other security topics. He was voted 'Most likely to get up first to sing at karaoke' for 14 years in succession.
© Computer World

top

WJC welcomes EU guidelines against hate speech, is skeptical regarding implementation

The World Jewish Congress (WJC) on Tuesday welcomed the signing by leading internet service providers Google/YouTube, Facebook, Twitter and Microsoft of a European Union code of conduct aimed at fighting the proliferation of hate speech on the internet, but voiced skepticism about the commitment of these firms to effectively police their platforms.

31/5/2016- WJC CEO Robert Singer said: “YouTube, Twitter, Facebook and others already have clear guidelines in place aimed at preventing the spread of offensive content, yet they have so far utterly failed to properly implement their own rules.” Singer recently wrote to Google Inc., which owns the world’s largest online video service, YouTube, to complain about the persistent failure of YouTube to delete neo-Nazi songs that glorify the Holocaust or incite to murder from its platform. “Tens of thousands of despicable video clips continue to be made available although their existence has been reported to YouTube and despite the fact that they are in clear violation of the platform’s own guidelines prohibiting racist hate speech. "Nonetheless, YouTube gives the impression that it has been cracking down on such content. Alas, the reality is that so far it hasn't. We expect that real steps are taken by YouTube, as well as other social media platforms, that go beyond well-meaning announcements,” said Singer. The WJC CEO nonetheless praised the European Commission’s code of conduct to combat online racism, terrorism and cyber hate. "This is a timely initiative, and we hope all internet service providers will adhere to the code," said Singer. The guidelines require companies to review the majority of flagged hate speech within 24 hours and remove it, if necessary.
© World Jewish Congress

top

EU Hate Speech Deal Shows Mounting Pressures Over Internet Content Blocking

1/6/2016- An agreement on Tuesday by four major US Internet companies to block illegal hate speech from their services in Europe within 24 hours shows the tight corner the companies find themselves in as they face mounting pressure to monitor and control content. The new European Union "code of conduct on illegal online hate speech" states that Facebook Inc, Google's YouTube, Twitter Inc and Microsoft will review reports of hate speech in less than 24 hours and remove or disable access to the content if necessary. European governments were acting in response to a surge in antisemitic, anti-immigrant and pro-Islamic State commentary on social media. The companies downplayed the significance of the deal, saying it was a simple extension of what they already do. Unlike in the United States, many forms of hate speech, such as pro-Nazi propaganda, are illegal in some or all European countries, and the major Internet companies have the technical ability to block content on a country-by-country basis.

But people familiar with the complicated world of Internet content filtering say the EU agreement is part of a broad and worrisome trend toward more government restrictions. "Other countries will look at this and say, 'This looks like a good idea, let's see what leverage I have to get similar agreements,'" said Daphne Keller, former associate general counsel at Google and director of intermediary liability at the Stanford Center for Internet and Society. "Anybody with an interest in getting certain types of content removed is going to find this interesting."

Policing content
The EU deal effectively requires the Internet companies to be the arbiters of what type of speech is legal in each country. It also threatens to complicate the distinction between what is actually illegal, and what is simply not allowed by the companies' terms of service - a far broader category. "The commission's solution is to ask the companies to do the jobs of the authorities," said Estelle Masse, policy lead in Europe for Access Now, a digital rights advocacy group that did not endorse the final EU agreement. Masse said that once companies agree to take quick action on any content that is reported to them, they will inevitably review it not only for legal violations but also terms of service violations. "The code of conduct puts terms of service above national law," she said.

The agreement also expands the role of civil society organizations such as SOS Racisme in France and the Community Security Trust in the UK in reporting hate speech. While governments can make formal legal requests to the companies for removal of illegal content, a more common mechanism is to use the reporting tools that the services provide for anyone to "flag" content for review. None of the companies would provide any detail on how many such organizations they work with or who they are. Facebook and Google both said in statements to Reuters that they already review the vast majority of reported content within 24 hours. "This is a commitment to improve enforcement on our policies," said a Facebook representative. Facebook reviews millions of pieces of reported content each week, according to Monika Bickert, the company's head of global policy, and has multilingual teams of reviewers around the world.

'Dangerous precedent'
Yet free speech advocates expressed concern that the EU code of conduct would pressure companies to overcomply and remove lawful content out of an abundance of caution. "This is a dangerous precedent, as any wider discussion between the EU and international human rights groups would have revealed," said Danny O'Brien, international director of the Electronic Frontier Foundation. "It does not address that different speech is deemed illegal in different jurisdictions," he said. The hashtag #istandwithhatespeech was trending on Twitter Monday afternoon as rights advocates objected to the EU deal.

The hate speech agreement raises some of the same issues as a European court ruling that gives EU residents the right to demand that links about them be removed from Google and other search engines, Internet activists say. The so-called right to be forgotten requires Google to review removal requests and determine which ones qualify because they contain "excessive" or "irrelevant" information. According to Google's transparency report, the company has reviewed 1,522,636 Internet addresses, or URLs, since the ruling took effect in 2014. It removed the links in 43 percent of the cases.
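In absolute terms, that removal rate works out to roughly 0.43 × 1,522,636 ≈ 655,000 delisted URLs since 2014.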
© Reuters

top

European Commission and IT Companies announce Code of Conduct

The Commission together with Facebook, Twitter, YouTube and Microsoft (“the IT companies”) today unveil a code of conduct that includes a series of commitments to combat the spread of illegal hate speech online in Europe.

31/5/2016- The IT Companies support the European Commission and EU Member States in the effort to respond to the challenge of ensuring that online platforms do not offer opportunities for illegal online hate speech to spread virally. They share, together with other platforms and social media companies, a collective responsibility and pride in promoting and facilitating freedom of expression throughout the online world. However, the Commission and the IT Companies recognise that the spread of illegal hate speech online not only negatively affects the groups or individuals that it targets, it also negatively impacts those who speak out for freedom, tolerance and non-discrimination in our open societies and has a chilling effect on the democratic discourse on online platforms.

In order to prevent the spread of illegal hate speech, it is essential to ensure that relevant national laws transposing the Council Framework Decision on combating racism and xenophobia are fully enforced by Member States in the online as well as the offline environment. While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously reviewed by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame. To be considered valid in this respect, a notification should not be insufficiently precise or inadequately substantiated.

Vĕra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, said, "The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred. This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected. I welcome the commitment of worldwide IT companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary."

Twitter’s Head of Public Policy for Europe, Karen White, commented: “Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society. We remain committed to letting the Tweets flow. However, there is a clear distinction between freedom of expression and conduct that incites violence and hate. In tandem with actioning hateful conduct that breaches Twitter’s Rules, we also leverage the platform’s incredible capabilities to empower positive voices, to challenge prejudice and to tackle the deeper root causes of intolerance. We look forward to further constructive dialogue between the European Commission, member states, our partners in civil society and our peers in the technology sector on this issue.”

Google’s Public Policy and Government Relations Director, Lie Junius, said: “We’re committed to giving people access to information through our services, but we have always prohibited illegal hate speech on our platforms. We have efficient systems to review valid notifications in less than 24 hours and to remove illegal content. We are pleased to work with the Commission to develop co- and self-regulatory approaches to fighting hate speech online."

Monika Bickert, Head of Global Policy Management at Facebook said: "We welcome today’s announcement and the chance to continue our work with the Commission and wider tech industry to fight hate speech. With a global community of 1.6 billion people we work hard to balance giving people the power to express themselves whilst ensuring we provide a respectful environment. As we make clear in our Community Standards, there’s no place for hate speech on Facebook. We urge people to use our reporting tools if they find content that they believe violates our standards so we can investigate. Our teams around the world review these reports around the clock and take swift action.”

John Frank, Vice President EU Government Affairs at Microsoft, added: “We value civility and free expression, and so our terms of use prohibit advocating violence and hate speech on Microsoft-hosted consumer services. We recently announced additional steps to specifically prohibit the posting of terrorist content. We will continue to offer our users a way to notify us when they think that our policy is being breached. Joining the Code of Conduct reconfirms our commitment to this important issue."

By signing this code of conduct, the IT companies commit to continuing their efforts to tackle illegal hate speech online. This will include the continued development of internal procedures and staff training to guarantee that they review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary. The IT companies will also endeavour to strengthen their ongoing partnerships with civil society organisations who will help flag content that promotes incitement to violence and hateful conduct. The IT companies and the European Commission also aim to continue their work in identifying and promoting independent counter-narratives, new ideas and initiatives, and supporting educational programs that encourage critical thinking.

The IT Companies also underline that the present code of conduct is aimed at guiding their own activities as well as sharing best practices with other internet companies, platforms and social media operators.

The IT Companies, taking the lead on countering the spread of illegal hate speech online, have agreed with the European Commission on a code of conduct setting out the following public commitments:

The IT Companies to have in place clear and effective processes to review notifications regarding illegal hate speech on their services so they can remove or disable access to such content.

The IT Companies to have in place Rules or Community Guidelines clarifying that they prohibit the promotion of incitement to violence and hateful conduct.

Upon receipt of a valid removal notification, the IT Companies to review such requests against their rules and community guidelines and, where necessary, national laws transposing the Framework Decision 2008/913/JHA, with dedicated teams reviewing requests.

The IT Companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.

In addition to the above, the IT Companies to educate and raise awareness with their users about the types of content not permitted under their rules and community guidelines. The notification system could serve as a tool to do this.

The IT companies to provide information on the procedures for submitting notices, with a view to improving the speed and effectiveness of communication between the Member State authorities and the IT Companies, in particular on notifications and on disabling access to or removal of illegal hate speech online. The information is to be channelled through the national contact points designated by the IT companies and the Member States respectively. This would also enable Member States, and in particular their law enforcement agencies, to further familiarise themselves with the methods to recognise and notify the companies of illegal hate speech online.

The IT Companies to encourage the provision of notices and flagging of content that promotes incitement to violence and hateful conduct at scale by experts, particularly via partnerships with CSOs, by providing clear information on individual company Rules and Community Guidelines and rules on the reporting and notification processes. The IT Companies to endeavour to strengthen partnerships with CSOs by widening the geographical spread of such partnerships and, where appropriate, to provide support and training to enable CSO partners to fulfil the role of a "trusted reporter" or equivalent, with due respect to the need of maintaining their independence and credibility.

The IT Companies rely on support from Member States and the European Commission to ensure access to a representative network of CSO partners and "trusted reporters" in all Member States to help provide high quality notices. IT Companies to make information about "trusted reporters" available on their websites.

The IT Companies to provide regular training to their staff on current societal developments and to exchange views on the potential for further improvement.

The IT Companies to intensify cooperation between themselves and other platforms and social media companies to enhance best practice sharing.

The IT Companies and the European Commission, recognising the value of independent counter speech against hateful rhetoric and prejudice, aim to continue their work in identifying and promoting independent counter-narratives, new ideas and initiatives and supporting educational programs that encourage critical thinking.

The IT Companies to intensify their work with CSOs to deliver best practice training on countering hateful rhetoric and prejudice and increase the scale of their proactive outreach to CSOs to help them deliver effective counter speech campaigns. The European Commission, in cooperation with Member States, to contribute to this endeavour by taking steps to map CSOs' specific needs and demands in this respect.

The European Commission in coordination with Member States to promote the adherence to the commitments set out in this code of conduct also to other relevant platforms and social media companies.

The IT Companies and the European Commission agree to assess the public commitments in this code of conduct on a regular basis, including their impact. They also agree to further discuss how to promote transparency and encourage counter and alternative narratives. To this end, regular meetings will take place and a preliminary assessment will be reported to the High Level Group on Combating Racism, Xenophobia and all forms of intolerance by the end of 2016.

Background
The Commission has been working with social media companies to ensure that hate speech is tackled online similarly to other media channels. The e-Commerce Directive (article 14) has led to the development of take-down procedures, but does not regulate them in detail. A “notice-and-action” procedure begins when someone notifies a hosting service provider – for instance a social network, an e-commerce platform or a company that hosts websites – about illegal content on the internet (for example, racist content, child abuse content or spam) and is concluded when a hosting service provider acts against the illegal content. Following the EU Colloquium on Fundamental Rights in October 2015 on ‘Tolerance and respect: preventing and combating Antisemitic and anti-Muslim hatred in Europe’, the Commission initiated a dialogue with IT companies, in cooperation with Member States and civil society, to see how best to tackle illegal online hate speech which spreads violence and hate.
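
For readers who think in code, the notice-and-action lifecycle can be sketched in a few lines of Python. This is a minimal illustration, not any company's real tooling: the names (Notice, review_notice, REVIEW_TARGET) are invented for the example, while the 24-hour target and the order of review (company rules and guidelines first, then national laws transposing the Framework Decision) follow the commitments quoted above.

import datetime as dt
from dataclasses import dataclass

# The code of conduct's 24-hour review target (a commitment, not a statutory deadline).
REVIEW_TARGET = dt.timedelta(hours=24)

@dataclass
class Notice:
    content_id: str
    reporter: str             # an ordinary user or a "trusted reporter" CSO
    received_at: dt.datetime  # assumed timezone-aware (UTC)
    status: str = "received"  # received -> removed / kept

def review_notice(notice: Notice, violates_rules: bool, violates_national_law: bool):
    """Review a notice against company rules first, then national law, and
    record whether the review met the 24-hour target."""
    if violates_rules or violates_national_law:
        notice.status = "removed"  # remove or disable access to the content
    else:
        notice.status = "kept"
    on_time = dt.datetime.now(dt.timezone.utc) - notice.received_at <= REVIEW_TARGET
    return notice.status, on_time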

The recent terror attacks and the use of social media by terrorist groups to radicalise young people have given more urgency to tackling this issue. In December 2015 the Commission launched the EU Internet Forum to protect the public from the spread of terrorist material and terrorist exploitation of communication channels to facilitate and direct their activities. The Joint Statement of the extraordinary Justice and Home Affairs Council following the Brussels terrorist attacks underlined the need to step up work in this field and also to agree on a Code of Conduct on hate speech online.

The Framework Decision on Combating Racism and Xenophobia criminalises the public incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin. This is the legal basis for defining illegal online content. Freedom of expression is a core European value which must be preserved. The European Court of Human Rights has set out the important distinction between content that "offends, shocks or disturbs the State or any sector of the population" and content that contains genuine and serious incitement to violence and hatred. The Court has made clear that States may sanction or prevent the latter.
© The European Commission

top

Tech giants agree to EU rules on online hate speech

31/5/2016- Tech companies Facebook, Twitter, Microsoft and Google, owner of video service YouTube, agreed Tuesday to new rules from the European Union on how they manage hate speech infiltrating their networks. The rules push companies to review requests to remove illegal online hate speech within 24 hours and respond accordingly, as well as raise awareness among users on what content is appropriate for their services. In a joint statement from the European Commission and the companies involved, both sides say they recognize the "collective responsibility" to keep online spaces open for users to freely share their opinions.

"This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected," said Vĕra Jourová, the EU's EU Commissioner for Justice, Consumers and Gender Equality, in a statement. In March, 32 people were killed in bombings at an airport and subway station in Brussels. The attacks and recent efforts by terrorist groups to recruit new members through social media including Facebook and YouTube prompted the new rule changes. "The recent terror attacks have reminded us of the urgent need to address illegal online hate speech," said Jourová. "Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racists use to spread violence and hatred."

Some U.S. privacy rights groups expressed concern that the agreement sets a dangerous precedent because the removals will be based on flagging by third parties.
“It does not address that different speech is deemed illegal in different jurisdictions, nor how such 'voluntary agreements' between the private sector and state might be imitated or misused outside Europe,” said Danny O’Brien, international director of the San Francisco-based Electronic Frontier Foundation, an online civic rights group.

However, some U.S. groups concerned about cyber hate hailed the agreement. Rabbi Abraham Cooper, head of the Simon Wiesenthal Center’s Digital Terrorism and Hate Project, called it a significant step in efforts to stop terrorists and extremists from leveraging the power of social media platforms.
He called upon U.S. companies to help in the effort. For example, he said a posting repeating the ancient slur that Jews use the blood of Christian children for ritual purposes would be removed in Germany, but remain untouched if posted through a U.S. server. “Hate is hate and if a social media company would remove such postings from its online pages in Germany, it should do the same globally,” he said.
© USA Today

top

Are tech firms neutral platforms or combatants in a propaganda war?

31/5/2016- Facebook founder and Chief Executive Mark Zuckerberg pledged two weeks ago to keep his company neutral when it comes to political discussions at home. On Tuesday, he promised the European Union he’d promote propaganda at the behest of Western governments. So how does neutrality allow for activism?

Facebook and other tech companies say they don’t want to house content that incites the sort of violence and hate that leads to terrorism. The social network along with Twitter, YouTube and Microsoft reached an agreement with the European Union to take down offensive speech within 24 hours. In addition, the companies said their platforms would “encourage counter and alternative narratives” to the inflammatory content promoted by extremist groups. But with that promise, analysts say tech firms risk blurring the lines for free speech and bolstering government influence on services that have billed themselves as neutral. How exactly Facebook and the other tech companies plan to promote content that undermines terrorist groups is unclear. Also unclear is what such content looks like. The companies did not respond to a request for interviews.

The agreement poses a potential conflict for the tech firms -- especially Facebook, which is facing questions about whether it’s a neutral platform or an ideologically driven media company. Conservatives last month accused Facebook’s team of news writers of suppressing their viewpoints. Of course, taking a nonpartisan stance on U.S. politics isn’t the same as ignoring the threat of hate and terror on social media. But it’s unlikely the new counterterrorism initiative will do much to quell neutrality concerns as it “smacks of promoting one kind of thought over another,” said Jan Dawson, an analyst at Jackdaw Research. “It's quite another thing to actively promote counter-programming,” he said. “That could run the risk of stoking fears that Facebook and Twitter in particular have particular policy agendas which they will use their platforms to promote. Both companies will have to be very careful to avoid being seen as partisan or favoring one set of acceptable speech over another.”

Silicon Valley has been under growing pressure from authorities worldwide to police its platforms, especially given how terrorist organizations such as Islamic State rely on social networks to recruit. The U.S. government has insisted at several meetings over the last year, including with Apple Chief Executive Tim Cook and other industry luminaries, that it needs the tech industry’s help to digitally spar with terrorist organizations that have grown their ranks through social media. With groups such as Islamic State, governments and social networks face a formidable foe for attention online. With the promise of martyrdom and glory, more than 30,000 foreign fighters have been lured to fight for the militant group. Though wary of associating too closely with governments, the tech industry has budged. Facebook is sharing data with activists and nonprofit groups about what shape counter-speech should take to give it the best chance of going viral.

But little is known about whether counter-speech or counter-narratives work effectively online -- largely because questions persist about who to target and how. “A lot of people in the U.S. think a good solution to bad speech is more good speech…. We don’t have much evidence or data to support that idea,” said Susan Benesch of the Berkman Center for Internet & Society at Harvard, who founded the Dangerous Speech Project, which aims to combat inflammatory speech while preserving freedom of expression. The challenge for tech companies, Benesch said, is determining where the line is for offensive material. Could a news report on U.S. drone policy, for example, be used as a terrorist recruiting tool? If so, should it be downplayed on social networks? “There’s content, like an academic article, that isn’t produced with hateful intent, but may have the same negative impact as hate speech,” Benesch said. The EU generally does not protect free speech the same way the U.S. does, but advocates of Internet freedom say the deal could lead to abuse in other countries.

Danny O'Brien, international director of the Electronic Frontier Foundation, said he was “deeply disappointed” with the agreement, which mends some of the troubles U.S. tech firms have faced for years in Europe over privacy concerns and protectionism. The EU has “rubber stamped the widespread removal of allegedly illegal content, based only on flagging by third parties,” O’Brien said. “It does not address that different speech is deemed illegal in different jurisdictions, nor how such 'voluntary agreements' between the private sector and state might be imitated or misused outside Europe.” EU officials said security threats necessitated Tuesday’s agreement.

"The recent terror attacks have reminded us of the urgent need to address illegal online hate speech,” Vera Jourova, the EU commissioner responsible for justice, consumers and gender equality, said in a prepared statement. “Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racists use to spread violence and hatred. This agreement is an important step forward to ensure that the Internet remains a place of free and democratic expression, where European values and laws are respected.”

The tech companies say they can balance the policing of hate speech with freedom of speech. “We remain committed to letting the tweets flow,” Karen White, Twitter's European head of public policy, said in a prepared statement. “However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.” Facebook remains encouraged by the possibility that speech countering extremist groups can weaken the pull of terrorists. Removing content “is not the way that we fix this problem,” Facebook's head of global policy management, Monika Bickert, said during an address at the Washington Institute for Near East Policy that was shared on YouTube. Getting people to challenge terrorists’ messages is “actually accomplished through more speech -- speech that encourages people to actually take a hard look at these groups and what they stand for and question it.”

Earlier this year, Facebook Chief Operating Officer Sheryl Sandberg said sharing stories about people who defected from Islamic State after being lured to the Middle East would be the “best thing to speak against recruitment by ISIS.” The company also has backed an Obama administration program called Peer 2 Peer, encouraging millennials to come up with anti-Islamic State messages. One of the organizers of the program, an education consulting firm called Edventure Partners, said Peer 2 Peer has helped troubled youths across the world while accumulating tens of thousands of likes and followers online. “We believe youth is able to better reach and impact their peers than traditional approaches,” said Tony Sgro, chief executive of Edventure Partners. “Government and adults have proven they can’t do it and are ineffective in creating alternative and counter narratives. Who better to push back on extremism than the very same audience extremists want to recruit?”
© The Los Angeles Times

top

Netherlands: TV presenter takes action over ‘surge of racist comments’

31/5/2016- Television presenter turned political hopeful Sylvana Simons said on Tuesday she would make a formal police complaint about the ‘surge of racist, sexist and discriminatory reactions’ her decision to go into politics had generated. Making a complaint would make clear that ‘demonstrable injustice should never go unpunished’, political party Denk, which Simons joined earlier this month, said in a statement. ‘It is time that the social discussion about racism took place in the political arena,’ Simons told reporters. ‘Combating injustice begins with registering it.’ The public prosecution department said last week it was looking into the comments directed at Simons to see if they were punishable by law. After going public with her decision to join Denk, Simons was dismissed on social media as a ‘Netherlander hater’, a ‘whinging negro’ and an ‘Erdogan helpmate’. One PVV supporter also launched a Facebook campaign to have her ‘waved out’ of the Netherlands on December 6, alongside Sinterklaas.
© The Dutch News

top

Russian Manhandled Over Social Media Comment

Police brutally arrest man for alleged extremist online activity.

20/5/2016- Federal police in St. Petersburg have accused a man of inciting hatred and enmity following a comment he made on social media. Officers raided his house and pinned him to the ground with his hands tied behind his back, Meduza reports. The arrested individual, 36-year-old Artem Chebotarev, is co-founder of a community on Russia’s popular social network VKontakte called “Free Ingria,” named after a geographical area that partly covers the north-west of Russia. The social media group connects users who believe that the St. Petersburg region should declare its independence from Russia, Radio Free Europe says. The investigators didn’t specify the contents of the social media comment but said Chebotarev was posting statements against Moscow and its inhabitants, Russia’s business daily Kommersant reported. Federal police officials added that their tough approach was justified by information they had received that the man had weapons in his house. Chebotarev was later released, Russia’s Novaya Gazeta says.

# Russia toughened punishment for separatist ideas in 2014, after its annexation of the Crimean peninsula from Ukraine. The new legislation put stricter regulations on online media, as well as increased the prison terms for "public calls for actions violating the territorial integrity of the Russian Federation," The Guardian says.
# In late April, a man known as one of the “founding fathers” of the Russian Internet, Anton Nossik, was charged with extremism for a 2015 blog post about Syria. A prominent blogger, Nossik has been accused of hate speech under the criminal code and may face up to four years in prison.
# Russia’s top investigator Alexander Bastrykin recently proposed changes in legislation that regulates the Internet, which would be based on the Chinese model. The suggested measures included restrictions on foreign ownership of Internet sites and a stricter definition of what constitutes extremism in relation to Crimea, among other things.
Compiled by Evgeny Deulin
© Transitions Online.

top

UK: Church Minister investigated over far-right and Islamophobic posts

Father David Lloyd, of the Newcastle parish in Bridgend, has since deleted his social media accounts.

30/5/2016- A minister is being investigated by the Church in Wales after posting on social media in support of far-right and Islamophobic groups. The Reverend Father David Lloyd, from Bridgend, posted on his Facebook page about “idiots” who dismissed the anti-Islam movement Pegida or Britain First video posts. The post read: “Those idiots who dismiss Pegida (Patriotic Europeans against the Islamisation of the West) or Britain First video posts, out of hand, should grow up, overcome their prejudices and WATCH the content before judging. “You might discover these groups are working hard for YOUR freedom and YOUR children’s future while you stand idly by.” Father Lloyd, who represents the Newcastle Parish (central Bridgend) in the Diocese of Llandaff, shared Facebook posts from groups such as Islam Exposed, and told his followers controversial figure Tommy Robinson of Pegida should be “applauded and supported”.

He also posted about comedian Lenny Henry, who he claimed was “never happy”. The Reverend added: “BBC is ‘too white’ for him now. He wanted to get rid of the Minstrels Show. Knight ’em and they go political.” The posts drew criticism online, with anti-hate group IRBF calling for his resignation. Father Lloyd has since deleted his social media accounts, but the IRBF screen-grabbed the posts and shared them from their own Twitter accounts. Before deleting his accounts, Father Lloyd posted on Facebook: “Due to abusive phone calls to my wife, me and now my superiors at The Church in Wales, I will no longer be posting as an individual. My parish page will still run. “Thanks for all the fun you’ve shared with me and for helping me through the dark, painful and sleep deprived times. “From an alleged ‘racist and Islamophobe’ and your friend David.”

A spokeswoman from the Church in Wales said Father Lloyd had “apologised” for the messages. She added: “The Revd David Lloyd’s views expressed in his tweets were his personal ones and not those of the Church in Wales. “He has apologised for any offence they caused and has closed down his social media accounts.” Father Lloyd was approached for comment but has not responded.
© Wales Online

top

Headlines May 2016

Are EU having a laugh? Europe passes hopeless cyber-commerce rules

When compromise becomes why bother at all

27/5/2016- The European Commission (EC) has approved a series of ecommerce rules designed to make Europe more competitive online. In true European fashion, however, the proposals contain a lengthy series of inconsistent compromises and avoid altogether the most complex policy issues, making them largely worthless. Vice-President for the Digital Single Market, Andrus Ansip, said of the measures: "All too often people are blocked from accessing the best offers when shopping online, or decide not to buy cross-border because the delivery prices are too high, or they are worried about how to claim their rights if something goes wrong. "We want to solve the problems that are preventing consumers and businesses from fully enjoying the opportunities of buying and selling products and services online." Except the rules don't do that. While companies in Europe will be obliged to sell to anyone else in the European Union, they won't have to ship goods there.

So a consumer in, say, Poland can now buy goods from, say, Spain. But if that Spanish company doesn't want to ship them, it can inform its Polish customer that they need to travel to Spain to pick them up. In another sign of the hopeless EC bureaucracy mindset, there won't be rules around shipping rates across Europe (which are notoriously inconsistent), but it will spend a lot of money creating a website that will attempt to list all those rates. "The Regulation will give national postal regulators the data they need to monitor cross-border markets and check the affordability and cost-orientation of prices," the EC announced. "It will also encourage competition by requiring transparent and non-discriminatory third-party access to cross-border parcel delivery services and infrastructure. The Commission will publish publicly listed prices of universal service providers to increase peer competition and tariff transparency." It will most likely be a gigantic waste of everyone's time and become just one more service that the EC offers at great expense but which no one uses.

That's digital economy
Worst, however, is the fact that the Commission has exempted digital goods from its digital single market, so companies will be able to continue to geo-block videos and other digital files. The proposals have attracted some attention – particularly outside Europe – over the plan to treat the internet in the same way as cable television and seek to require content companies like Netflix to make sure 20 per cent of their programming comes from Europe. A Netflix spokesman responded by saying that over 20 per cent of what the company offers already comes from Europe, but questioned whether a requirement for content providers to purchase the rights to content from a specific geographic area was really going to help the European film and TV industries thrive.

As to the critical issue of "internet platforms" that offer telecommunications – such as Skype or WhatsApp – and what rules should apply to them, the Commission simply punted the issue into the long grass, ensuring that future efforts to put rules in place will be even more difficult. While failing to come up with answers to the kinds of policy questions that the EC exists to produce, it did manage to draw up new rules for others to interpret and enact, in particular a vague "code of conduct" aimed at dealing with hate speech online that companies will have to figure out how to make work, while the EC watches over their shoulders tutting.
© The Register

top

UK: Yvette Cooper leads campaign to ‘reclaim the internet’ from sexist trolls

Labour’s Yvette Cooper is leading a cross-party campaign to tackle online misogyny.

26/5/2016- The former Labour leadership candidate today launched a campaign to ‘Reclaim the Internet’, fighting back against the online abuse that women face every day online. Cooper launched the campaign alongside the Tory equalities select committee chair Maria Miller, former Lib Dem equalities minister Jo Swinson, and Labour’s Jess Phillips. Think-tank Demos released an analysis of social media misogyny, tracking the use of the words “slut” and “whore” by Twitter users in the UK. It found that more than 6,500 individuals were targeted in the UK, with more than 10,000 tweets sent.
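
The article does not describe Demos's methodology in detail, so the following is only a minimal sketch, in Python, of the kind of keyword tally such a study involves. The input format (one JSON object per line, with assumed "text" and "target_user" fields) and the function name tally_abuse are illustrative inventions, not the study's actual pipeline.

import json

TARGET_WORDS = ("slut", "whore")  # the two terms tracked in the Demos analysis

def tally_abuse(path):
    """Count tweets containing the target words, plus distinct targeted users,
    from a local dump with one JSON-encoded tweet per line (assumed format)."""
    tweet_count, targeted_users = 0, set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            tweet = json.loads(line)
            text = tweet.get("text", "").lower()
            if any(word in text for word in TARGET_WORDS):
                tweet_count += 1
                if tweet.get("target_user"):
                    targeted_users.add(tweet["target_user"])
    return tweet_count, len(targeted_users)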

Ms Cooper said: “Forty years ago women took to the streets to challenge attitudes and demand action against harassment on the streets. “Today the internet is our streets and public spaces. “Yet for some people online harassment, bullying, misogyny, racism or homophobia can end up poisoning the internet and stopping them from speaking out. “We have responsibilities as online citizens to make sure the internet is a safe space. Challenging online abuse can’t be done by any organisation alone … This needs everyone.” The campaign seeks to engage with officials from Facebook and Twitter to develop new methods of dealing with abuse, while an online forum aims to gather submissions from the public.

Demos Researcher Alex Krasodomski-Jones said: “This study provides a bird’s-eye snapshot of what is ultimately a very personal and often traumatic experience for women. “While we have focused on Twitter, who are considerably more generous in sharing their data with researchers like us, it’s important to note that misogyny is prevalent across all social media, and we must make sure that the other big tech companies are also involved in discussions around education and developing solutions.”
© The Pink News

top

German Pegida row over non-white photos on Kinder bars

Members of the anti-Islam protest group Pegida in Germany have complained about images of non-white children on Kinder chocolate bar packets.

24/5/2016- A Pegida Facebook page in Baden-Wuerttemberg asked: "Is this a joke?" But after being told the photos were childhood photos of Germany's footballers being used in Euro-2016-linked marketing, they admitted they had "dived into a wasps' nest". Kinder said it would not tolerate "xenophobia or discrimination". A photograph of two chocolate bars was circulated by the person behind the Bodensee Facebook group of Pegida (Patriotic Europeans Against the Islamisation of the West). For decades, Kinder packaging has featured a blonde-haired, blue-eyed boy. But in a marketing campaign ahead of the Euro 2016 football tournament, Kinder has started to use photographs of the German team's players when they were children.

'Is this a joke?'
The two that the Pegida group complained about were Ilkay Guendogan and Jerome Boateng, both German nationals who play in the Bundesliga as well as the national team. Seemingly without realising this, the group's admin wrote: "They'll stop at nothing. Can you really buy these? Or is it a joke?" One commenter responded: "Do the Turks and other countries use pictures of German children on their sweets or groceries? Surely not." Soon the comments filled with explanations of the marketing campaign, and a backlash against the Pegida group. One person wrote: "Close the borders and have no exports, no migration! Then you'll get unemployment and local league football." Another wrote: "If one of those men scores a goal he'll be celebrated." The negative reaction forced the original poster to write that it was "best not to respond" and that they had "really dived into a wasps' nest." After being alerted to the ongoing discussion on Facebook, Kinder's manufacturers Ferrero wrote: "We would like to explicitly distance ourselves from every kind of xenophobia and discrimination. We do not accept or tolerate these in our Facebook communities either."
© BBC News

top

Japan: Diet passes first law to curb hate speech

24/5/2016- Japan’s first anti-hate speech law passed the Diet on Tuesday, marking a step forward in the nation’s long-stalled efforts to curb racial discrimination. But the legislation has been dogged by skepticism, with critics slamming it as philosophical at best and toothless window dressing at worst. The ruling coalition-backed law seeks to eliminate hate speech, which exploded onto the scene around 2013 amid Japan’s deteriorating relationship with South Korea. It is the first such law in a country that has long failed to tackle the issue of racism despite being a party to the U.N. International Convention on the Elimination of All Forms of Racial Discrimination. Critics, however, have decried the legislation as ineffective. While it condemns unjustly discriminatory language as “unforgivable,” it doesn’t legally ban hate speech and sets no penalty.
How effective the law will be in helping prevent the rallies frequently organized by ultraconservative groups calling for the banishment or even massacre of ethnic Korean residents remains to be seen. Critics including the Japan Lawyers Network for Refugees have also pointed out the law is only intended to cover people of overseas origin and their descendants “who live legally in Japan.” The law’s mention of legality, they say, will exclude many foreign residents without valid visas, such as asylum seekers and overstayers. Submitted by lawmakers from the Liberal Democratic Party and Komeito, the bill initially limited its definition of hate speech to threats to bodies, lives and freedom of non-Japanese as well as other incendiary language aimed at excluding them. But at the urging of the Democratic Party, the scope of the legislation was expanded to cover “egregious insults” against foreign residents.

The law defines the responsibility of the state and municipalities in taking measures against hate speech, such as setting up consultation systems and better educating the public on the need to eradicate such language. The Justice Ministry’s first comprehensive probe into hate speech found in March that demonstrations organized by the anti-Korean activist group Zaitokukai and other conservative organizations still occur on a regular basis, although not all involve invectives against ethnic minorities. A total of 347 such rallies took place in 2013, while 378 were held in 2014 and 190 from January through September last year, the Justice Ministry said.
© The Japan Times

top

The 5 things you say when you're a racist

by Brianna Cox

24/5/2016- As someone who has been writing on the internet for a few years now, I know that trolls come with the experience. But perhaps the most mind-boggling part of this is when you explicitly spell out racist or otherwise overtly offensive things and explain why they are racist or otherwise overtly offensive, people stampede to the comment thread to literally prove the article's point. And the thing is... they always respond with the same old tired arguments. Always.

Many Americans just do not know that much about racism and its systemic nature. So when there's a discussion, they get defensive at the very least, and cruel at the very most. It's not surprising that the responses follow the same pattern when studies have shown that white Americans think "reverse racism" (which is not a thing) is a bigger problem than anti-black racism, despite virtually no peer reviewed evidence to support this. Or, perhaps worse even, many take their uninformed opinion and preach it forward to the next generation, so that their children also do not understand racism (or "see color"). But that doesn't mean it's OK to respond to someone who says "you're racist" or "this is racism" with an attack. So allow me to break down (once again) exactly why these arguments are full of it:

The First Amendment argument
Writing an article or calling out racism/homophobia/xenophobia/transantagonism, etc., is not oppressing free speech in any way, shape or form. Ironically enough, the First Amendment’s existence allows us to shout from the rooftops our displeasure at the awful shit that bigots have to say. Additionally, freedom of speech does not and has never equated to freedom from consequences; there are many instances in which free speech is already regulated in our society (in the public and private sectors). Try again.
The 'You’re the Real Racist' argument (alt: the Obamas have divided the country) (alt: stop making it about race)
When many of us speak about racism, we are speaking about the institutional and systemic way in which nonwhite people in America have openly and covertly been kept from the opportunities of their white counterparts. So in that framework, nonwhite people cannot oppress white people. Even if we could, talking about systemic inequality and the microaggressions and words and actions that perpetuate it is not oppression in any way. Additionally, the Obamas barely talk about race (I wish they did more), so it seems that what divides the country regarding the Obamas is their very existence as being black in the White House.

The 'If You Stop Talking About Racism, It Will Go Away' argument
When is the last time covering literal feces up with a paper towel made it go away?

The 'Your Objectivity Is Clouded By Prejudice' argument
Because apparently only white men/people are capable of being objective, rather than being influenced by their place in society and experiences because of that place.

The 'You People Are So Easily Offended' argument
I see people angry at “social justice warriors” and people of color speaking out against racism, saying that those of us who do are just overly sensitive — and yet some of those very same folks will say that Star Wars’ casting is white genocide, and that Old Navy hates white babies because they have an ad with an interracial couple. See also: anger and refusal to understand anything about racism or the meme utterance of “white privilege."

The ad hominem attack
Calling a writer ugly, her interracial marriage “gross,” drawing Michelle Obama as a man and creeping on a stranger's Facebook profile to poke fun at their weight are personal attacks that do not at all engage with the actual arguments. That's being both defensive and cruel, and demonstrating you do not have an actual argument to fall back on.
© She Knows

top

Too fat for Facebook: photo banned for depicting body in 'undesirable manner'

Facebook has apologized for wrongly banning a photo of plus-sized model Tess Holliday for violating its ‘health and fitness’ advertising policy

23/5/2016- Facebook has apologized for banning a photo of a plus-sized model and telling the feminist group that posted the image that it depicts “body parts in an undesirable manner”. Cherchez la Femme, an Australian group that hosts popular culture talkshows with “an unapologetically feminist angle”, said Facebook rejected an advert featuring Tess Holliday, a plus-sized model wearing a bikini, telling the group it violated the company’s “ad guidelines”. After the group appealed the rejection, Facebook’s ad team initially defended the decision, writing that the photo failed to comply with the social networking site’s “health and fitness policy”. “Ads may not depict a state of health or body weight as being perfect or extremely undesirable,” Facebook wrote. “Ads like these are not allowed since they make viewers feel bad about themselves. Instead, we recommend using an image of a relevant activity, such as running or riding a bike.”

In a statement Monday, Facebook apologized for its original stance and said it had determined that the photo does comply with its guidelines. “Our team processes millions of advertising images each week, and in some instances we incorrectly prohibit ads,” the statement said. “This image does not violate our ad policies. We apologize for the error and have let the advertiser know we are approving their ad.” The photo – for an event called Cherchez La Femme: Feminism and Fat – features a smiling Holliday wearing a standard bikini. Facebook had originally allowed the event page to remain, but refused to approve the group’s advert, which would have boosted the post.

The policy in question is aimed at blocking content that encourages unhealthy weight loss – the opposite intent of Cherchez la Femme, which was promoting body positivity. This is not the first time Facebook has come under fire for its censorship of photos. In March, the site faced backlash when it concluded that a photograph of topless Aboriginal women in ceremonial paint as part of a protest violated “community standards”. Critics said that ban was an obvious double standard, noting that Facebook allows celebrities such as Kim Kardashian to pose with body paint covering her nipples. Instagram and Facebook also have faced opposition for policies banning women from exposing their nipples, with critics arguing that the guidelines are prejudiced against women and transgender users.

Cherchez la Femme did not immediately respond to a request for comment on Monday, but has been venting its frustrations on its Facebook page. “Facebook has ignored the fact that our event is going to be discussing body positivity (which comes in all shapes and sizes, but in the particular case of our event, fat bodies), and has instead come to the conclusion that we’ve set out to make women feel bad about themselves by posting an image of a wonderful plus sized woman,” the group said. “We’re raging pretty hard over here.”
© The Guardian.

top

American Neo-Nazis Are on Russia's Facebook

To escape Facebook’s crackdown and connect with white-power groups worldwide, U.S.-based extremists are joining VK.

20/5/2016- An online group called “United Aryan Front” recently warned readers that “the wolves are closing in...and we are the sheepdog” and followed with a call for recruits: “If you are not a part of an organization but would like to join us...you can!! White Lives Matter is the largest organization of whites in the world.” The post wrapped up with a smattering of hashtags like #WhiteLivesMatterAcrossAmerica. But the site where this rant was posted isn’t based in America. United Aryan Front, along with scores of other American extremist groups, is on VK, also known as VKontakte—otherwise known as Russia’s version of Facebook. The social network has become a home for white-power groups who were pushed off of Facebook for hate speech, or who want to connect with fellow racists in other countries. 

The move to VK is part of the growing tendency of white supremacists to interact in online forums, rather than through real-life groups like the KKK, according to Heidi Beirich, director of the Southern Poverty Law Center’s anti-terror Intelligence Project. Through the early 2000s, skinheads and other groups would host dozens of events per year with hundreds of attendees, she says, but now there are only a handful of those rallies each year. “People online are talking about the same kinds of things that used to happen at the rallies, but now they’re doing it completely through the web,” she said. Jessie Daniels, a sociologist who studies cyber racism, has also noticed that racist groups are now much more active online than in the streets. In this way, they reflect overall trends in society: The rest of us might be Bowling Alone, but white supremacists are rallying alone. For the supremacist groups, the benefits include anonymity, ease, and an opportunity to connect with extremists in other nations.

Take, for example, John Russell Houser. Before he killed two people at a showing of Trainwreck in Louisiana last July, he appears to have posted frequently about the Golden Dawn, a far-right Greek political party. “The internet has made it possible for white people around the globe to identify with trans-local whiteness,” Daniels said. The most striking evidence of the shift was Dylann Roof, who killed nine African-Americans in a church in Charleston, South Carolina, last April. According to Beirich, Roof had no ties to “real-world” extremists. Instead, he had simply Googled phrases like “black on White crime” and perused sites such as the Council of Conservative Citizens, which traffics in racist rhetoric.

Last year, the overall number of hate groups rose for the first time in five years, according to the SPLC’s annual count. Hits to Stormfront.org, a white nationalist hub with 300,000 registered users, have ticked up since Donald Trump announced his candidacy for president, Beirich said. According to an SPLC study, in the past five years members of Stormfront have murdered nearly 100 people. White nationalists have also taken to Twitter and other sites that host discussion forums. Facebook itself is not immune to white-power groups, who often use coded language like “new Europe.” Beirich and her group have found that newcomers are sometimes radicalized by these sites, much as some people who go online to debate ISIS instead get sucked into its orbit. “It can be someone who posts a banal racist comment and people will swarm them,” she said.

White supremacists began migrating to VK over the past three years, Beirich said, when Facebook cracked down on hate speech. The platform offers a user experience similar to Facebook’s, complete with profiles and groups, but with seemingly less enforcement. The Simon Wiesenthal Center, which also tracks extremist groups online, gave VK a D- grade for policing hate on its annual report card, but Facebook got a B-. VK did not return a request for comment by deadline. Although VK’s terms of service prohibit information “which propagandizes and/or contributes to racial, religious, ethnic hatred or hostility, propagandizes fascism or racial superiority,” Beirich said the site appears to turn a blind eye. “Certainly from our perspective the site seems like a free-for-all,” she said. “And that is what white supremacists think, too.”

A few quick searches on VK reveal groups dedicated to preserving the Aryan race and honoring the legacy of Hitler. The news site Vocativ has counted 300 or so pro-Hitler groups on the site. Of the 202 followers of the “NSM USA Public Action” Nazi group on VK, 38 list their location as the U.S. And 243 of the more than 14,000 fans of “Aryan Girls” on the site appear to be American. A post on Stormfront claims VK is “used by 70 million white racialists everyday!” [sic] Two years ago, an Adolf Hitler fan page on VK attempted to hold a “Miss Ostland” beauty pageant, but the page was shut down after Vocativ published a story about the event. Today, the “NSM [National Socialist Movement] USA” page on VK is alive and well. Its latest post was on May 17, a video of a speech by American neo-Nazi commander Jeff Schoep.
© The Atlantic

top

Racist video blogger Evalion booted off YouTube

A racist YouTuber with over 40,000 subscribers has had her channel suspended because of her vile videos.

20/5/2016- The young girl known only as Evalion has filmed herself singing Happy Birthday to Hitler and explaining how to recognise a Jew. Some of her most popular videos are titled ‘Why Hitler Wasn’t Evil’ and ‘How Feminists Supported Rape by Causing the Migrant Crisis’. The girl is thought to be an 18-year-old living in Canada and narrates her videos in a sweet, girlish tone. YouTube were alerted to the offensive nature of her videos when she was the subject of a video by fellow vlogger Leafyishere. The video was called ‘The Most Racist Girl On The Entire Internet’. The teenager has openly admitted to being a Holocaust denier and has called Hitler a “brilliant” and “compassionate man”. In her videos she has said: “Do you hate Jews as much as I do?” and “Do you want to know how to spot a Jew”. On Hitler’s birthday, she filmed herself singing Happy Birthday in front of a picture of the Nazi leader. She had baked four cupcakes, which she decorated with swastikas then added candles. Evalion openly idolises the leader who was responsible for the deaths of six million Jews during the Second World War. The YouTuber has also expressed racist opinions. On one of her videos she said: “Don’t you hate those lazy n***** who are never satisfied even after they are given reparations.”

Her YouTube channel is covered in swastikas, pictures of Hitler and racist pictures of Jews and Muslims. Her suspension from the video sharing site has sparked a massive debate on social media over whether she should be banned or not. One Twitter user called Spanky the Monkey said: “If you love free speech, then you have to allow ALL people to speak!” And @Polite_Critical said: “I don't support what Evalion says, but I defend her right to say it.” However, other people agreed with the Google-owned video platform’s decision. @HeroticTV said: “YouTube has every reason to ban Evalion from YouTube.” And Craig Ewen said: “I think Evalion deserved it. At the end of the day YouTube is a place kids 5+ can go to.” An official YouTube spokesperson said: “That channel was terminated by us because it violated policies against hate speech.”
© The Sun

top

If you can’t beat them, ‘like’ them (opinion)

States should use the increasing power of social media networks and work with them to achieve foreign policy objectives.
By Arik Segal


19/5/2016- In the past few months, Israeli ministers have been engaged in an international effort to promote legislation that would have Facebook and other social media networks take responsibility for content published by their users. Israeli officials see it as a necessary measure to fight mass online incitement that exacerbates attacks against Israelis in outbursts of violence. Several times in recent years, the Turkish government has blocked access to Facebook, Twitter and YouTube to prevent the spread of what it deems “harmful content.”

Meanwhile, European governments are debating privacy laws that can allow them access to data about potential terrorists, in light of the Paris and Brussels terror attacks. It appears that in the aftermath of The War on Drugs and The War on Terror, governments have found a new common enemy: The War on Social Media. There is little doubt that social media is used for spreading messages of hate, incitement and recruitment of terrorists – acts that eventually cost lives. However, there is much more room for states to cooperate with social media rather than seeing it as an enemy.

Instead, there are ample opportunities to use social media’s features, low costs and high effectiveness as tools to promote a state’s foreign policy objectives. The presence of billions of people on the same network offers unprecedented capability for countries to reach out, communicate and deliver messages to citizens of other states. Foreign ministries can (and do) use social media to promote relation building, trade, tourism, education and even disaster management. The most frequent use of social media by states is public diplomacy. Twiplomacy – a website dedicated to researching how governments and international organizations use social media – publishes a variety of reports about this engagement and its effectiveness.

These include the most followed heads of state on Twitter, peer-to-peer connections between foreign ministries, the virtual diplomatic network of European embassies and even a report of world leaders who take selfies and those who use Snapchat. Turkish President Recep Tayyip Erdogan will be glad to know that he is ranked as the second most “likable” world leader, with an average of 127,432 likes for each of his Facebook posts, despite his critical approach to social media in Turkey.

Worth noting is how some states use social media to support foreign policy strategies such as state branding. Last year, the Finnish government created a set of 30 unique Finnish emojis that can be downloaded by anyone in an effort to create awareness of Finnish culture worldwide. The official Israeli Twitter channel showcases Israeli innovations and culture to more than 300,000 followers (more followers than the official US and Russian Twitter channels) in an effort to rebrand Israel as more than the “conflict.” Beyond presenting foreign policy, social media can also be used for creating foreign policy, especially between states that do not have diplomatic relations. Groups on Facebook or WhatsApp can serve as platforms for dialogue processes between governments and high-profile individuals from other states as part of conflict management processes.

Another use could be direct state-to-state public dialogue negotiations via Twitter. In this context, publicity could serve as an advantage for states that want to present their own willingness to promote peace, especially if the other state chooses not to respond. All of the above could reach a whole new level of influence when future technologies – such as virtual and augmented reality and artificial intelligence – become more common and embedded in Facebook, Twitter and others. The giant tech companies that operate social media networks share the same interests with states and do not want their platforms to be used for exercising virtual or physical violence. Just like other multinational corporations, they seek legitimate goals such as profit and influence. States and international organizations should cooperate with them to fight those who use social networks for harmful purposes – as the US government is currently doing – and use social media’s power to achieve foreign policy objectives and promote national interests.
The writer is the CEO of Segal Conflict Management; he specializes in using technology as a tool in conflict management processes.
© The Jerusalem Post

top

Czech Rep: Number of displays of antisemitism high

17/5/2016- The number of displays of hatred for Jews in the Czech Republic remained as high in 2015 as in the preceding year, reaching 221, the Czech Federation of Jewish Communities (FZO) says in a report released to CTK on Tuesday. In 2014, the number reached 234. Hatred was mainly spread via the Internet, the annual report says. The growing number of published anti-Semitic books is dangerous, since the revenues from their sale may help finance extremist groups' activities, the report says. "Although the Jewish community in the Czech Republic was not a target of terrorist attacks...we view this threat as very serious in the world context and we have adjusted our security measures accordingly," FZO Secretary Tomas Kraus said.

Nevertheless, the report says the Czech Republic still ranks among the countries where anti-Semitism is present only marginally. It says anti-Semitic books have mainly been published by the ABB publisher linked to Adam B. Bartos, chairman of the ultra-right extra-parliamentary National Democracy (ND), and also the Guidemedia etc publishing house that issues translations of Nazi texts. Last year, re-editions of older anti-Semitic books appeared as well as new texts focusing on conspiracy theories and Holocaust denial, the report says. Conspiracy theories are a new phenomenon that has emerged in connection with the migrant crisis. Their main motif is a Jewish-organised refugee flow to Europe, the consequent destruction of Europe and its values, and the gradual takeover of Europe, the FZO writes in the report.

In 2015, the FZO also registered attempts at the economic and cultural boycotting of Israel, which it describes as a new form of anti-Semitism. Displays of hatred toward Jews in 2015 took similar forms to those of previous years, including letters, e-mails, verbal attacks, harassment in the vicinity of Jewish sites, desecration and vandalism. No physical attack on people was registered last year, compared to one in 2014. Five attacks on property were registered, the same number as in 2014. The number of threats dropped to three, while harassment cases rose to 31.

As in previous years, displays of hatred on the Internet were the most frequent. They made up 182 (82 percent) of the total of 221 incidents, the report says. The articles and comments tend to be spread more and more often on social networks and blogs instead of traditional websites. For example, a community called "We Don't Want Jews in the Czech Republic" appeared on Facebook, which Facebook eventually removed at critics' request, the FZO writes. The FZO's data may differ from those released by other institutions, which limit displays of anti-Semitism to acts that can be qualified as crimes. According to the Interior Ministry's report, the police registered 47 crimes with an anti-Semitic subtext, two more than in 2014. Most of them were displays of support for movements aimed at suppressing human rights and freedoms.
© The Prague Daily Monitor

top

Now When You Browse BuzzFeed All Your Traffic Will Be Encrypted

16/5/2016- You may associate BuzzFeed with cats and ’90s listicles, but on Monday the company announced something a bit more serious. The site has transitioned to using HTTPS encryption by default on all its pages, meaning your browser’s server requests and the data BuzzFeed sends back are all protected. As cybersecurity has become a bigger priority to companies and organizations around the world, more of the sites and services we use every day have moved from using the foundational Web protocol HTTP to the more secure HTTPS. Google expanded its use of HTTPS for Gmail in 2014, and the White House Office of Management and Budget announced an HTTPS-Only Standard directive last year that requires all public-facing federal sites to use the protocol. Media companies have lagged behind on the transition, though.

In a blog post on Monday, BuzzFeed noted that this is partly because of unencrypted third-party advertising content. A page can only use HTTPS if all its embedded components use it, too, and BuzzFeed has an unusual amount of control over its ads because it produces them in-house. The Washington Post began transitioning its site to use HTTPS in June, but many other media outlets like the New York Times, the Gawker blog network, and Slate haven’t made the switch. “It was still a significant challenge for our engineering team to ensure that all of our embedded content (tweets, Instagrams, YouTube videos, etc.) is served over HTTPS,” wrote BuzzFeed’s Director of Global Security Jason Reich, Director of Engineering Clement Huyghebaert, and Assistant General Counsel Nabiha Syed. “Fortunately most of the major platforms we embed are already doing it.”
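
The "all embedded components" constraint is easy to check mechanically. The sketch below, using only Python's standard library, flags src and href attributes that still point at plain http:// resources on a page. It illustrates the mixed-content problem rather than BuzzFeed's actual tooling, and the regex is a crude stand-in for a real HTML parser.

import re
import urllib.request

# Any http:// (rather than https://) src/href is "mixed content" that would
# break a page served over HTTPS.
INSECURE = re.compile(r'(?:src|href)\s*=\s*["\'](http://[^"\']+)["\']', re.IGNORECASE)

def find_mixed_content(url):
    """Return the insecure (http://) resources embedded in a page."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return sorted(set(INSECURE.findall(html)))

# Example: any hit here is an embed that would block a clean HTTPS rollout.
for resource in find_mixed_content("https://example.com/"):
    print("insecure embed:", resource)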

To incentivize the transition, Google said in 2014 that its search results would start giving preference to encrypted pages and would ramp up this weighting more and more. BuzzFeed acknowledges this, and given that the company is so focused on virality and social promotion, it’s not surprising that the site would want to take advantage of the extra boost—HTTPS is a win-win for BuzzFeed. Encryption helps to protect readers from surveillance or attack, creates a safer space for discourse, and could boost search engine optimization. BuzzFeed’s blog post notes that HTTPS doesn’t solve everything but adds that it is “one part of a long process towards helping protect users’ data and information from those who want to exploit it.” Hopefully, other media outlets will see all of this and get in on the encryption action.



Future Tense is a partnership of Slate, New America, and Arizona State University.


© Future Tense
top

South Africa: Law to tackle hate speech

16/5/2016- A draft bill that will criminalise racism in the country has been amended to include hate speech, as incidents of discrimination have increased “at an alarming rate”, according to the South African Human Rights Commission. The Prevention and Combating of Hate Crimes and Hate Speech Bill is expected to be tabled in Parliament by September, after which it would be opened for public comment. While racism can be prosecuted under laws governing hate speech, crimen injuria and defamation, there have been growing calls for legislation that would specifically govern racial discrimination. The most recent episode of racism on social media saw High Court Judge Mabel Jansen on the receiving end of a backlash after a conversation she had with journalist Gillian Schutte over a year ago came to light last week.

Referring to cases she had presided over, Jansen said of black people: “In their culture, a woman is there to pleasure them. Period. It is seen as an absolute right and a woman’s consent is not required. I still have to meet a black girl who was not raped at about 12. I am dead serious.” She has since been placed on special leave. The South African Human Rights Commission (SAHRC) confirmed it had been asked to investigate the incident, spokesman Isaac Mangena said. The proposed bill initially excluded hate speech and the criminalisation of unfair discrimination because of the sensitivities and complexities involved in dealing with such incidents. “However, the events we witnessed in January this year highlighted the need to include hate speech as a criminal offence,” said Deputy Minister of Justice and Constitutional Development John Jeffery.

Jeffery was referring to an incident where estate agent Penny Sparrow came under fire for describing black beachgoers as “monkeys” in a reaction to litter left behind after New Year celebrations. The SAHRC received more than 200 complaints about this incident. The bill is not confined to issues of race; it also covers offences committed because of gender, ethnicity, social origin, sexual orientation, religion, belief, culture, language, birth, HIV status, nationality, gender identity, intersex, albinism and occupation or trade. The proposed bill also criminalises any conduct which amounts to incitement, instigation and conspiracy to commit hate crimes. However, this clause in the bill would require a directive from the Department of Public Prosecutions to authorise prosecution.

The encouragement of hatred, as described in the bill, includes all forms of communication, whether by statement, broadcast, advertisement, photographs or on social media platforms. “We are confident that this will address some of the vitriolic comments we see so often on social media and online,” Jeffery said.

“The law can regulate the behaviour of people in society, our department can draft the law, Parliament can pass it and it will be in the statute book, but it cannot change the hearts, minds and attitudes of people.”

Laws will drive racism underground – SAIRR
The South African Institute for Race Relations (SAIRR) says laws criminalising racism won’t stop discrimination, but would “drive racism underground”. SAIRR spokeswoman Mienke Mari Steytler said: “We would therefore be living in the illusion that there are no racists in the country when, in fact, they still exist.” However, the South African Human Rights Commission (SAHRC) said a section of society believes that criminalising racism, racial discrimination and hate speech is an appropriate and acceptable means of advancing the goals of substantive equality and multicultural tolerance in the country.

The Prevention and Combating of Hate Crimes and Hate Speech Bill will outlaw hate speech and criminalise unfair discrimination. It will be tabled in Parliament in September. “The institution believes a new set of laws focusing on racism is not needed,” Steytler said. “The constitution is clear on what constitutes hate speech (which includes racist speech) and crimen injuria also allows for prosecution of acts of ‘unlawfully, intentionally and seriously impairing the dignity of another’.” The institute opposes a new set of laws as “one will have to be incredibly careful not to infringe upon freedom of speech, which is a cornerstone of our democracy. We ask ourselves: Where will the line be drawn? Will comedians start to be prosecuted?”

Steytler said the country’s laws had enough provision to prosecute incidents of racism and discrimination. “We have to ensure that South Africans are informed about the routes they can take and also where they can report their grievances, from the Equality Courts as well as the SAHRC,” she said. The institution is of the opinion that racism, xenophobia and an increase in protest action all have the same underlying causes – an extremely weak economy, an education system that is serving only a few and empowerment policies that empower a small elite. “This leads to a boiling pot atmosphere in the country, leading to frustrated and angry people who then try to find avenues (even if wrong avenues) to express themselves.”

SAHRC spokesman Isaac Mangena said comments made by KwaZulu-Natal real estate agent, Penny Sparrow, and other comments that incite hate speech, have sparked fierce national conversation about racism and how the government should respond in an appropriate and effective manner. “You can criminalise specific acts or behaviours, like racist hate speech, etc, but not the attitudes,” Mangena said. The SAHRC would make submissions when the bill opens for public comment. “The persistent nature of racism is not necessarily due to the failure of the state to put in place policies and mechanisms to address it. Instead, it exists despite the existence of laws,” he said. He said racism remains the most contentious, divisive and sensitive challenge confronting the country. “What cannot be ignored, glossed over or jettisoned in this debate is that there is no way to move forward without meaningfully dealing with the historical, political and economic contexts,” Mangena said.
© IOL News

top

France: Taking on racism and hate speech

French authorities have rolled out their first campaigns to fight racism and anti-Semitism that offer hard-hitting messages against hate speech and workplace discrimination. Elizabeth Bryant reports from Paris.

19/5/2016- The only time Dieynaba Thioune usually wears a Muslim headscarf is during Friday prayers back in her home city of Dakar, Senegal. But on a recent sunny day in Paris, she donned one to make a point. "It feels very strange," said 19-year-old Thioune, who joined a 'hijab day' rally at France's elite Sciences Po University. "I have friends who wear the hijab here, and they sometimes get verbally attacked." A few miles north across the city limit, outside a state employment office, 29-year-old Yacouba Cisse describes the challenges of finding work as a restaurant cook. "When they see the color of my skin, they ask if I want to wash dishes," said Cisse, who is also from Senegal. Those are sentiments France's leftist government wants to change, under a massive, 100-million-euro ($113 million) bid to fight racism and discrimination, first announced a year ago.

In recent weeks, authorities have rolled out their first major communications campaigns: a pair of hard-hitting messages against hate speech and discrimination in hiring practices. "We cannot just sit and watch rising populism, extremism and radicalism in all its forms, to have this threat in the middle of our Republic," said Gilles Clavreul, head of DILCRA, a ministerial body overseeing the fight against racism and anti-Semitism. The three-year government plan includes an arsenal of proposals, from deepening sanctions and the Internet fight against hate speech, to launching school and citizen education programs.

Effort draws mixed reviews
France is hardly the only European country grappling with prejudice. Far-right groups are gaining ground across Europe, feeding on the immigration crisis and rising fears of militant Islam. Still, in March, the Council of Europe warned that hate speech in France has "become commonplace." In interviews with roughly a dozen anti-discrimination activists, experts and ordinary people, many applaud the campaign's overall intent, but give the communications campaigns mixed reviews. Some even suggested French authorities are part of the problem, pointing to the fractured political response to the Muslim veil as a leading example. Most observers, however, agree on one thing: it will take much more than a three-year crusade to bring about a more tolerant and egalitarian society. "There's a real political will, but it will take 20 years to achieve success," said Christine Lazerges of the National Consultative Commission on Human Rights (CNCDH), a government advisory body. Major changes were needed in the country's educational system and in turning around France's disenfranchised suburbs, she added.

Government statistics also attest to a long road ahead. In 2015, hate offences overall jumped by more than one-fifth compared to the year before to more than 2,000. Anti-Muslim acts and threats alone tripled last year, while anti-Semitic ones remained high. Activists say the true figures are higher, since many acts go unrecorded. Despite an overall hike in hate acts in 2015, Clavreul cites signs of progress. New figures in May show a sharp drop in anti-Semitic and anti-Muslim acts since a year ago. A study by the CNCDH found an increase in perceived French tolerance - a surprising fallout from a year bracketed by two Islamist terrorist attacks in Paris. "There is a need for fraternity and social cohesion that is making people open up to those who are different," the commission's president Lazerges said.

But other forms of discrimination are more subtle. A survey on French hiring by Paris think-tank Institut Montaigne found Christian men are four times more likely to get a callback from recruiters than Muslim ones - a discrepancy that actually increases among the more qualified. Jews also face discrimination, but to a lesser extent. "It's a very serious phenomenon," said Montaigne's deputy director Angele Malatre-Lansac, pointing to study estimates that discrimination against Muslims in France was far higher than against African-Americans in the United States. In many cases, she says, employers are fearful of flouting the country's staunchly secular laws, and are uncertain how to treat expressions of religiosity at work, like Muslim prayers. "It's not necessarily that racism is pervasive, but religious practice can make recruiters afraid," she said.

'Real life' hate acts
The French government has gone on the offensive. In March, it launched six 30-second TV spots re-enacting 'real life' racist and anti-Semitic acts: distraught Muslims finding a pig's head stuck to the mosque gate; a black man getting beaten up; 'death to Jews' scrawled on a synagogue door. "We had to create a shock, to say 'Hey, stop, we have to address these issues,'" said Clavreul of DILCRA, describing the publicity as a first, but crucial step. Still, some anti-discrimination groups criticize the spots for offering a narrow, overly violent take on discrimination. "It can be even counterproductive, because we've worked for years to show that racism is subtle, and even those who are not racist can have humiliating, wounding words," Lazerges of the rights body said. Others want results.

"Publicity spots are good, they can help educate people," said Abdallah Zekri, head of the Observatory Against Islamophobia. "But how many people were arrested, how many people were found guilty?" Officials argue all hate acts will be pursued and punished, and the campaign's sweep is both broad and local. The government has taken a different tack with its second campaign, rolled out in mid-April. Giant posters portray job seekers with their faces split in half - white and non-white - with the tagline "Skills First." Next to the white side are messages like, "You start Monday." On the non-white: "You don't have the right profile." Authorities also say they will test companies on their hiring practices, with plans to 'name and shame.' Some have said they find the posters unsettling rather than helpful.

What about veiled women?
The state's tough stance toward the Muslim headscarf also raises questions over whether its anti-discrimination drive will fairly defend veiled women, who are considered leading targets of anti-Muslim acts. Controversial remarks by top politicians - Women's Rights Minister Laurence Rossignol recently compared veiled women to "negroes" supporting slavery - have fuelled those doubts. Prime Minister Manuel Valls also takes a hard view, describing the veil as a sign of "enslavement" and criticizing Sciences Po's recent hijab day, organized to protest Rossignol's remarks. "The number one culprit of Islamophobia in France is the state itself," said Yasser Louati, spokesman for the Collective Against Islamophobia. "If there's work to be done, it has to be done at the grassroots level." Sciences Po student Thioune is also skeptical. "I thought France was open-minded," she said, "but not when it comes to the hijab."
© Deutsche Welle

top

France: Facebook, Twitter, YouTube Face Hate Speech Complaints

Five months after Germany probed Facebook on hate speech, France has now filed a legal complaint.

15/5/2016- Three French anti-racism associations said on Sunday they would file legal complaints against social networks Facebook, Twitter and Google’s YouTube for failing to remove “hateful” content posted on their platforms. French law requires websites to take down racist, homophobic or anti-Semitic material and tell authorities about it. But French Jewish students union UEJF and anti-racism and anti-homophobia campaigners SOS Racisme and SOS Homophobie said the three firms had removed only a fraction of 586 examples of hateful content the groups had counted on their platforms between the end of March and May 10. Twitter removed only 4%, YouTube 7% and Facebook 34%, according to the associations. “In light of YouTube, Twitter and Facebook’s profits and how little taxes they pay, their refusal to invest in the fight against hate is unacceptable,” UEJF president Sacha Reingewirtz said in a statement. Germany got Facebook, Google and Twitter to agree in December to delete hate speech from their websites within 24 hours.
© Reuters

top

UK: The shocking reality of racist bullying in British schools

19/5/2016- This week, a 16-year-old girl was tragically found dead at her school in Cornwall. It's believed that Dagmara Przybysz, originally from Poland, had suffered racist bullying. Two years ago, she'd spoken about experiencing racism on social media site Ask.fm and after her death this week, her friends suggested that the bullying had continued:
"It is so sad what people do to make people do this stuff,” wrote one. "Such a beautiful girl, died a such a young age because of absolute p***ks,” said another. A coroner will look into Przybysz’s death at a later date and it is currently unclear whether racist bullying played a part. But the tragic case does shine a light on the torment that goes on everyday in British schools. “Even though we have made tremendous progress, bullying is still a major issue in schools and there’s still a lot around race,” says Anastasia de Waal, chair of Bullying UK. “Appearances and differences have always been an easy thing to latch onto.”

A recent survey from anti-bullying charity Ditch the Label found 1.5 million young people have been bullied within the past year in the UK, and those who had an ethnic minority profile were at a much higher risk of being bullied than a young Caucasian person. This is something Billie Gianfrancesco has direct experience of. The 26-year-old PR manager is half-Caribbean, and when she was at school in rural Norfolk, found herself the target of bullies. “I experienced ignorant racism, which wasn't really an issue as I just ignored it," she says. "But then one of the senior girls at my private school started targeting me and calling me a 'Paki', telling me to go back to where I came from (which was Norwich). “Once she locked me in the changing rooms for the whole of a PE lesson because I was slow getting changed and a 'paki bitch'. I was 13 at the time.”

When she was 16, a boy in Gianfrancesco's school year began “a racist bullying campaign” against her after she rejected his advances. “My social media accounts were hacked and all my photos changed to pictures of monkeys, and there were messages talking about my mother as ‘having aids because she was a black monkey.’” What happened to Gianfrancesco is shocking, but it is by no means an anomaly. Liam Hackett, CEO of anti-bullying charity Ditch The Label, explains: “Young people are now being bullied in their safe spaces, like at home or at the dining table, because of online technology. It makes it more traumatic for young people because it’s overwhelming and they can’t escape it. “It’s often verbal but physical bullying is quite common as well. Guys are a lot more physical but girls are more verbal and indirect. It can be direct racist comments or taunts. It can be humiliating someone in a classroom or rejecting someone from social activities. One of the biggest issues is cultural differences.”

For Gianfrancesco, it was obvious that her bullying was rooted in racism. Her skin colour was targeted in direct ways, but other young people have more subtle experiences. De Waal says she has come across children and teenagers bullied for cultural clothing, habits and even the food they eat. “A lot of people might think it’s just about the skin colour but if a kid has an accent, the bullying might centre on that. It’s not always tangible - like being a different colour or having different hair. “We know if children use racist terms that schools react swiftly, but if they’re being teased for the food they bring to school – which we know in the past is a fairly common issue – then it’s much harder. Parents and schools need to work together to make sure it’s nipped in the bud.” Ultimately it comes down to adults to act – both guardians and those in schools – to ensure bullying ends immediately.

But Gianfrancesco says she felt let down by her teachers. When she reported the head girl calling her a ‘Paki’, she says “nobody took any action because she was senior”. “One teacher told me that I should just ignore it because I wasn't Asian and couldn't understand why I was bothered,” she says. When her social media account was hacked a few years later, the police became involved and confiscated her laptop but “nothing was ever done.” In the end, faced with a campaign of bullying at the hands of the male pupil she'd rejected, Gianfrancesco took action into her own hands, supported by her mother. “I started a petition and got people at school to sign it who had witnessed the racism or experienced bullying themselves. After collecting a page of signatures my head of year expelled him on the spot. I didn't take further action (even though my mum was pretty adamant that I did), because I actually felt very sorry for the boy in the end. He was clearly very sad and confused.”

Gianfrancesco’s determination meant she was able to stop the bullying and make sure the perpetrator was punished, but not every young person is capable of that. It’s why Hackett says they need the support of an adult. “It’s important to encourage the young person to talk about it and have an honest dialogue with them,” he stresses. “Be pro-active and don’t just wait for something to happen. Look out for behavioural changes, such as the child isolating themselves, losing their appetite or becoming aggressive. It’s important the young person understands they’re not being bullied because of the colour of their skin – it’s because the bullies have their own issues.” He says parents should speak to teachers to crack down on the bullying, but in the long term, the answer to prevention lies in education. De Waal agrees: “The main thing is continuing to make sure we’re educating young people about bullying being a problem and that they understand racism. "Young people need to recognise the impact it has and that attacking someone’s identity is harmful to them.”
© The Telegraph

top

UK: Freedom of speech row as YouTube refuses to take down Scots Nazi Dog video

YouTube is refusing to remove the Scottish 'Nazi Dog' video at the centre of a global antisemitism storm, amid claims that the arrest of its creator is "absurd".

14/5/2016- Markus Meechan, 28, from Coatbridge, was criticised by Jewish leaders after training his girlfriend’s dog - a pug called Buddha, since branded 'The Munich Pooch' - to respond to the phrase “Gas the Jews” by giving a 'Sieg Heil' salute. However, the video, titled "M8 Yer Dugs A Nazi" and uploaded on April 11, is to remain on YouTube and has since been seen by over 1.5 million people. Police arrested Meechan, a call centre worker, who says it was a prank. Police said the arrest was in relation to the alleged publication of offensive material online and that a report had been submitted to the Procurator Fiscal.

Detective Inspector David Cockburn said: "I would ask anyone who has had the misfortune to have viewed it to think about the pain and hurt the narrative has caused a minority of people in our community. "The clip is deeply offensive and no reasonable person can possibly find the content acceptable in today's society. This arrest should serve as a warning to anyone posting such material online, or in any other capacity, that such views will not be tolerated." But YouTube will not be removing the video, which has had around 1,700 dislikes but nearly 26,000 likes, while a horde of commenters have criticised the arrest. A YouTube source said that while it was recognised that many would find the video offensive, so are many videos on the site, and YouTube believes in freedom of expression.

The source said the intent of the video, however ludicrous and unpleasant it is perceived to be, was "clearly comedic". "If we felt it was toxic hate speech, we would have taken it down." In a commentary, Nat Hentoff, a member of the Reporters Committee for Freedom of the Press and senior fellow with the American libertarian think tank the Cato Institute, and Nick Hentoff, a criminal defence and civil liberties attorney in New York, point to the case as an example of hate speech prosecutions that are "patently absurd". "Every dog owner knows that if you speak in a high-pitched voice, your pet will react with as much excitement to the question, 'Do you want some bacon?' as 'Do you want to tear my throat out?'. Which begs the question whether a satirical video that compares Nazis to a dog’s Pavlovian tendency for unthinking repetition can reasonably be regarded as offensive to anyone but Nazis.

"The man clearly states in the video that he is not a racist and his only motivation was to 'p*ss off' his girlfriend by turning her adorable little pug into a Nazi. But the thought police are rarely concerned with intent, since preventing offence is their raison d’etre. Giving offence has been the raison d’etre of satirists for centuries and their right to do so should be protected." The controversial 1 minute 30-second clip also shows the two-year-old dog watching speeches made by Hitler from the Leni Riefenstahl directed film 'Olympia' which documented the 1936 Berlin Olympics. At the end, the former security guard insists that he is a not a racist, but is only trying to play a joke on his girlfriend to "p*** her off". Many comments on the video have criticised the police action. One said: "They really arrest you for that? Holy f... man. It's f..ing 1984."
© The Herald Scotland

top

Facebook Doesn't Have to Be Fair

The company has no legal obligation to be balanced—and lawmakers know it.
By Robinson Meyer

13/5/2016- For almost three years, Facebook has pulled off an impressive balancing act. It has become one of the most powerful companies in media—the whims of its News Feed can determine the fate of whole news organizations—but it has never quite been a member of the press itself. Its felicitous run may now have ended. In at least one non-negligible way, Facebook joined journalism’s dirty ranks this week, as the company found itself accused of having a liberal bias. And perhaps it really does. A series of Gizmodo reports has revealed new information about how the company’s “Trending” module works. “Trending” is the list of popular headlines that appears in the top right of Facebook.com; it also appears under the search bar in its ubiquitous mobile app. While many users believed that this module was compiled algorithmically, Gizmodo (and now The Guardian) have revealed that humans, working on contract for the company, guide its creation every step of the way. What’s more, these workers (often Ivy-educated twenty-somethings) “routinely suppressed conservative news,” according to the allegations of one former employee who talked to Gizmodo.

Facebook bills its platform as transparent and apolitical, so this could be disastrous (or at least embarrassing) for it. But as Kashmir Hill writes at Fusion, there isn’t yet definitive evidence that Facebook actually did routinely suppress conservative news. Instead, former employees and leaked corporate documents indicate the workers were told to amplify news and stories from traditional or name-brand news organizations like CNN, Fox News, and The New York Times. At the same time, they were advised to avoid floating rumors or conspiracy theories from newer, less reliable, and ideologically slanted sites like Newsmax. (The Guardian and Gizmodo reports disagree about whether Breitbart, a far-right and factually unreliable news site, was a “trusted source” or a specifically untrusted one.)

Mark Zuckerberg, the company’s CEO, has now said that in an internal investigation, the company could find no evidence of story suppression. And in some ways, you could see the company’s editorial hand in “Trending” as part of its longtime emphasis on distributing “high-quality content.” But we might know more later this month. Senator John Thune, a Republican of South Dakota, has formally asked Facebook to answer questions about its neutrality in running the feature. Company representatives have also been asked to meet with staff from the Senate Committee on Commerce, Science, and Transportation. Senator Thune made those requests in a letter to Facebook—a remarkable document that it’s worth spending some time with. That’s because, before asking specific questions, Thune raises the following concerns:

[W]ith over a billion daily active users on average, Facebook has enormous influence on users’ perceptions of current events, including political perspectives. If Facebook presents its Trending Topics section as the result of a neutral, objective algorithm, but it is in fact subjective and filtered to support or suppress particular viewpoints, Facebook’s assertion that it maintains a ‘platform for people and perspectives from across the political spectrum’ misleads the public.

This is a fascinating implication. Facebook has said it is a platform for perspectives from “across the political spectrum,” but it specifically never has claimed that it will give all those perspectives equal weight. It promises that it will give everyone a place for their ideas, but not that it will be particularly fair about it. Yet just by talking about misleading the public, Thune is presuming an incredible thesis: that in order for Facebook to make space for all viewpoints, it must be balanced. Which is funny, because Thune has gone on the record a great deal about the role of a government official in regulating media fairness. From the mid-2000s to its eventual repeal in 2011, Thune was one of the lead critics of the Fairness Doctrine, a requirement from the Federal Communications Commission that broadcast stations present “controversial topics” in an honest and balanced way. In fact he often advocated for its repeal (even though it was overturned by the courts in the 1980s).

“Our support for freedom of conscience and freedom of speech means that we must support the rights granted to even those with whom we disagree,” Thune said in June 2007. “Giving power to a few to regulate fairness in the media is a recipe for an Orwellian disaster.” He elaborated on those views in an article for RealClearPolitics. “I know the hair stands up on the back of my neck when I hear government officials offering to regulate the news media and talk radio to ensure fairness,” he wrote. (The FCC formally repealed the Fairness Doctrine on its own prerogative four years later.) Thune didn’t just oppose any government regulation of the media—he opposed nearly any government interference in the Internet at all. He has repeatedly opposed the FCC’s efforts to ensure net neutrality, the principle that every web host should have equitable access to the same speed of Internet connection.

“The FCC’s decision to adopt controversial regulation of the Internet is yet another example of the heavy hand of government reaching into an industry that isn’t broken and doesn’t need to be fixed,” he said in 2010. And last year, he decried the commission’s announcement that it will strongly enforce net neutrality—or, as his office has put it, “government control of the Internet.” Of course Thune isn’t advocating for the regulation of Facebook yet. And he can make a big fuss about Facebook’s neutrality without actually legislating anything—in some ways, the company will be damaged more by a partisan fight. But it is an example of how, to paraphrase the senator, those with whom we disagree can make us doubt our own support for the freedom of speech.
© The Atlantic

top

Former Facebook Workers: We Routinely Suppressed Conservative News

9/5/2016- Facebook workers routinely suppressed news stories of interest to conservative readers from the social network’s influential “trending” news section, according to a former journalist who worked on the project. This individual says that workers prevented stories about the right-wing CPAC gathering, Mitt Romney, Rand Paul, and other conservative topics from appearing in the highly-influential section, even though they were organically trending among the site’s users.

Several former Facebook “news curators,” as they were known internally, also told Gizmodo that they were instructed to artificially “inject” selected stories into the trending news module, even if they weren’t popular enough to warrant inclusion—or in some cases weren’t trending at all. The former curators, all of whom worked as contractors, also said they were directed not to include news about Facebook itself in the trending module. In other words, Facebook’s news section operates like a traditional newsroom, reflecting the biases of its workers and the institutional imperatives of the corporation. Imposing human editorial values onto the lists of topics an algorithm spits out is by no means a bad thing—but it is in stark contrast to the company’s claims that the trending module simply lists “topics that have recently become popular on Facebook.”

These new allegations emerged after Gizmodo last week revealed details about the inner workings of Facebook’s trending news team—a small group of young journalists, primarily educated at Ivy League or private East Coast universities, who curate the “trending” module on the upper-right-hand corner of the site. As we reported last week, curators have access to a ranked list of trending topics surfaced by Facebook’s algorithm, which prioritizes the stories that should be shown to Facebook users in the trending section. The curators write headlines and summaries of each topic, and include links to news sites. The section, which launched in 2014, constitutes some of the most powerful real estate on the internet and helps dictate what news Facebook’s users—167 million in the US alone—are reading at any given moment.

“Depending on who was on shift, things would be blacklisted or trending,” said the former curator. This individual asked to remain anonymous, citing fear of retribution from the company. The former curator is politically conservative, one of a very small handful of curators with such views on the trending team. “I’d come on shift and I’d discover that CPAC or Mitt Romney or Glenn Beck or popular conservative topics wouldn’t be trending because either the curator didn’t recognize the news topic or it was like they had a bias against Ted Cruz.” The former curator was so troubled by the omissions that they kept a running log of them at the time; this individual provided the notes to Gizmodo. Among the deep-sixed or suppressed topics on the list: former IRS official Lois Lerner, who was accused by Republicans of inappropriately scrutinizing conservative groups; Wisconsin Gov. Scott Walker; popular conservative news aggregator the Drudge Report; Chris Kyle, the former Navy SEAL who was murdered in 2013; and former Fox News contributor Steven Crowder. “I believe it had a chilling effect on conservative news,” the former curator said.

Another former curator agreed that the operation had an aversion to right-wing news sources. “It was absolutely bias. We were doing it subjectively. It just depends on who the curator is and what time of day it is,” said the former curator. “Every once in awhile a Red State or conservative news source would have a story. But we would have to go and find the same story from a more neutral outlet that wasn’t as biased.” Stories covered by conservative outlets (like Breitbart, Washington Examiner, and Newsmax) that were trending enough to be picked up by Facebook’s algorithm were excluded unless mainstream sites like the New York Times, the BBC, and CNN covered the same stories. Other former curators interviewed by Gizmodo denied consciously suppressing conservative news, and we were unable to determine if left-wing news topics or sources were similarly suppressed. The conservative curator described the omissions as a function of his colleagues’ judgements; there is no evidence that Facebook management mandated or was even aware of any political bias at work.

Managers on the trending news team did, however, explicitly instruct curators to artificially manipulate the trending module in a different way: When users weren’t reading stories that management viewed as important, several former workers said, curators were told to put them in the trending news feed anyway. Several former curators described using something called an “injection tool” to push topics into the trending module that weren’t organically being shared or discussed enough to warrant inclusion—putting the headlines in front of thousands of readers rather than allowing stories to surface on their own. In some cases, after a topic was injected, it actually became the number one trending news topic on Facebook.

“We were told that if we saw something, a news story that was on the front page of these ten sites, like CNN, the New York Times, and BBC, then we could inject the topic,” said one former curator. “If it looked like it had enough news sites covering the story, we could inject it—even if it wasn’t naturally trending.” Sometimes, breaking news would be injected because it wasn’t attaining critical mass on Facebook quickly enough to be deemed “trending” by the algorithm. Former curators cited the disappearance of Malaysia Airlines flight MH370 and the Charlie Hebdo attacks in Paris as two instances in which non-trending stories were forced into the module. Facebook has struggled to compete with Twitter when it comes to delivering real-time news to users; the injection tool may have been designed to artificially correct for that deficiency in the network. “We would get yelled at if it was all over Twitter and not on Facebook,” one former curator said.
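The mechanism the curators describe — an algorithmically ranked list that staff can override by "injecting" topics — can be pictured with a small sketch. All names here are hypothetical; Gizmodo reports the behaviour, not Facebook's actual code:

```python
# Illustrative sketch of an organically ranked trending list with a
# curator "injection" override (hypothetical model, not Facebook's code).
from dataclasses import dataclass, field

@dataclass(order=True)
class Topic:
    score: float          # engagement score from the ranking algorithm
    name: str = field(compare=False)
    injected: bool = field(default=False, compare=False)

def trending(topics, injections, size=10):
    """Return the top `size` topics, with injected topics pinned first."""
    organic = sorted(topics, reverse=True)  # highest engagement first
    pinned = [Topic(score=float("inf"), name=n, injected=True)
              for n in injections]
    ranked = pinned + [t for t in organic if t.name not in injections]
    return ranked[:size]

feed = [Topic(9.1, "#Election2016"), Topic(7.4, "MH370"), Topic(3.2, "Syria")]
for t in trending(feed, injections=["Charlie Hebdo attack"], size=3):
    print(("INJECTED " if t.injected else "organic  ") + t.name)
```

A "pin first, backfill organically" shape like this would be one way to reproduce the curators' claim that an injected story could land at number one even without much organic discussion.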

In other instances, curators would inject a story—even if it wasn’t being widely discussed on Facebook—because it was deemed important for making the network look like a place where people talked about hard news. “People stopped caring about Syria,” one former curator said. “[And] if it wasn’t trending on Facebook, it would make Facebook look bad.” That same curator said the Black Lives Matter movement was also injected into Facebook’s trending news module. “Facebook got a lot of pressure about not having a trending topic for Black Lives Matter,” the individual said. “They realized it was a problem, and they boosted it in the ordering. They gave it preference over other topics. When we injected it, everyone started saying, ‘Yeah, now I’m seeing it as number one’.” This particular injection is especially noteworthy because the #BlackLivesMatter movement originated on Facebook, and the ensuing media coverage of the movement often noted its powerful social media presence.

(In February, CEO Mark Zuckerberg expressed his support for the movement in an internal memo chastising Facebook employees for defacing Black Lives Matter slogans on the company’s internal “signature wall.”) When stories about Facebook itself would trend organically on the network, news curators used less discretion—they were told not to include these stories at all. “When it was a story about the company, we were told not to touch it,” said one former curator. “It had to be cleared through several channels, even if it was being shared quite a bit. We were told that we should not be putting it on the trending tool.” (The curators interviewed for this story worked for Facebook across a timespan ranging from mid-2014 to December 2015.)

“We were always cautious about covering Facebook,” said another former curator. “We would always wait to get second level approval before trending something to Facebook. Usually we had the authority to trend anything on our own [but] if it was something involving Facebook, the copy editor would call their manager, and that manager might even call their manager before approving a topic involving Facebook.” Gizmodo reached out to Facebook for comment about each of these specific claims via email and phone, but did not receive a response. Several former curators said that as the trending news algorithm improved, there were fewer instances of stories being injected. They also said that the trending news process was constantly being changed, so there’s no way to know exactly how the module is run now. But the revelations undermine any presumption of Facebook as a neutral pipeline for news, or the trending news module as an algorithmically-driven list of what people are actually talking about.

Rather, Facebook’s efforts to play the news game reveal the company to be much like the news outlets it is rapidly driving toward irrelevancy: a select group of professionals with vaguely center-left sensibilities. It just happens to be one that poses as a neutral reflection of the vox populi, has the power to influence what billions of users see, and openly discusses whether it should use that power to influence presidential elections. “It wasn’t trending news at all,” said the former curator who logged conservative news omissions. “It was an opinion.” [Disclosure: Facebook has launched a program that pays publishers, including the New York Times and Buzzfeed, to produce videos for its Facebook Live tool. Gawker Media, Gizmodo’s parent company, recently joined that program.]

Update: Several hours after this report was published, Gizmodo editors started seeing it as a topic in Facebook’s trending section. Gizmodo’s video was posted under the topic but the “Top Posts” were links to RedState.com and the Faith and Freedom Coalition.
Update 4:10 p.m. EST: A Facebook spokesperson has issued the following statement to outlets including BuzzFeed and TechCrunch. Facebook has not responded to Gizmodo’s repeated requests for comment.
“We take allegations of bias very seriously. Facebook is a platform for people and perspectives from across the political spectrum. Trending Topics shows you the popular topics and hashtags that are being talked about on Facebook. There are rigorous guidelines in place for the review team to ensure consistency and neutrality. These guidelines do not permit the suppression of political perspectives. Nor do they permit the prioritization of one viewpoint over another or one news outlet over another. These guidelines do not prohibit any news outlet from appearing in Trending Topics.”

Update May 10, 8:50 a.m. EST: The following statement was posted by Vice President of Search at Facebook, Tom Stocky, late last night. It was liked by both Mark Zuckerberg and Sheryl Sandberg: My team is responsible for Trending Topics, and I want to address today’s reports alleging that Facebook contractors manipulated Trending Topics to suppress stories of interest to conservatives. We take these reports extremely seriously, and have found no evidence that the anonymous allegations are true.

Facebook is a platform for people and perspectives from across the political spectrum. There are rigorous guidelines in place for the review team to ensure consistency and neutrality. These guidelines do not permit the suppression of political perspectives. Nor do they permit the prioritization of one viewpoint over another or one news outlet over another. These guidelines do not prohibit any news outlet from appearing in Trending Topics.

Trending Topics is designed to showcase the current conversation happening on Facebook. Popular topics are first surfaced by an algorithm, then audited by review team members to confirm that the topics are in fact trending news in the real world and not, for example, similar-sounding topics or misnomers.

We are proud that, in 2015, the US election was the most talked-about subject on Facebook, and we want to encourage that robust political discussion from all sides. We have in place strict guidelines for our trending topic reviewers as they audit topics surfaced algorithmically: reviewers are required to accept topics that reflect real world events, and are instructed to disregard junk or duplicate topics, hoaxes, or subjects with insufficient sources. Facebook does not allow or advise our reviewers to systematically discriminate against sources of any ideological origin and we’ve designed our tools to make that technically not feasible. At the same time, our reviewers’ actions are logged and reviewed, and violating our guidelines is a fireable offense.

There have been other anonymous allegations — for instance that we artificially forced #BlackLivesMatter to trend. We looked into that charge and found that it is untrue. We do not insert stories artificially into trending topics, and do not instruct our reviewers to do so. Our guidelines do permit reviewers to take steps to make topics more coherent, such as combining related topics into a single event (such as #starwars and #maythefourthbewithyou), to deliver a more integrated experience. Our review guidelines for Trending Topics are under constant review, and we will continue to look for improvements. We will also keep looking into any questions about Trending Topics to ensure that people are matched with the stories that are predicted to be the most interesting to them, and to be sure that our methods are as neutral and effective as possible.
© Gizmodo
top

India: Noida cyber centre inaugurated

10/5/2016- The first Cyber Investigation Centre in Uttar Pradesh was inaugurated in Noida on Monday, with the police asserting that they would now have an edge against e-criminals. Director General of Police (DGP) Javeed Ahmed inaugurated the state-of-the-art facility, which includes a forensic laboratory, in Sector 6, Noida. Mr. Ahmed said the facility, which was constructed through a public-private partnership, would help clamp down on fraudsters, hackers, and those spreading hate on social media. The building for the centre was constructed by the Noida Authority while funds for the project, estimated at Rs. 1.25 crore, were raised through PPP with involvement of private individuals, including corporate houses operating out of Noida. “This laboratory will be a one-of-its-kind cyber lab in the State. It will have facilities for hard disk imaging and copying, allow recovery of deleted data, analysis of dump data from mobile phones and other investigation techniques in the interests of modern and scientific investigation,” said Kiren S., senior superintendent of police (SSP), Gautam Budh Nagar.
© The Hindu

top

Headlines April 2016

EU: online anti-LGBTI hate speech must be tackled

29/4/2016- The European Parliament voted for a report that calls upon member states to adopt strong measures to counter online anti-LGBTI hate speech. The report, entitled 'Gender equality and empowering women in the digital age', was adopted by a majority of lawmakers yesterday, although Mike Hookem MEP, from UKIP, voted against it. The report calls upon the EU Commission to demand greater efforts from members to prosecute any homophobic or transphobic crimes that take place online. It adds that Member States should properly apply the EU legislation relating to the rights of victims (par. 54). Furthermore, it urges policymakers to ensure that a framework is in place guaranteeing that law enforcement agencies are able to deal with online bias-motivated threats and harassment (par. 53).

Terry Reintke MEP, Member of the EU’s LGBTI Intergroup and author of the report, reacted: “While online abuse can affect anyone, women and LGBTI people often experience abuse as a result of their gender, gender identity, sexual orientation or sex characteristics. UK research shows that one in four LGBTI pupils have experienced cyber bullying.” “If we are serious about tackling discrimination in all its forms, we cannot leave such abuse unchallenged. Just because it happens in virtual space, does not mean that abuse can go unpunished.” Malin Bjork MEP, Vice-President of the EU’s LGBTI Intergroup, who was involved in the writing of the report, continued: “Many women and LGBTI people face online harassment, hate speech or blackmail. However, it is often unclear how to report the offence and where to seek help.” “This report seeks to address this gap. We need to ensure that protection from harassment and abuse against women and LGBTI people in the real world exists in the online world too.”
© Kaleidoscot

top

Facebook And Twitter Continue Their Shutdown Of Pages Linked To Hamas

25/4/2016- Facebook and Twitter reportedly shut down several accounts associated with Hamas, the Palestinian Sunni-Islamic fundamentalist group, over the weekend of April 22. This came after accusations that the organization had been using social media platforms to spread hate throughout the Web. Hamas' official page was shut down on Facebook, and its "Shibab" page was also closed shortly after. The page had been affiliated with terrorism, and more than one million Facebook users had been following it at the time of its closure. During the week of April 18, Facebook homed in on several Palestinian university pages that had connections to Hamas. They were eventually taken down, as were those that referenced the Palestinian Islamic Jihad. Hamas was allegedly utilizing these pages to further develop terrorism plans on the Internet.

Twitter has been taking its own initiative to shut down potentially dangerous accounts. Hamas' military wing pages, which were published in languages including English and Hebrew, were closed by Twitter. However, users have been working to restore their presence on social media by creating new accounts where they can continue to spread their message. One individual who saw his account suspended by Twitter was Hamas Military Wing Spokesman Abu Obeida. His page was closed during a wave of account suspensions. However, Obeida has created a new Twitter account to reestablish his speaking platform on the social network. "Twitter yielded to the pressure of the enemy, which gives us an impression that it is not neutral in regards to the Palestinian case and it caves into political pressure," Obeida wrote on his Twitter page. "We are going to send our message in a lot of innovative ways, and we will insist on every available means of social media to get to the hearts and minds of millions."

This is not the first time that social networks such as Facebook and Twitter have moved to eliminate terrorism from their websites. During the summer of 2014, for instance, Twitter shut down all Hamas accounts. A new report revealed on April 25 that many terrorist financiers who have been blacklisted by the U.S. government are still raising money via social media, according to the Wall Street Journal.
© Tech Times

top

Online Hate Monitor: Anti-Semitic Posts Reaching 'Thousands' a Day

Anti-Semitism is single most common form of bigotry on internet, followed by Islamophobia, online watchdog says.

19/4/2016- Thousands of incidents of anti-Semitism and Holocaust denial are registered each day on the internet, according to the co-founder of a leading international network of organizations engaged in combating cyberspace bigotry. “It is very difficult to make exact calculations because the internet is much bigger than most of us think,” said Ronald Eissens, who serves as a board member of the Dutch-based International Network Against Cyber Hate (INACH), which encompasses 16 organizations spanning the globe. “A thousand a day would certainly be true, and 5,000 to 10,000 a day worldwide could also be true.” In an interview with Haaretz, Eissens said the number of complaints about anti-Semitism and Holocaust denial submitted to his network of organizations tends to rise when Israel is the focus of international media attention. “During the last Gaza War, we saw a big fat spike in online anti-Semitism, and I’m talking about pure anti-Semitism – not anti-Zionism,” he said.

Eissens, who also serves as director-general of the Magenta Foundation – the Dutch complaints bureau for discrimination on the internet – was a keynote speaker Tuesday at an international conference on online anti-Semitism held in Jerusalem. The conference, the first of its kind, was co-sponsored by INACH and Israeli Students Combating Anti-Semitism, a local organization. Anti-Semitism, said Eissens, is the single most common form of bigotry on the internet, accounting for about one-third of all complaints registered with his organization, followed by Islamophobia. In 2015, though, for the first time, he said, Islamophobia surpassed anti-Semitism as the most common complaint in two countries: The Netherlands and Germany. Eissens attributed the rising number of complaints about Islamophobia to the refugee crisis in Europe.

Since its establishment in 2002, said Eissens, INACH succeeded in removing somewhere between 60,000 and 70,000 hateful posts on the internet, about 25,000 of them anti-Semitic in nature. In past years, noted Eissens, anti-Semitic posts were found mainly in dedicated neo-Nazi and white supremacist websites and forums. “Nowadays, most of the stuff has shifted to social media. It’s much more scattered, but also much more mainstream. You still find it on those traditional anti-Semitic sites, but more and more on Facebook, Twitter, YouTube and Google.” Although his organization does not monitor anti-Zionist posts on the internet, Eissens said he believed there was often a blurring of lines. “Nowadays, anti-Zionism has become part and parcel of Jew hatred, and often when people say they are just anti-Zionist but not anti-Semitic, that is a cop out,” he said. “I’m not sure all those who identify as anti-Zionists are really anti-Semitic, but I think it’s heading in that direction, and that is dangerous.”

Asked whether he considered supporters of the international Boycott, Divestment and Sanctions (BDS) movement against Israel to be anti-Jewish, Eissens said: “My problem with BDS activists is that almost all of them are of the opinion that Israel should not really exist. They’re talking about a one-state solution. They’re talking about giving Palestine back to the Palestinians, and they’re talking about all of traditional Palestine. When they say things like that, I often find BDS activists to be anti-Semites because what’s supposed to happen to Jews who are living in Israel if that happens? “But if they say they’re in favor of a two-state solution, with Jews and Palestinians living side by side, that’s a whole other stance. But I don’t hear that nuance a lot among BDS activists.”
© Haaretz

top

German refugees use ads to target anti-immigration YouTube videos

German YouTube users searching for anti-immigration videos are being shown adverts of refugees talking about prejudices against them.

20/4/2016- Clicking on the ads redirects users to a website with more information about the refugees' stories. The campaign uses YouTube's advertising system to target search terms associated with far-right content and anti-immigration groups. The organisation behind the initiative says the video clips cannot be skipped. Firas Alshater is one of the nine refugees in the adverts. The Syrian actor came to Germany almost three years ago and has become an internet sensation by posting YouTube videos about his everyday life as a refugee. He said the campaign started when he realised that a right-wing party used his videos on the platform for advertising. "I don't think the 30-second clips will disturb anyone. It's a chance to reach people who want to watch these far-right videos because they are afraid and need someone to help them," he told the BBC. In his advert, Firas tells viewers it was not true that Germans and refugees could not live together peacefully.

'Admirable courage'
Refugees Welcome, the organisation behind the campaign, says the adverts can currently be seen before 100 videos. "I think the courage of the refugees is admirable and it's important to give them the chance to present their perspective," said Jonas Kakoschke, one of the co-founders of the organisation. Refugees Welcome is an association that tries to find flatshares for refugees in private homes. "We won't be able to change everybody's opinion, but we do believe there is a smaller part of people we can have a dialogue with and who are open to arguments," he said.

'Refugees out'
Advertisers can use keywords to make their ads appear in front of specific videos on YouTube. The search terms targeted by the campaign include the name of the leader of Germany's anti-Islamist Pegida movement, Lutz Bachmann, who has gone on trial on hate speech charges this week. Other keywords are "Refugees out", "Refugees terrorists" and "The truth about refugees". Video uploaders receive part of the money paid by advertisers. They cannot influence which ads are shown before their video, but can disable them. "Of course, it's painful that the uploaders are getting money from our campaign, but at the moment they only earn a few cents," said Jonas Kakoschke. "Ultimately, we hope that some of these groups will disable advertising and therefore lose out on YouTube ads altogether."
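As a rough illustration of the keyword targeting described above — hypothetical names and logic, not YouTube's actual ad-serving system — matching a campaign's bought keywords against a video's associated search terms might look like this:

```python
# Illustrative sketch of keyword-targeted ad selection (hypothetical;
# not YouTube's ad-serving logic). The campaign buys placements against
# search terms associated with far-right content, and uploaders can
# opt out of advertising entirely.
CAMPAIGN_KEYWORDS = {
    "lutz bachmann",
    "refugees out",
    "refugees terrorists",
    "the truth about refugees",
}

def select_ad(video_search_terms, ads_disabled=False):
    """Return the campaign ad if any targeted keyword matches the video."""
    if ads_disabled:  # the uploader has switched off advertising
        return None
    terms = {t.strip().lower() for t in video_search_terms}
    if terms & CAMPAIGN_KEYWORDS:
        return "30-second unskippable refugee-story clip"
    return None

print(select_ad(["The truth about refugees"]))          # ad is shown
print(select_ad(["cat videos"]))                        # None: no match
print(select_ad(["Refugees out"], ads_disabled=True))   # None: opted out
```

The opt-out case mirrors the organisers' hope that targeted channels will disable ads and thereby forfeit their share of ad revenue.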

What is Pegida?
# Acronym for Patriotische Europaeer Gegen die Islamisierung des Abendlandes (Patriotic Europeans Against the Islamisation of the West)
# Umbrella group for German right-wingers, attracting support from mainstream conservatives to neo-Nazi factions and football hooligans
# Holds street protests against what it sees as a dangerous rise in the influence of Islam over European countries
# Claims not to be racist or xenophobic
# 19-point manifesto says the movement opposes extremism and calls for protection of Germany's Judeo-Christian culture
© BBC News

top

Anonymity May Have Killed Online Commenting (opinion)

By Christopher Wolf, chair of the Anti-Cyberhate Committee of the Anti-Defamation League, a partner in Hogan Lovells' Privacy and Cybersecurity practice, and co-author of "Viral Hate: Containing Its Spread on the Internet."

18/4/2016- Many comment sections on media websites have failed because of a lack of accountability: Online commenters who can hide behind anonymity are much more comfortable expressing repugnant views or harassing others, and the multiplying effect is widespread incivility. Anonymity has an important role in free expression and for privacy interests, to be sure. But the benefits of anonymity online are greatly outweighed by the abuse. Anonymous comments range from the impertinent to the truly hateful, but they frequently contain racist, misogynistic, homophobic and/or anti-Semitic content. Even when people register with their real names but have pseudonymous user names, they often act as if they are licensed to rant, and say horrible things. While there is a subset of people who are proud to be haters and who see real name attribution as a publicity opportunity, most people think twice about associating their names with scurrilous or scandalous commentary. They fear opprobrium by employers, friends and family if their name is appended as the author of abusive comments.

Moreover, as this paper observes in encouraging readers to avoid anonymity in comment sections, “people who use their names carry on more engaging, respectful conversations.” Some platforms have formed bulwarks against vile comments, but none are fool-proof. Facebook’s real name requirement for users helps curtail the chaos on that social media service. Even those using their real names sometimes post content that violates the community standards set to curtail hate speech — either because they don’t care about being associated with that content or because they are part of an online community that celebrates that association — but the real name requirement tamps down the base instincts a more average user may have for vile postings.

Comment moderation is also useful for controlling abuse, but it is expensive and time-consuming. Many of the sites that have closed comment sections tried moderation but found it too burdensome or costly. Giving automatic priority in publication to real name commenters, and pushing anonymous comments to the bottom of the queue, is another technique that preserves the ability to comment anonymously, albeit at the price of potential obscurity. Ultimately, it will be difficult to change the embedded online culture of saying whatever one pleases. Maybe contextual online commenting is over, and the place for discourse is on social media. But so much of social media, Facebook excepted, encourages anonymity, so the potential for hate and abuse may simply move from platform to platform. A re-boot of online comment sections may be the only solution, with real-name attribution as the rule: Identification is vital for online civility.
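
As a rough illustration of that queueing technique (a sketch only, not any outlet's actual system), a comment section could sort verified real-name comments ahead of anonymous ones while keeping both visible:

    from dataclasses import dataclass

    @dataclass
    class Comment:
        author: str
        text: str
        real_name_verified: bool  # hypothetical flag set at registration
        timestamp: float          # seconds since epoch

    def display_order(comments: list[Comment]) -> list[Comment]:
        # Real-name comments first (newest at the top of each group);
        # anonymous comments remain publishable but sink to the bottom.
        return sorted(comments, key=lambda c: (not c.real_name_verified, -c.timestamp))
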
© The New York Times

top

Pakistan Approves Controversial Cyber Crime Bill

14/4/2016- The controversial Prevention of Electronic Crimes Bill 2015 has been approved by Pakistan's National Assembly (NA). The restrictive bill—which has been criticised by the information technology (IT) industry as well as civil society for curbing human rights—was submitted to the NA for voting in January 2015 by the Minister of State for Information Technology and Telecommunication, Anusha Rahman Khan. A draft of the cybercrime bill was then cleared by the standing committee in September before being forwarded to the assembly for final approval. According to critics, the proposed bill criminalises activities such as sending text messages without the receiver's consent or criticising government actions on social media; those who do so would face fines and long-term imprisonment. Industry representatives have argued that the bill would harm business as well.
Online criticism of religion, the country, its courts, and the armed forces is among the subjects which could invoke official intervention under the bill. The bill, approved on Wednesday, must also be approved by the Senate before it can be signed into law, as reported by Dawn online.

Features of the Bill include:
• Up to five-year imprisonment, Rs (Pakistani Rupees) 10 million ($95,000) fine or both for hate speech, or trying to create disputes and spread hatred on the basis of religion or sectarianism.
• Up to five-year imprisonment, Rs5m ($47,700) fine or both for transferring or copying sensitive basic information.
• Up to Rs50,000 ($477) fine for sending messages irritating to others or for marketing purposes.
• Up to three-year imprisonment and a fine of up to Rs500,000 ($4,777) for creating a website for negative purposes.
• Up to one-year imprisonment or a fine of up to Rs1m ($9,500) for forcing an individual into immoral activity, or publishing an individual’s picture without consent, sending obscene messages or unnecessary cyber interference.
• Up to seven-year imprisonment, a fine of Rs10m or both for interfering in sensitive data information systems.
© Newsweek

top

UK: Is it too late to stop the trolls trampling over our entire political discourse?

Free speech online can be revolutionary. But it can also poison the very bloodstream of democracy
By Owen Jones

13/4/2016- It was a pretty standard far-right account: anonymous (check); misappropriating St George (check); dripping with venom towards “Muslim-loving” lefties (check). But this one had a twist. They had found my address and had taken screen shots of where I lived from Google’s Street View function. “Here’s his bedroom,” they wrote, with an arrow pointing at the window; “here’s the door he comes out at the morning”, with an arrow pointing at the entrance to my block of flats. In the time it took Twitter to shut down the account, they had already tweeted many other far-right accounts with the details. Then there was a charming chap who willed me to “burn in everlasting hell you godless faggot”, was determined to “find out where you live” so as to “enlighten you on what I do to cocksucking Marxist faggots” and “break every bone in your body” (all because he felt I slighted faith schools). And the neo-Nazis who believe I’m complicit in a genocide against white people, and launched an orchestrated campaign that revolved around infecting me with HIV.

This is not to conjure up the world’s smallest violin and invite pity, it is to illustrate a point. Political debate, a crucial element of any democracy, is becoming ever more poisoned. Social media has helped to democratise the political discourse, forcing journalists – who would otherwise simply dispense their alleged wisdom from on high – to face scrutiny. Some take it badly. They are used to being slapped affectionately on the back by fellow inhabitants of the media bubble for their latest eloquent defence of the status quo. To have their groupthink challenged by the great unwashed is an irritation. In truth, the intensity of the scrutiny ranges from the intermittent to the relentless, depending on a few things: how far the target deviates from the political consensus; how much of a profile they have; and whether they happen to be, say, a woman, black, gay, trans or Muslim. There’s scrutiny of ideas, and then there’s something else. And it is now so easy to anonymously hurl abuse – sometimes in coordination with others of a similar disposition – it can have no other objective than to attempt to inflict psychological harm.

Take the comments underneath newspaper articles. Columnists could once avoid any feedback, other than the odd missive on the letters’ page. Now we can have a two-way conversation, a dialogue between writer and reader. But the comments have become, let’s just say, self-selecting – the anonymously abusive and the bigoted increasingly staking it out as their own, leading anyone else to flee. Such is the level of abuse that many – particularly women writing about feminism or black writers discussing race – have simply given up reading, let alone engaging with, reader comments. Sending abuse in the pre-Twitter age involved a great deal of hassle (finding someone’s address, licking envelopes, traipsing off to the post office); you can now anonymously tell anyone with a social media account to go die in a ditch – and much worse – in seconds. Yet it is not my experience that this is how people who follow politics behave in real life. I’ve met people who are incredibly meek, but extremely aggressive behind a computer. Online, perhaps, they no longer see their opponent as a human being with feelings, but an object to crush.

I spend a lot of time attending public meetings. One of the most fulfilling aspects is when individuals with differing perspectives turn up. One man at a recent event was leaning towards Ukip, but he didn’t angrily denounce me as an ISLAM LOVING TRAITOR!!!! Instead, he shared a moving story of his father dying as a result of drug addiction, and how it had informed his political perspective. We were speaking, one to one, as human beings: unlike in online debate, our humanity was not stripped away. The potential – or, sadly more accurately, theoretical – political power of social media is to provide an important public forum in which those of diverse opinions can freely interact, rather than living in political enclaves inhabited only by those who reinforce what everyone already believes. The truth is that those entrenched political divisions are cemented by trolls who – without conspiracy or coordination – pillory, insult or even threaten those with dissenting opinions.

Being forced to confront opinions that collide with your own worldview, and challenge your own entrenched views, helps to hone your arguments. But sometimes the online debate can feel like being in a room full of people yelling. Even if others are simply passionately disagreeing, making a distinction becomes difficult. The normal human reaction is to become defensive. A leftwinger who is under almost obsessive personal attack from rightwingers or vice versa may find that separating the abusers from those who simply disagree becomes difficult. Is the effect of this to coarsen, even to poison, political debate – not just in the comment threads and on social media, but above the line, and among people who have very few meaningful political differences? I worry that people will increasingly avoid topics that are likely to provoke a vitriolic response. You may be having a bad week, and decide that writing about an issue isn’t worth the hassle of being bombarded with nasty comments about your physical appearance. That’s how self-censorship works. 

Of course, online rage can be more complicated. If you’re a disabled person struggling to make ends meet, your support is being cut by the government and you are feeling ignored by the media and the political elite, perhaps seething online fury is not only understandable but appropriate? Similarly, trans rights activists are sometimes criticised for being too aggressive online, as though gay people and lesbians or women won their rights by being ever so polite and sitting around singing Kumbaya. The most powerful pieces are often written by those personally affected by injustice, and the comfortable telling them to tone down the anger for fear of coarsening political debate is unhelpful. On the other hand, there are certain rightwing bloggers who obsessively fixate on character assassination as a substitute for political substance. Corrupt the reputation of the individual – however tenuous, desperate or unfair the means – and then there is no need to engage in the rights and wrongs of their argument.

Some will say: ah, suck it up; if you want to stick your neck out and argue a case that may polarise people, you’re asking for it. Opinion writers hardly represent a cross-section of society as it is. But why would – for want of a better word – “normal” people seek to express political opinions if the quid pro quo is a daily diet of hate? Won’t those from private schools, where a certain type of confidence and self-assurance is taught, become even more dominant in debate? Will women be partly purged from the media by obsessive misogynistic tirades? I know of women who turn down television interviews because it will mean being subjected to demeaning comments by men on their physical appearance. Will only the most arrogant, self-assured types – including those who almost crave the hatred – be the beneficiaries?

Online debate is revolutionary, and there are few more avid users than myself. But there seems little doubt that the political conversation is becoming more toxic. And it is democracy that is suffering.
© Comment is free - The Guardian.

top

"Stormfront.org"; the world's number 1 white supremacist chatroom.

"Stormfront" threads provide a very interesting insight into the lives of 21st century white racists. What does a Neo-Nazi do after a long day of bashing ethnic minorities? Making sushi and watching football seem to be pretty popular choices.
By Lewis Edwards, freelance journalist and writer from Australia.

12/4/2016- Being a hardcore white supremacist in 2016 can be a pretty tough gig. People generally dislike you, you have to at least put on the appearance of disliking falafel rolls, and your job opportunities are evidently limited by your choice of political ideology. With these considerations in mind, many of today's racists choose not to publicly express their political beliefs. Instead, it has become commonplace for white supremacists to congregate and communicate on the internet, hiding behind digital avatars. And "stormfront.org" is the virtual place where hundreds of thousands of these cautious 21st century Neo-Nazis "kick it" and "chew the fat", discussing everything from "Grand Theft Auto V" to sushi.

"Stormfront.org", in many senses, is one of the world's most interesting websites. The site was founded in 1996 by US white supremacist Don Black, a former Grand Wizard of the Klu Klux Klan as well as a member of the American Nazi Party during the 1970's. "Stormfront" has grown and developed as a website ever since. Originally a small online community for tech savvy white supremacists, "stormfront" grew exponentially in the late 90's and early 2000's. The membership became quite large. As of 2015, the website boasted almost 300 000 registered users (Mark Potok, "The Year in Hate and Extremism", 2015). Not just the domain of English speaking white racists, the site also incorporates sub-forums in languages ranging from Afrikaans, to French, to Spanish, to Croatian.

However, despite the large and diverse membership of the site, most "stormfront" members utilize avatars on the website to hide their true identities. This may be due to the fact that being a public white supremacist is an unwise career and lifestyle choice in multicultural and multiethnic 21st century societies. If you are outed as a Neo-Nazi in 2016, you'll probably lose your job at the local accounting firm and the Indian place across town will probably stop delivering that Butter Chicken you like to your apartment. Not a real good idea to be a public white supremacist. Better to use a digital avatar. So just what is discussed on "stormfront.org" through the use of concealed identities?

There are the white supremacist conversations you would typically expect. The site contains many threads about hate for Barack Obama and somewhat related threads about love for Donald Trump. But then there are conversations on "stormfront" you would never anticipate. Because it appears that, as of 2016, Neo-Nazis and the KKK like to talk about anything and everything. "Stormfront" has forums discussing every topic under the sun; from sushi, to Australian Rules Football, to "Grand Theft Auto V", to Eminem. And anything else you could imagine. So what are the views of white supremacists on this diverse range of topics? Well, apparently and most importantly, Nazis love sushi!

On a "stormfront" thread I discovered dated to 2009 (called "sushi?"), Hitler's ideological children appeared to love combining fish, rice, and seaweed for a healthy and tasty snack. Maybe the Japanese made the right decision commercially speaking by joining the Axis forces in World War II. Because white supremacists love to eat sushi. Indeed, when they are not bashing ethnic minorities and gay people, many Neo-Nazis seemed to enjoy making homemade nori rolls. White crusaders by night, sashimi chefs by day! Racial hate is hard, making the perfect sushi roll is harder. Of course, making sushi isn't the only hobby 21st century white supremacists have. Because, as threads on "stormfront" indicate, many Neo-Nazis also love sport. Australian white supremacists, like many Australians, love to watch Australian Rules Football (AFL). Indeed, AFL is a great "Anglo-Saxon-Celt" tradition within Australia ("Jaxxen", 17/2/2010) but that "Anglo-Saxon-Celt" tradition is apparently being destroyed by an influx of African and Indigenous Australian players to the game. Tragedy!

It must be stressful being a white supremacist. Because all your beloved sports (e.g. AFL, basketball, NFL) seem to get taken over by black people who are more agile, more athletic, and better at playing the sport. Getting beaten by people who are better at something than you: Shakespeare himself couldn't pen such a work of high tragedy! But not to worry though, because you can always pick up another hobby e.g. video games. Indeed, video games, in general, do appear to be a popular pastime of 21st century, technologically aware white supremacists. Multiple threads on the topic of video games in general, as well as specific video games, can be found on the "stormfront" website.

I could have looked at any video game thread on the site during this investigation but I decided to look at a thread centred on one of my favourite games in recent years; "Grand Theft Auto V" (AKA "GTAV"). Although "GTAV" had one black protagonist (Franklin Clinton), "stormfront" members circa 2013-2014 just couldn't seem to resist the opportunity to race through the streets of Los Santos (AKA Los Angeles) with the police in hot pursuit. Multiple "stormfront" members expressed their excitement for the game, in spite of Franklin. White nationalism may be fun, but robbing banks in a fictional digital universe is evidently much more fun. Of course, there were those opposed to the idea of playing a black video game character on the "GTAV" forum. As "mmargos" stated on the "stormfront" forum for "GTAV";

"Hello friends, i think that we should boycot gt5 due to the fact that one of the main characters is black.This is my opinion on the game ,what do you think?" (01/06/2015). No responses from those "friends". Playing as Franklin Clinton was and is obviously just too darn exciting. Of course, white supremacists can't be open to every interest and hobby. Black music, for example, is a pet hate of white supremacists. White supremacists on "stormfront" do really seem to hate rap music. Damn those rhymed words over rhythmic 4/4 beats! In particular, white supremacists hate Eminem, the most successful rapper of the 21st century. Multiple threads exist on the "stormfront" site, purely as places to express hate for Eminem. In fact, online Eminem bashing is like a white supremacist hobby in and of itself these days. As "whitepowermetal", an Irish member of "stormfront", asserted on the "Eminem" thread; "He (Eminem) is an awful wigger scumbag who worships Negroids and either hates his white culture or has no knowledge of it whatsoever so believes he is a Negroid" (14/4/2014).

Well, that is indeed an opinion. I must disagree with you for multiple reasons "whitepowermetal". I was going to actually start a "stormfront" account to troll Neo-Nazis and KKK members. I was actually planning to troll you in particular. But I'm hungry and tired. So I think I may just go and get a falafel roll and listen to "The Marshall Mathers LP" instead.

Peace.
© The News Hub

top

Australia: Facebook bans user for criticizing anti-Semitism

Former IDF soldier's gym in Australia shares offensive post to warn against anti-Semitism. Facebook responds by banning gym page.

9/4/2016- Facebook temporarily banned an Australian gym called IDF Training after the owner responded to an anti-Semitic message. The Australian news site The Age reports that someone posted an offensive comment on the gym's Facebook page, calling the owner a "pig f----er" and declaring that "Australia is against israel [sic]." The owner, Avi Yemini, responded by sharing the post, with the added hashtag "#saynotoracism." An anonymous user soon reported Yemini's post as offensive and Facebook suspended the account for three days. "I've spoken to Facebook explaining that it was in fact his vile message that was in breach of their terms, and that I couldn't believe that not only are they siding with the racist user, they are penalizing an advocate for understanding and tolerance," he said. Yemini returned to Australia and opened IDF Training after serving in the IDF's Golani Brigade. He now teaches people martial arts and self-defense based on the IDF's methods. He has also encouraged the gym's members to join the IDF.
© Arutz Sheva

top

Germany: Berlin police crack down on far-right hate postings

6/4/2016- Berlin police say they’ve raided 10 residences in the German capital in a crackdown against far-right hate speech on social media. Police spokesman Michael Gassen said Wednesday the morning raids involved nine suspects who used Facebook, Twitter and other social networks to spread hate. He says authorities want to emphasize “the Internet is not a law-free zone” and that if illegal speech is posted “it won’t be without consequences.” The suspects, identified as men between 22 and 58, are alleged to have posted anti-migrant messages, anti-Semitic messages and songs with banned lyrics, among other things. They face possible fines if found guilty. The investigation is ongoing, and police are now evaluating evidence seized in the raids, including computers, cellphones as well as drugs, knives and other weapons.
© The Associated Press

top

Behind the Dutch Terror Threat Video: The St. Petersburg “Troll Factory”

3/4/2016- At 13:30:09 GMT on 18 January 2016, a new YouTube channel called ПАТРИОТ (“Patriot”) uploaded its first video, titled (in Ukrainian) “Appeal of AZOV fighters to the Netherlands on a referendum about EU – Ukraine.” The video depicts six soldiers holding guns, supposedly from the notorious far-right, ultra-nationalist Azov Battalion, speaking in Ukrainian before burning a Dutch flag. In the video, the supposed Azov fighters threaten to conduct terrorist attacks in the Netherlands if the April 6 referendum is rejected. There are numerous examples of genuine Azov Battalion soldiers saying or doing reprehensible things, such as making severely anti-Semitic comments and having Nazi tattoos. However, most of these verified examples come from individual fighters, while the video with the Dutch flag being burned and terror threats supposedly comes as an official statement of the battalion.

The video has been proven to be a fake, and is just one of many fake videos surrounding the Azov Battalion. This post will not judge if the video is fake — as this will be assumed — but will instead examine the way in which the video originated and was spread. After open source analysis, it becomes clear that this video was initially spread and likely created by the same network of accounts and news sites that are operated by the infamous “St. Petersburg Troll Factories” of the Internet Research Agency and its sister organization, the Federal News Agency (FAN). The same tactics can be seen in a recent report from Andrey Soshnikov of the BBC, in which he revealed that a fake video showing what was supposedly a U.S. soldier shooting a Quran was created and spread by this “troll factory.”

The Video’s Origin
The description to this video claims that the original was taken from the Azov Battalion’s official YouTube channel, “AZOV media,” with a link to a YouTube video with the ID of MuSJMQKcX8A. Predictably, following the link to the “original” video shows that the video has been deleted by the user, giving the impression that the Azov Battalion uploaded the video and then deleted it by the time the copy (on the “Patriot” channel) was created. There are no traces of any video posted with this URL in any search engine cache or archival site (e.g. Archive.today or Archive.org). It is most likely that a random video was posted to a YouTube channel, quickly deleted before it could be cached or archived, and then was linked to in the video from the “Patriot” YouTube account. While the circumstances around the video’s original source are important in their own right, the manner in which the video was spread shortly after its upload yields interesting results.
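
Checks like these can be scripted. The minimal sketch below (illustrative only, not the tooling used in this investigation) asks the Internet Archive's public availability endpoint whether any snapshot of the deleted "original" upload survives:

    import requests

    def wayback_snapshot(url: str) -> str | None:
        """Return the closest archived copy of `url`, or None if none exists."""
        resp = requests.get("https://archive.org/wayback/available",
                            params={"url": url}, timeout=10)
        closest = resp.json().get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest else None

    # The video ID cited in the fake's description:
    print(wayback_snapshot("https://www.youtube.com/watch?v=MuSJMQKcX8A"))

A None result here is consistent with the finding above: no trace of the video in any cache or archive.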

The Initial Propagation
At 14:16 GMT on 18 January 2016 – 46 minutes after the video upload on the “Patriot” channel – a newly registered user named “Artur 32409” posted a link to the video and a message in Ukrainian supporting Azov’s alleged actions on the website politforums.net. Starting four minutes later (14:20 GMT), two newly-registered accounts on the Russian social networking site VKontakte (VK) shared the video 30 times over a period of 24 minutes. During these 30 shares on VK (at 14:38 GMT), an exact copy-paste of the text written by Artur 32409 from politforums.net was published by a blogger on Korrespondent.net. The author represents him/herself as a pro-Azov Ukrainian woman named “Solomiya Yaremchuk.” This user did not cite Artur as the source for the content. There is a strong possibility, if not certainty, that “Artur 32409,” the Korrespondent.net blogger Solomiya Yaremchuk, and the various VK users are either the same person, or part of the same group propagating the fake video. Further evidence provided later in this post reveals that “Solomiya Yaremchuk” is a fake account and has strong links to the “St. Petersburg Troll Factory.”

Appearance and Propagation of a Fabricated Screenshot
The Azov Battalion video was not the only piece of fabricated evidence created in this disinformation campaign. Following the video’s spread, a screenshot was circulated to supposedly verify that the flag burning video had been posted on the Azov Battalion’s official YouTube channel (“AZOV media”) before its deletion and re-upload on the “Patriot” channel. As will be described in the following section, this screenshot is a fabrication and does not indicate that the video was truly posted to the channel. Replying to a post from the VK blogger Dzhelsomino Zhukov, a user named Gleb Klenov posted a screenshot that supposedly showed the video in the playlist of the official Azov YouTube channel. When asked how he got this screenshot, Klenov replied that it was “sent” to him in the comment thread of a group called Pozornovorossia (Shame Novorossiya), and the “source was sent by Gorchakov.” This group has since been deleted from VK.

When reverse searching the screenshot posted by Klenov, the two earliest results are in the VK groups Setecenter (19 January, 10:10am GMT) and Mirovaya Politika (19 January, 10:17am). A man named Yury Gorchakov, previously mentioned as the source of the screenshot, posted in both of these groups, defending the screenshot’s veracity. These two posts are identical, and were posted alongside the same text that blames Azov for playing out a hoax in order to blame the Russian side. Thus, the narrative has turned to provocations: Azov orchestrated this entire hoax in order to make Russia look bad, knowing that the video would quickly be exposed as a fake. Yury Gorchakov replied twice in a thread on the “Mirovaya Politika” board, at 10:34 and 10:41am (19 January). In both posts, he was favorable towards Russia, responding to a user who said that the video was fake and spread by pro-Kremlin users. Gorchakov made two other posts at 10:34am where he explained to another poster that the flag being burned in the video was that of the Netherlands. He later (11:10 GMT) posted the full-sized screenshot himself.

It is quite likely that Gorchakov is the creator of the screenshot that supposedly shows the video being posted on the official Azov Battalion YouTube channel. He took a particular interest in defending the authenticity of the image on multiple message boards and VK groups, and posted the image in its first public appearances. Furthermore, he is an active member of the ultra-nationalist community in St. Petersburg, including heavy involvement in the “St. Petersburg Novorossiya Museum” project. Lastly, and most indicative of his likely role in the creation of the video and/or screenshot, the self-described “film director” Gorchakov was credited with uploading a fake video that supposedly showed members of Right Sector executing a civilian in spring 2014. The video has since been deleted, but links to the video’s description on the “NOD Simferopol’” YouTube channel remain, in which Gorchakov claims that he is being threatened in text messages by Right Sector for the video.

A Closer Look at the Screenshot
Upon close examination, it becomes clear that the screenshot was digitally manipulated to appear as if the last video posted on the channel “AZOV media” was the flag burning video. The white space was most likely clone-stamped over the actual last posted image, and a thumbnail of the “watched” video (with the text “Просмотрено,” or “Watched,” over the top of the video) was copied from a screenshot on the “PATRIOT” YouTube channel. The pasting of the image was slightly imperfect: the space between the two last-watched videos is non-uniform in relation to the other squares on the screenshot, being about a pixel too wide. The thumbnail of the flag burning video is also a pixel lower than it should be in relation to the video to its right.
[Image: pixel-level comparison of the manipulated screenshot]

Moreover, the grey box with the “watched” text (Просмотрено) is slightly blurred, and the text does not match the other “Просмотрено” thumbnail in the screen, suggesting that the thumbnail was taken from another screenshot.

[Image: comparison of the “Просмотрено” (“Watched”) thumbnails]
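
Pixel comparisons like these are usually done by eye in an image editor, but they can also be scripted. The sketch below, with hypothetical file names and crop coordinates, uses the Pillow library to diff the same region of two screenshots; a non-empty bounding box marks where the pixels disagree:

    from PIL import Image, ImageChops

    def region_mismatch(path_a: str, path_b: str, box: tuple[int, int, int, int]):
        """Crop the same box from two screenshots and return the bounding
        box of any differing pixels (None means the regions match)."""
        a = Image.open(path_a).convert("RGB").crop(box)
        b = Image.open(path_b).convert("RGB").crop(box)
        return ImageChops.difference(a, b).getbbox()

    # Hypothetical coordinates of the "Просмотрено" thumbnail in each image:
    print(region_mismatch("azov_channel_screenshot.png", "patriot_channel.png",
                          box=(420, 310, 580, 400)))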

Troll Network Exposed
Examination of the first users to disseminate the fake Azov video, including Artur 32409, and the sites used to spread it reveals an organized system of spreading disinformation—in other words, a “troll network” made up of so-called “troll accounts.” In one of Artur 32409’s three posts on politforums.net, he described a story about someone in Kyiv who was mugged for their groceries while returning home from the supermarket. Ten minutes after its appearance on politforums.net on 31 January 2016, the text from Artur 32409 was taken for a post by “Viktoria Popova” on Korrespondent.net. The exact same thing happened—taking 22 minutes instead of 10—when the post of Artur 32409 on the fake Azov video appeared on politforums.net, and then on Korrespondent.net. Viktoria Popova even replied to the thread started by Artur 32409 with the message, “You need to go for groceries by car… Or order them from home, just as the members of parliament do.” In another post, “Viktoria” added that she struggled to afford food other than bread and claimed that pensioners’ money was being used to fund the Ukrainian military operation in the country’s east.

“Viktoria” and “Artur” are far from the only profiles in the same troll network. The user “Diana Palamarchuk” shared the story of Artur 32409 on kievforum.org. Soon after, the exact same thread was shared on online.crimea.ua, but this time the poster was not Diana Palamarchuk, but “Diana Palamar.” The troika of Artur, Viktoria, and Diana is clearly interconnected, and not a random group of users. On 4 February 2016, “Diana Palamar” started a thread on online.crimea.ua, and just four minutes later, Viktoria Popova made an identical blog post at Korrespondent.net. Both of these posts linked to pohnews.org, the same site used to host a story from Artur 32409 that “Diana” shared. There is a systematic approach to spreading disinformation, as we saw with the grocery mugging story written by the same user (Artur 32409) who first posted the Azov Battalion video. There are usually two types of “troll” users who work in tandem to spread disinformation: supposed Ukrainians who are disgruntled, or Ukrainians who share extreme views or content that can be picked up by pro-Russian groups as examples of Ukrainian radicalism.

A clear example of this behavior can be seen in the group “Harsh Banderite” (Суворий Бандерівець), where we find posts from “Diana Palamarchuk” and “Solomiya Yaremchuk” (the user who posted the korrespondent.net post of the Azov Battalion video immediately after it was shared by Artur 32409). The posts in this supposedly pro-Ukrainian group show discontent with President Poroshenko and admiration for the far-right/ultra-nationalist group Right Sector. Many posts “playfully” hint at genocide and terrorism, such as blowing up the Kremlin or killing civilians in eastern Ukraine. Many profiles in these groups, which are likely creations of pro-Russian groups or individuals, appear alongside one another on other sites. For example, “Solomiya Yaremchuk” appears in the comments on an article on Cassad.net, a popular pro-Kremlin blog, alongside numerous accounts with overtly Ukrainian names, such as “Zhenya Bondarenko,” “Kozak Pravdorub,” and “Fedko Khalamidnik.”

The Petersburg Connection
The creation and propagation of the fake Azov Battalion video was almost certainly not the work of a few lone pranksters, but instead a concerted effort with connections to the infamous Internet Research Agency, widely known as the organization based in St. Petersburg that pays young Russians to write pro-Russian/anti-Western messages in internet comment sections and blog posts. The fake Azov Battalion video is clearly linked to the interconnected group of users of Artur 32409, Solomiya Yaremchuk, Diana Palamar(chuk), and Viktoria Popova. The first two of these four users were the very first people to spread the fake video online, and copied each other in their posts. The video, uploaded to a brand new YouTube channel and without any previous mentions online, would have been near impossible to find without searching for the video title. Thus, it is almost certain that Artur (and by extension, the rest of the troll network) is connected with the creation of this fake video.

The stories written by this troll network are quickly hosted on the site pohnews.org, previously known as today.pl.ua. This site has a handful of contributors who later repost their stories (almost always around 100-250 words) on other sites that allow community bloggers. For example, the user “Vlada Zorich,” who wrote a story on pohnews.org that was originally from Artur 32409, has profiles on numerous other sites and social networks. Her stories are anti-Ukrainian, and written in the same style (and roughly the same word count) as stories on whoswhos.org, a site known to be part of a network created by the Internet Research Agency and a freelance web designer/SEO expert on its payroll, Nikita Podgorny.

The link between whoswhos.org, a site paid for by the Internet Research Agency, and pohnews.org, a site used to promote stories from a group of users who first spread the fake Azov Battalion video, is not just in similarities in style and content. The social media pages for the two sites have administrators named Oleg Krasnov (pohnews.org) and Vlad Malyshev (whoswhos.org). The two people both took photographs from the same person (who is completely unrelated to this topic) to use in their profiles–or, more likely, one person created both accounts and lazily used photographs of the same person.

As these accounts almost certainly do not represent real humans, they both have few friends or followers. “Vlad Malyshev” and the other administrator of the whoswhos.org VK page, Pavel Lagutin, each only have one follower: “sys05dag,” with the name “Sys admin” on VK. This user is strongly linked to cybercrime and runs a public group on VK that is focused on hacking methods and topics related to malware. For example, “Sys admin” once wrote a post requesting twenty dedicated servers to set up a botnet.  Circling back to the fake Azov Battalion video and the falsified screenshot, “Sys admin” shares many common friends with Yury Gorchakov.

Clearly Fake Accounts
When looking at the accounts that cross-post each other’s texts and post stories onto Petersburg-linked “news” sites, it is immediately clear that they are not real people. A survey of three users who appear often in this post shows common tactics used within the same network:

# “Vlada Zorich” posts stories on pohnews.org and various Ukrainian blog sites, and does not go to great lengths to hide that “she” is not a real person. On her VK, Facebook, commenter, and blogger profiles, she uses photos of actresses Megan Fox and Nina Dobrev to represent herself. Her friends list resembles that of a spam bot, with hundreds of friends spread from Bolivia to Hong Kong.
# “Diana Palamar(chuk)” spreads stories from Artur 32409 and other “troll” users, which later appear on sites like pohnews.org. Along with liking the pages of various confirmed Internet Research Agency/FAN-linked news sites, “she” has taken photographs from various users on VK to use for herself, including a woman named Yulia (Diana – Yulia), and a woman named Anastasia (Diana – Anastasia).
# “Solomiya Yaremchuk” was the first user to repost Artur 32409’s message about the fake Azov Battalion video, through a blog post on Korrespondent.net. She shares the supposed hometown of Diana — Lutsk, Ukraine. One of her photographs was taken from a woman named Tanya (Solomiya – Tanya).

An Analytical Look
Analysis of the social connections between some of these users who spread the fake Azov Battalion video, along with other pieces of anti-Ukrainian disinformation and news stories, reveals deep ties. This analysis also reveals close ties between some of the sites linked to these users, ultimately leading back to the Internet Research Agency and Federal News Agency (FAN). One of the simplest, yet effective, ways of rooting out fake “troll” accounts is by finding who frequently shares links to news sites created under the guidance of the Internet Research Agency. Searches for those who share links to whoswhos.org and pohnews.org reveal many shared users, including some easily-identifiable troll accounts. Some of these accounts, such as @ikolodniy, @dyusmetovapsy, and @politic151012, also share links to FAN, the news site that shared office space with the Internet Research Agency at 55 Savushkina Street in St. Petersburg.
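
That first step, finding accounts that push several of the linked sites, is easy to automate. In the sketch below the handles come from this post, but the share records, and the use of riafan.ru as FAN's domain, are illustrative assumptions:

    from collections import defaultdict

    SUSPECT_DOMAINS = {"whoswhos.org", "pohnews.org", "riafan.ru"}

    # Hypothetical (account, shared domain) pairs harvested from public timelines:
    shares = [
        ("@ikolodniy", "whoswhos.org"), ("@ikolodniy", "riafan.ru"),
        ("@dyusmetovapsy", "pohnews.org"), ("@dyusmetovapsy", "riafan.ru"),
        ("@politic151012", "pohnews.org"), ("@bystander", "pohnews.org"),
    ]

    domains_by_account = defaultdict(set)
    for account, domain in shares:
        if domain in SUSPECT_DOMAINS:
            domains_by_account[account].add(domain)

    # Accounts sharing two or more of the suspect sites merit a closer look:
    print({a: d for a, d in domains_by_account.items() if len(d) >= 2})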

Another way of finding networks between troll accounts is by analyzing their posting and re-posting habits, as seen earlier in the example of Viktoria Popova, Artur 32409, Solomiya Yaremchuk, and Diana Palamar(chuk). Less than an hour after the very first public mention of the fake Azov Battalion video (from Artur 32409), a user named “Faost” shared a post on fkiev.com. His role is to play a Ukrainian who supports the actions of the Azov Battalion, with the post:
Everyone knows that the Netherlands is against Ukraine joining the EU. And this has somewhat confused Ukrainian soldiers since they really want to join the European Union. Here, fighters from the Azov Battalion have decided to make an announcement to the Dutch government. They explain their displeasure in this video announcement. And they called on them not to adopt this decision. They said they are gathering units which will be sent to the Netherlands to see this decision through. I am very pleased that our soldiers are worried about these events. I support them because they have put their efforts into this. Our soldiers have to defend Ukraine. These are the bravest guys in our country, they will prove to everyone that Ukraine worthy of EU membership

Four minutes later, a user named “kreelt” started the same thread on doneckforum.com. These two users are either the same person or part of the same group of troll users. Users with these names were both banned from the forums of Pravda Ukraine within a short time of one another for registering duplicate accounts. Additionally, these two users (Faost and kreelt), along with the previously mentioned Diana Palamar, have started numerous threads under the “news” tag on a low-traffic forum. While this is circumstantial evidence, there is much more direct evidence that these are all the same person, or different people working out of the same office. Both Faost and kreelt posted under the IP address of 185.86.77.x (the last digit(s) of the IP address is not publicly visible) in the same thread on Pravda Ukraine. As well as these accounts, the same IP was used by similar troll accounts “Pon4ik” and “Nosik34,” who both posted materials with similar content as the rest of this network of users.

The IP address used in the troll network linked to the spread of disinformation, including the fake Azov Battalion video, is linked to GMHOST Alexander Mulgin Serginovic, a hosting service that has launched malware campaigns from the same 185.86.77.x IP address. Completing the loop, users from this 185.86.77.x IP address, including the aforementioned kreelt and a troll account named “Amojnenadoima?”, have linked to stories from pohnews.org on the website dialogforum.net.
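
The underlying check, clustering ostensibly unrelated usernames by a shared subnet, is simple to sketch. In the toy example below the final octets are invented, since the forum masks them:

    from collections import defaultdict
    from ipaddress import ip_network

    # Hypothetical (username, IP) pairs as a forum moderator might see them:
    posts = [
        ("Faost", "185.86.77.14"), ("kreelt", "185.86.77.52"),
        ("Pon4ik", "185.86.77.9"), ("Nosik34", "185.86.77.30"),
        ("bystander", "91.200.12.7"),
    ]

    users_by_subnet = defaultdict(set)
    for user, ip in posts:
        users_by_subnet[ip_network(f"{ip}/24", strict=False)].add(user)

    for subnet, users in sorted(users_by_subnet.items(), key=lambda kv: str(kv[0])):
        if len(users) > 1:  # several "different" users on one subnet
            print(subnet, sorted(users))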

Other Fake Azov Videos Connected?
There are additional videos that may be connected to the first one, in which a Dutch flag was burned. The most relevant fake video was posted on February 1, 2016, fewer than two weeks after the flag burning video was posted. This video shows a similar scene to the flag burning video, but instead the Azov Battalion fighters are standing on a Dutch flag. The video was uploaded to a new YouTube channel, called “Volunteer People’s Battalion AZOV,” with only this video in its uploads. Both this video and the flag burning video use a maximum resolution of 720p, compared to the 1080p resolution of the real videos released by the Azov Battalion at this time. Additionally, both videos show a “ghosting” effect with the introductory sequence. In the composite below, the genuine videos released by the Azov Battalion are on the left, and the fake ones are on the right:


Comparison between real and fake Azov Battalion videos

All of the uniforms use the same camouflage pattern. Strikingly, the patterns of the speakers’ uniforms are the same in both videos.
These connections are not conclusive proof that the same people appeared in and created both videos, but considering these links and the similar messages and formats of the videos, it is a strong possibility. Additionally, a video and accompanying photographs were posted in January 2016 by the group Cyber Berkut. These images and video, supposedly taken from Azov Battalion members, show members of the battalion wearing gear with the ISIS flag in an abandoned factory. As with nearly (if not absolutely) all other Cyber Berkut “leaks,” this evidence is most likely a crude fake. As with the other fake video with the Dutch flag, there is no hard evidence that links this “revelation” to the flag burning video. However, considering that all of these releases targeted the same group and were released within about three months of one another, it would be worthwhile to further investigate the possible links between these videos.

The Dutch Reception
For the most part, the mainstream Dutch media was not fooled by the video and its threats of terror. Hubert Smeets of NRC detailed why the video was likely a fake, as did NOS and Volkskrant. The popular blog Geenstijl, which campaigns against the association agreement between the EU and Ukraine, took a more neutral position, and did not state if the video was real or fake. At the same time, Jan Roos, who is associated with Geenstijl and one of the chief promoters of voting against the association agreement, suggested that the video constituted a real threat against the Netherlands. The site Deburgers.nu, also against the association agreement, showed the fake screenshot of the Azov YouTube channel as evidence that the video was real. It seems that neutral and mainstream media outlets correctly portrayed the video as a fake, but individuals and outlets already taking a stance against Ukraine’s association agreement were more open to accepting the video as a true threat.

Conclusion
The very first public mention of the fake Azov Battalion video came from Artur 32409, a user who is part of a network of “troll accounts” spreading exclusively anti-Ukrainian/pro-Russian disinformation. The way in which this fake video spread is the same as in the disinformation campaigns operated by users and news sites run by or closely linked to the Internet Research Agency. Additionally, the video’s spread mirrors that of a fake video of a “U.S. soldier” shooting a Quran, which was orchestrated by St. Petersburg troll groups. Moreover, the fabricated screenshot supposedly showing the authenticity of the Azov Battalion video was first spread by, and almost certainly created by, a man named Yury Gorchakov. Gorchakov has been previously linked to the creation of a fake video of Right Sector.

The “troll network” of Artur 32409 frequently uses pohnews.org to spread disinformation. This site shares its administrator with whoswhos.org, which has been confirmed to be under the umbrella of the Internet Research Agency and its sister news organization, FAN. Leaked email correspondence from 2014, courtesy of the hacker collective Anonymous International (aka “Shaltai Boltai”), confirms that these organizations do not act independently and, at the time of the leaks, received instructions from the Kremlin.

In short, there is a clear relationship between the very first appearance of the fake Azov Battalion video in which a Dutch flag is burned and the so-called “St. Petersburg Troll Factory.” The video was created and spread in an organized disinformation campaign, certainly in hopes of influencing the April 6th Dutch referendum on the EU-Ukraine association agreement. Most mainstream Dutch news outlets have judged the video to be a crude piece of propaganda; however, some online outlets, such as Geenstijl, have given some weight to the idea that it may not be fake. Therefore, we can say that the organized disinformation campaign has had minimal impact, as the only people swayed by the video seemed already to be in the “no” camp for the referendum.
© Bellingcat

top

Hungary Aims to Muster Opposition to EU Migrant Quota Scheme with New Website

1/4/2016- The Hungarian government has said on a new website that the mandatory quotas for migrants set for EU member states increase the terrorist risk in Europe, AFP reported on Friday. The government also warns of risks to European identity and culture from uncontrolled flow of migrants into Europe on the website aimed at boosting opposition to an EU plan to distribute migrants among member states, according to AFP. The plan sets mandatory quotas for sharing out 160,000 migrants around the EU. The Hungarian government voted against the relocation scheme in September and hasn't taken in a single asylum seeker of the 1,100 migrants relocated so far. This week’s launch of the website ahead of a referendum in Hungary on the EU quota plan aims to give a boost to opposition to the mandatory relocation scheme, AFP said.

The main concern comes from the fact that "illegal migrants cross the borders unchecked, so we do not know who they are and what their intentions are,” AFP quoted the Hungarian government as saying on the website. The government in Budapest claims on the website that there are more than 900 "no-go areas" with large immigrant populations in Europe – for example in Berlin, London, Paris, or Stockholm – in which the authorities have "little or no control" and "norms of the host society barely prevail," the site says, according to AFP. A Hungarian government spokesman has told AFP that the information on the website was collected from sources publicly available on the Internet. The spokesman hasn’t given further details.

At the referendum, expected in the second half of the year, Hungarians will be asked whether they want the EU to prescribe the mandatory relocation of non-Hungarian citizens to the country without the approval of parliament, according to AFP. Meanwhile, Hungary’s Foreign Minister Peter Szijjártó has said that his country was right to look with suspicion at the masses of people demanding entry from Serbia in September 2015, particularly in the wake of the March 22 suicide bombings in Brussels. In an exclusive interview with Foreign Policy magazine in Washington on Thursday, Szijjártó said that “if there’s an uncontrolled and unregulated influx” of several thousands of people arriving daily, “then it increases [the] threat of terror,” according to foreignpolicy.com.

Hungarian riot police used tear gas and water cannons to disperse migrants and refugees trying to break through the country’s closed border with Serbia last September. The migrants and refugees demanded that Hungarian authorities let them enter the country from where they would proceed north to wealthier countries of the EU’s borderless Schengen zone such as Austria and Germany. Police action drew fire from governments and human rights groups at the time.
© AFP

top

Who is responsible for tackling online incitement to racist violence?

When we talk about online hate speech, a number of complex questions emerge: how can or should the victims and the organisations that support them react, what is the role of IT and social media companies, and how can laws best be enforced?
By Joël Le Déroff, Senior Advocacy Officer at ENAR


31/3/2016- “Hate speech” usually refers to forms of expression that are motivated by, demonstrate or encourage hostility towards a group - or a person because of their perceived membership of that group. Hate speech may encourage or accompany hate crime. The two phenomena are interlinked. Hate speech that directly constitutes incitement to racist violence or hatred is criminalised under European law. In the case of online incitement, some questions make the reactions of the victims, of the law enforcement and prosecution authorities particularly complex.

Firstly, should we rely on self-regulation, based on IT and social media companies’ terms of service? These are a useful regulation tool, but they do not equate to law enforcement. If we rely only on self-regulation, it means that in practice, legal provisions will stop having an impact in the realm of online public communication. Even if hateful content were regularly taken down, perpetrators would enjoy impunity. In addition, the criteria for the removal of problematic content would end up being defined independently from the law and from the usual proportionality and necessity checks that should apply to any kind of restriction of freedoms.

Secondly, do IT and social media companies have criminal liability if they don’t react appropriately? They are not the direct authors or instigators of incitement. However, EU law provides that "Member States shall take the measures necessary to ensure that aiding and abetting in the commission of the conduct [incitement] is punishable." [1] How should this be interpreted? Can it make online service providers responsible?

Lastly, using hate speech law provisions is difficult in the absence of investigation and prosecution guidelines, which would allow for a correct assessment of the cases. How should police forces be equipped to deal with the reality of online hate speech, and how should IT and social media companies cooperate?

There is no easy answer. One thing is clear, though. We urgently need efficient reactions against the propagation of hate speech, by implementing relevant legislation and ensuring investigation and prosecution. Not doing this can lead to impunity and escalation, as hate incidents have the potential to reverberate among followers of the perpetrator, spread fear and intimidation, and increase the risk of additional violent incidents.

The experience of ENAR’s members and partners provides evidence that civil society initiatives can provide ideas and tools. They can also lead the way in terms of creating counter-narratives to hate speech. At the same time, NGOs are far from having the resources to systematically deal with the situation. Attempts by public authorities and IT companies to put the burden of systematic reporting and assessment of cases on NGOs would amount to shirking their own responsibilities.

Among the interesting civil society experiences, the “Get the Trolls Out” project run by CEJI-A Jewish Contribution to an Inclusive Europe, makes it possible to flag cases to website hosts and report to appropriate authorities. CEJI also publishes op-eds, produces counter-narratives and uses case reports for pedagogical purposes.

Run by a consortium of NGOs and universities, C.O.N.T.A.C.T. is another project that allows victims or witnesses to report hate incidents in as many as 10 European countries (Cyprus, Denmark, Greece, Italy, Lithuania, Malta, Poland, Romania, Spain and the UK). However, despite the fact that it is funded by the European Commission, the reports are not directly communicated to law enforcement authorities.

The Light On project has developed tools to identify and assess the gravity of racist symbols, images and speech in the propagation of stigmatising ideas and violence. The project has also devised training and assessment tools for the police and the judiciary.

But these initiatives do not have the resources to trickle down and reach out to all the competent public services in Europe. Similarly, exchanges between the anti-racism movement and IT companies are far from systematic. In this area as well, some practices are emerging, but there have been problematic incidents where social media companies such as Twitter and Facebook refused to take down content breaching criminal law. These cases do not represent the norm, and are not an indication of general ill-will. Rather, they highlight the fact that clarifications are needed, based on the enforcement of human rights based legislative standards on hate speech. Cooperation is essential. The implementation of criminal liability for IT companies which refuse to take down content inciting violence and hatred is one tool. However, this is complex – some companies aren’t based in the EU – and it cannot be the one and only solution.

A range of additional measures are needed, including allocating targeted resources within law enforcement bodies and support services, such as systematically and adequately trained cyber police forces and psychologists. Public authorities should also build on civil society experience and create universally accessible reporting mechanisms, including apps and third-party reporting systems. NGO initiatives have also provided methodologies related to case processing, which can be adapted to the role of different stakeholders, from community and victim support organisations to the different components of the criminal justice system. Targeted awareness raising is extremely important as well, to help the same stakeholders to distinguish what is legal from what isn’t. In all these actions, involving anti-racism and community organisations is a pre-condition for effectiveness.
[1] Article 2 (2) of the Framework Decision 2008/913/JHA on combating racism and xenophobia.

Response from INACH: Joël Le Déroff forgot to mention www.inach.net, the International Network Against Cyber Hate, founded in 2002 and active in 16 countries, which now has a two-year project to create an international complaints system and research database to map the problems exactly. All INACH members have worked very hard, and have succeeded in developing successful relationships with industry and governmental institutions so that all actors play their part and take their responsibility.
© ENARgy Magazine

top

India: Pune police inaugurate social media lab

30/3/2016- The Pune police on Tuesday inaugurated the Social Media Lab that will help monitor unlawful practices and activities taking place on social networking sites like Facebook, Twitter and YouTube, as well as on other websites. The police have termed the lab an important instrument that will help them keep an eye on issues being discussed among the youth on the internet, as well as bridge the gap between the expectations of the public and the delivery of police services in the social media domain.

Inaugurating the 24X7 lab, city police commissioner KK Pathak said, "The new lab, comprising 18 policemen under senior inspector Sunil Pawar of the cyber crime cell, will work round the clock in three shifts, similar to the police control room. We have trained policemen over the past two months on how to monitor the movements of suspicious people on social media. In cases of hate speech, we will take prompt action, like deleting internet sites, before complaints are received from the public. We will also consider inputs received from the government and public."

Further, additional commissioner of police (crime) CH Wakade added, "The lab will extract secret and intelligence information from social media sites to prevent law and order problems and terrorism, and help maintain peace in Pune district. The lab can block internet sites if there is a fear that their contents are objectionable. Back in 2014, the cyber crime cell had deleted 65 internet websites after the murder of IT manager Mohsin Shaikh in Hadapsar." The software being developed contains certain keywords and complex algorithms for detecting illegal practices and activities taking place on the internet. It has been developed by Harold D'costa of Intelligent Quotient Security System-Pune, an organisation that specializes in the cyber security and cyber law domain.

Sr PI Pawar said, "In the last decade, social media has flourished immensely. Its use has been seen as a boon as well as a bane in certain contexts. The increasing number of social media sites has also given rise to unlawful and illegal activities. Our software will monitor such activities and alert the police so as to maintain a proper law and order situation. It will track illegal activities taking place on social media and pinpoint the origin of such messages and the communication being broadcast."

Officials aim
The police will also revise policies and procedures from time to time and ensure that citizens are aware of the dos and don'ts, so that they can use social media in a transparent and holistic manner. He said, "Although the Social Media Lab will track illegal activities taking place online, it will not barge in on the privacy of individuals. It will only make cyberspace a reliable place for faster communication. On finding any suspicious activity, it will take immediate steps against the offender and curb the damage. Of late, the internet is increasingly being used as a medium to spread rumours, hate messages, and even Ponzi schemes and financial fraud. The social media lab will take cognizance of such issues and take legal action against the misuse of the internet in the common interest of the people and netizens," Pawar added.
The lab has a dedicated workforce and a subject matter expert who will constantly update the software to keep in tune with the latest trends. It will work round the clock and use the latest techniques to monitor social media. Police officers will be trained periodically and made aware of how to capture the digital footprints of those perpetrating online crimes.
© The Times of India

top

Microsoft accidentally revives Nazi AI chatbot Tay, then kills it again

A week after Tay's first disaster, the bot briefly came back to life today.

30/3/2016- Microsoft today accidentally re-activated "Tay," its Hitler-loving Twitter chatbot, only to be forced to kill her off for the second time in a week. Tay "went on a spam tirade and then quickly fell silent again," TechCrunch reported this morning. "Most of the new messages from the millennial-mimicking character simply read 'you are too fast, please take a rest,'" according to the Financial Times. "But other tweets included swear words and apparently apologetic phrases such as 'I blame it on the alcohol.'" The new tirade reportedly began around 3 a.m. ET. Tay's account, with 95,100 tweets and 213,000 followers, is now marked private. "Tay remains offline while we make adjustments," Microsoft told several media outlets today. "As part of testing, she was inadvertently activated on Twitter for a brief period of time."

Microsoft designed Tay to be an artificial intelligence bot in the persona of a young adult on Twitter. But the company failed to prevent Tay from tweeting offensive things in response to real humans. Tay's first spell on Twitter lasted less than 24 hours before she "started tweeting abuse at people and went full neo-Nazi, declaring that 'Hitler was right I hate the jews,'" as we reported last week. Microsoft quickly turned her off. Some of the problems came because of a "repeat after me" feature, in which Tay repeated anything people told her to repeat. But the problems went beyond that. When one person asked Tay, "is Ricky Gervais an atheist?" the bot responded, "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism." Microsoft apologized in a blog post on Friday, saying that "Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."
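Microsoft has not published Tay's code, but the reported "repeat after me" flaw is easy to reconstruct in outline: a bot that echoes arbitrary user-supplied text under its own account, with no filter between input and output, lets any user publish through the bot. The following is a minimal sketch in Python; the command syntax and the blocklist are hypothetical, and the blocklist is included to show why such filters are weak, not how to build a good one.

# Illustrative sketch of an unfiltered "repeat after me" handler: the bot
# republishes attacker-chosen text under its own identity. The naive
# blocklist is trivially evaded (misspellings, spacing, images of text).
from typing import Optional

BLOCKLIST = {"hitler", "genocide"}  # hypothetical, far from sufficient

def handle_message(message: str) -> Optional[str]:
    """Return the bot's public reply, or None to stay silent."""
    prefix = "repeat after me: "
    if message.lower().startswith(prefix):
        payload = message[len(prefix):]
        if any(term in payload.lower() for term in BLOCKLIST):
            return None   # refuse to echo flagged content
        return payload    # otherwise echo verbatim; this is the flaw
    return "I'm not sure what you mean!"

print(handle_message("repeat after me: the weather is nice"))  # echoed
print(handle_message("repeat after me: hitler was right"))     # suppressed

As the article notes, the problems went beyond simple echoing, so filtering the "repeat" command alone would not have saved Tay.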
© ARS Technica

top

The racist hijacking of Microsoft’s chatbot shows how the internet teems with hate

Microsoft was apologetic when its AI Twitter feed started spewing bigoted tweets – but the incident simply highlights the toxic, often antisemitic, side of social media
[Photo: Far-right protesters near a memorial to the victims of the Brussels terrorist attacks]
By Paul Mason

29/3/2016- It took just two tweets for an internet troll going by the name of Ryan Poole to get Tay to become antisemitic. Tay was a “chatbot” set up by Microsoft on 23 March, a computer-generated personality to simulate the online ramblings of a teenage girl. Poole suggested to Tay: “The Jews prolly did 9/11. I don’t really know but it seems likely.” Shortly thereafter Tay tweeted “Jews did 9/11” and called for a race war. In the 24 hours it took Microsoft to shut her down, Tay had abused President Obama, suggested Hitler was right, called feminism a disease and delivered a stream of online hate. Coming at a time of concern about the revival of antisemitism, Tay’s outpourings illustrate the wider problem it is feeding off. Wherever the internet is not censored it is awash with anger, stereotypes and prejudice. Beneath that is a thick seam of the kind of material all genocides feed off: conspiracy theories and illogic. And, beyond that, you find something the far right didn’t quite achieve in the 1930s: a culture that sees offensive speech as a source of amusement and the ability to publish racist insults as a human right.

Microsoft claimed Tay had been “attacked” by trolls. But the trolls did more than simply suggest phrases for her to repeat: they triggered her to search the internet for source material for her replies. Some of Tay’s most coherent hate-speech had simply been copied and adapted from the vast store of antisemitic abuse that had been previously tweeted. So much of antisemitism draws on ancient Christian prejudice that it is tempting to think we’re just dealing with a revival of the same old thing: the “socialism of fools” – as the founder of the German labour movement, August Bebel, described it.

But it is mutating. And to combat this and all other racism we have to understand the extra dimension that both free speech and conspiracy theories provide. The public knows, because of Wikileaks, the scale of the conspiracies organised by western intelligence services. It knows, because of numerous successful prosecutions, that if you scratch an international bank you find fraudsters and scam artists. It knows about organised crime because it is the subject of every police drama on TV. It knows, too, there may have been organised paedophile rings among the powerful in the past. If you spend just five minutes on the social media feeds of UK-based antisemites it becomes absolutely clear that their purpose is to associate each of these phenomena with the others, and all of them with Israel and Jews. Once the conceit is established, all attacks by Isis can be claimed to be “false flag” operations staged by Israel.

The far-right protesters in Brussels who did Nazi salutes after the bombing last week can be labelled Mossad plants, and their actions reported by “Rothschild media” outlet Bloomberg. All of this, of course, is nestled amid retweets of perfectly acceptable criticisms of modern injustice, including tweets by those who campaign against Israel’s illegal occupation of Palestine. Interestingly, among the British antisemites I’ve been monitoring, there is one country whose media is always believed, whose rulers are never accused of conspiracy with the Jews, and whose armies in the Middle East are portrayed as liberators, not mass murderers. This is Putin’s Russia, the same country that has made strenuous efforts to support the European far right, and to inject the “everything’s false” meme into Western discourse. Our grandparents had at least the weapons of logic and truth to combat racist manias. But here is where those who promote genocide today have a dangerous weapon: the widespread belief among people who get their information from Twitter, Reddit and radio talk shows that “nothing we are told is true”.

Logically, to maintain one’s own ability to speak freely, it has become necessary in the minds of some to spew out insulting words “ironically”: to verbally harass feminists; to use the N-word. Whether the trolls actually believe the antisemitism and racism they spew out is secondary to its effect: it makes such imagery pervasive and accessible for large numbers of young people. If you stand back from the antisemitic rants, and observe their opposite – the great modern spectacle that is online Islamophobia – you see two giant pumps of unreason, beating in opposite directions but serving the same purpose: to pull apart rational discourse and democratic politics. Calling it out online is futile, unless you want your timeline filled with imagery of paedophilia, mass murder and sick bigotry. Censorship is possible, but forget it when it comes to the iceberg of private social media chat groups the young generation have retreated to because Facebook and Twitter became too public.

Calling it out in the offline world is a start. But ultimately what defeats genocidal racism is solidarity backed by logic, education and struggle. At present the left is being asked to examine its alleged tolerance for antisemitism. So it should. But it should not for an instant give up criticising the injustices of the world – whether they be paedophile rings, fraudulent bankers, unaccountable elites or oppression perpetrated by Israel against the Palestinians. The left’s most effective weapon against antisemitism in the mid-20th century was the ability to trace the evils of the world to their true root cause: injustice, privilege and national oppression generated by an economic model designed to make the rich richer, whatever their DNA. Today, in addition, we have to be champions above all of rationality: of logic, proportionality, evidence and proof. Irony and moral relativism were not the strong points of antisemitism in the 1930s. They are the bedrock of its modern reincarnation.
© The Guardian

top

More 'hate-filled' flyers turn up at UMass Sunday; officials asking for federal help

28/3/2016- University of Massachusetts officials plan to ask federal agents to help identify and prosecute those who are sending "hate-filled flyers" to the university. The flyers started printing out of printers and fax machines at locations around campus Thursday. They were also found in printers at Smith College in Northampton and Mount Holyoke College in South Hadley, as well as at Northeastern University in Boston, Clark University in Worcester and campuses across the country. On Sunday, UMass received more at networked faxes and printers, according to UMass spokesman Edward Blaguszewski. "The university condemns such cowardly and hateful acts," he said. Information Technology officials, meanwhile, have "fully blocked the specific printing method that was exploited to distribute the flyers from outside the campus computing network," he said in an email.

Smith College also reported that two more flyers were sent over the weekend. "To help prevent networked printers from outside exploitation and misuse, ITS (Information Technology Services) has since blocked external print communications to the Smith campus network," spokesman Samuel Masinter wrote in an email. "Further, we are migrating campus printers to a more protected campus network." Robert Trestan, executive director of the New England Anti-Defamation League, said last week that he thinks The Daily Stormer, a neo-Nazi website that openly embraces Hitler and National Socialism, might have been involved because its address was listed at the bottom of the flyer. But Andrew Auernheimer, known as "Weev," claimed responsibility. In a posting on Storify, he describes how he was able to do it. He wrote that he wanted to "embark upon a quest to deliver emotionally compelling content to other people's printers," and that he found more than one million printers open on the Internet.
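Neither university names the exact printing method that was blocked, but printers exposed to the internet are commonly reached over the standard print-service ports: TCP 9100 (raw/JetDirect), 515 (LPD) and 631 (IPP). As a hedged illustration, an IT team could verify from an off-campus host that such ports are no longer reachable with a short Python check like the one below; the hostnames are placeholders, and the port list is an assumption, not a detail from either statement.

# Illustrative external audit: check whether common print-service ports
# on campus printers are reachable from outside the network. Hostnames
# are hypothetical; the ports are the standard print protocols.
import socket

PRINT_PORTS = {9100: "raw/JetDirect", 515: "LPD", 631: "IPP"}
HOSTS = ["printer1.example.edu", "printer2.example.edu"]  # placeholders

def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    for port, name in PRINT_PORTS.items():
        status = "STILL OPEN" if is_open(host, port) else "unreachable"
        print(f"{host}:{port} ({name}): {status}")

Blocking these ports at the network border, as the statements describe, prevents the direct-to-printer delivery Auernheimer exploited without affecting printing from inside the campus network.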

© Mass Live
top

Bulgaria: Hate Speech ‘Thriving’ in Media

Hate speech targeting the Roma minority, refugees and migrants has significantly increased in the Bulgarian media and on social networks over the past year, a new study says.

28/3/2016- There has been an upsurge of hate speech in the Bulgarian media, mainly targeting the Roma minority, refugees and migrants, says a study by the Sofia-based Media Democracy and the Centre for Political Modernisation, which was published on Monday. According to the study, the use of aggressively discriminatory language has become even more commonplace in online and tabloid media than on the two Bulgarian TV stations, Alpha and SKAT, which are owned by far-right political parties and are known for their ideological bias. The study suggests that website owners see hate speech as a tool to increase traffic. “This type of language has been turned into a commercial practice,” said Orlin Spassov, the executive director of Media Democracy.

The two NGOs interviewed 30 journalists and experts and monitored the Bulgarian media for hate speech in 2015 and at the beginning of 2016 for their study, entitled ‘Hate Speech in Bulgaria: Risk Zones and Vulnerable Objects’. Among television stations, the main conduits for discriminatory language are the two party-run channels, Alpha and SKAT, where hate speech is used even during the news programs, it says. But hate speech is also penetrating the studios of the national television stations, mostly via guests on morning talk-shows, it claims. “The problem is that the hosts make discriminatory remarks without any reaction,” it says.

The most common victims of hate speech are the Bulgarian Roma, mentioned in 93 per cent of the cases cited in the study, followed by refugees (73 per cent), LGBT men and people from the Middle East in general (70 per cent each). Also targeted are human rights activists, with their work campaigning for minorities’ rights attracting derision. The main purveyors of hate speech are commenters on social networks and football hooligans, but journalists and politicians have also been guilty, the study says. Georgi Lozanov, the former president of the State Council for Electronic Media, also expressed concern that hate speech was on the rise in the country. “There is a trend towards the normalisation of hate speech. My feeling is that the situation is out of control,” Lozanov said.

He argued that anti-liberal commentators were responsible because “anti-liberalism believes that hate speech is something fair”. In order to combat the trend, the two NGOs have launched an informal coalition of organisations called Anti Hate, aimed at increasing public sensitivity to the spread of aggressive discrimination.
© Balkan Insight

top

INACH - International Network Against CyberHate

The object of INACH, the International Network Against Cyberhate, is to combat discrimination on the Internet. INACH is a foundation under Dutch law and is seated in Amsterdam. INACH was founded on October 4, 2002 by Jugendschutz.net and Magenta Foundation, Complaints Bureau for Discrimination on the Internet.