- UK: You've got hate mail: how Islamophobia takes root online
- UK: Eastwood blogger Simon Tomlin guilty of harassment
- USA: Gay Slur Removed From Google Maps
- Hate Crimes in Cyberspace, by Danielle Keats Citron (book review)
- Ireland: New legislation needed to stop online trolls
- Malta: Website helps report racism
- Gaza war caused explosion of online hate speech in Europe, report finds
- 'Facebook Murder' - Should Crimes Using Social Networks Get Their Own Category?
- Australia: Racist posts on Facebook - how should you respond?
- USA: Law Enforcement Increasingly Reliant on Social Media
- USA: Court Agrees to Reconsider Decision Over Benghazi-Linked Anti-Islam Video
- Racism in Canada finds fertile ground online
- Does Twitter have a secret weapon for silencing trolls?
- UK PM Cameron says Internet must not 'be an ungoverned space'
- UK: How can football tackle the social media hate merchants?
- Disturbing Trend: Pro-Palestinians Promoting Car Intifada on Social Media
- Hit game Clash of Clans allows opportunity for anti-Semitism
- UK: Ed Miliband demands zero-tolerance approach to antisemitism
- Canada: B.C. minor hockey coach fired over pro-Nazi Facebook posts
- Why terrorists and far-Right extremists will always be early adopters
- Kremlin Attack on Russian Website for Nazi List of Wealthy Jews Meets Skeptical Response
- Australia: How Facebook decides what to take down
- Why online Islamophobia is difficult to stop
- Social Networks Bringing People Together like Never Before (opinion)
- 'It's hard being openly Jewish'
- Hungary's 'internet tax' sparks protests
- Mob Violence Has No Place in Ireland (press statement)
- UK: J. Mann MP: Berger abuse reveals failure to curb racism on Twitter (opinion)
- UK: PDMS technology powers innovative new website for UK police
- UK: Far Right on Facebook - The group with more likes than all three main parties
- UK: Neo-Nazi gave out internet abuse tips in campaign against MP Berger
- UK: Silencing extreme views, even if they are those of internet trolls, is wrong (opinion)
- Hate Speech Is Drowning Reddit and No One Can Stop It
- Facebook re-invents the 1990s chat room with Rooms iPhone app
- After Twitter ruling, tech firms increasingly toe Europe's line on hate speech
- Czech authorities alarmingly unwilling to prosecute online hate crimes
- Poland: Team behind Hatred lashes out in blog post, thanks press for attention
- Who Has the Right to Be Forgotten on the Internet?
- UK: Britain First 'tricks' Facebook users with Lynda Bellingham post
- British man gets jail time for sending lawmaker anti-Semitic tweet
- U.K. Seeks Help From Tech Firms in Combating Extremists Online
- United against Salafism, right-wing scene surges in Germany
- Italy: Online racial discrimination on the rise
- Web retailers accused of selling Nazi-related paraphernalia
- UK: Social media should not descend into a tool for far-right (opinion)
- UK: Jewish student union uses new media to fight oldest hatred
- Dutch government pressures ISPs to remove 'jihadic' web content
- Freedom of expression complicates EU law on 'right to be forgotten'
- Czech Republic: Neo-Nazis hack websites of human rights NGOs
- EU hosts anti-extremist tech meeting
- Just Because a Hate Crime Occurs on Internet Doesn't Mean It's Not a Hate Crime (opinion)
- Northern Ireland: Facebook: A Breeding Ground For Racism (opinion)
- Germany: Right-wing extremism on the internet (annual report 2013 Jugendschutz.net)
- USA: Supreme Court To Weigh Facebook Threats, Religious Freedom, Discrimination
- South Africa: Vicious tweets scare Jewish community
- It's time Facebook repents (opinion)
- The right to be forgotten; Drawing the line
- Germany: SoundCloud faces wave of jihadi postings
- USA: Brooklyn Coffee Shop Owner Posts Anti-Semitic Rants on Facebook, Instagram
- Google Chief Sees Bots as Weapon Against Anti-Semitism
- Facebook agrees to drop real name policy which banned drag queens
- The burqa debate: lifting the veil on Islamophobia in Australia (comment)
- Austria: Fine and jail time for Nazi comments
- Goodbye Facebook, Hello Ello: Gay Users Are Leaving the Site En Masse
- ADL Releases Best Practices for Challenging Cyberhate
- Social Media Trace Australia Islamophobia
- Germany: Jewish community calls for more progress against anti-Semitism
- USA: Facebook agrees to discuss "real name" policy with performers
- USA: Protesters confront racist social media comments
- Facebook is under fire from gay and transgender users who are being forced to use real names
- Canada: 'People of Winnipeg' Facebook page outlet for racism, says activist
- Fighting racism – one keystroke at a time (opinion)
- Study: Hate Posts on Social Media Cause Real Harm
- UK: Antisemitic incidents reach record level in July 2014
- Dutch security service broke privacy rules with web forum hacks
- Ireland: Longer sentences for hate crimes proposed in report
By Imran Awan, Senior Lecturer and Deputy Director of the Centre for Applied Criminology at Birmingham City University
21/11/2014- In late 2013 I was invited to present evidence at the House of Commons as part of my submission regarding online anti-Muslim hate. I attempted to show how hate groups on the internet were using this space to intimidate, cause fear and make direct threats against Muslim communities – particularly after the murder of Drummer Lee Rigby in Woolwich last year. The majority of incidents of anti-Muslim hate crime (74%) reported to the organisation Tell MAMA (Measuring Anti-Muslim Attacks) occur online. In London alone, hate crimes against Muslims rose by 65% over the past 12 months, according to the Metropolitan Police, and anti-Islam hate crimes have also increased from 344 to 570 in the past year. Before the Woolwich incident there was an average of 28 anti-Muslim hate crimes per month (in April 2013, there were 22 in London alone), but in May, when Rigby was murdered, that number soared to 109. Between May 2013 and February 2014, there were 734 reported cases of anti-Islamic abuse; of these, 599 were incidents of online abuse and threats, while the rest were “offline” attacks such as violence, threats and assaults.
A breakdown of the statistics shows these tend to be mainly from male perpetrators and are marginally more likely to be directed at women. After I made my presentation I, too, became a target in numerous online forums and anti-Muslim hate blogs which attempted to demonise what I had to say and, in some cases, threaten me with violence. Most of those forums were taken down as soon as I reported them.
It’s become easy to indulge in racist hate crime online, and many people take advantage of the anonymity to do so. I examined anti-Muslim hate on social media sites such as Twitter and found that the demonisation and dehumanisation of Muslim communities is becoming increasingly commonplace. My study involved the use of three separate hashtags, namely #Muslim, #Islam and #Woolwich – which allowed me to examine how Muslims were being viewed before and after Woolwich. The most commonly recurring terms were: “Muslim pigs” (in 9% of posts), “Muzrats” (14%), “Muslim Paedos” (30%), “Muslim terrorists” (22%), “Muslim scum” (15%) and “Pisslam” (10%). These messages are then taken up by virtual communities who are quick to amplify them by creating webpages, blogs and forums of hate. Online anti-Muslim hate therefore intensifies, as was shown after the Rotherham abuse scandal in the UK, the beheading of the journalists James Foley and Steven Sotloff and the humanitarian workers David Haines and Alan Henning by the Islamic State, and the Woolwich attack in 2013.
The organisation Faith Matters has also conducted research following the Rotherham abuse scandal, analysing Facebook conversations on Britain First posts from August 26 2014 using the Facebook Graph API. They found several commonly recurring words, including: scum (207 times); Asian (97); deport (48); Paki (58); gangs (27) and paedo/pedo (25). A number of the comments and posts were from people with direct links to organisations such as Britain First, the English Brotherhood and the English Defence League.
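A frequency analysis of this kind – tallying how often tracked terms appear in a corpus of post comments – can be sketched in a few lines of Python. The snippet below is illustrative only: the keyword list echoes the terms reported above, the sample comments are invented, and actually fetching the comments via the Graph API (authentication, pagination) is assumed to happen elsewhere.

```python
import re
from collections import Counter

# Illustrative tracking list, drawn from the terms reported in the study.
KEYWORDS = {"scum", "asian", "deport", "paki", "gangs", "paedo", "pedo"}

def keyword_frequencies(comments):
    """Count occurrences of each tracked keyword across a list of
    comment strings (case-insensitive, whole words only)."""
    counts = Counter()
    for text in comments:
        # Split each comment into lowercase alphabetic tokens.
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in KEYWORDS:
                counts[token] += 1
    return counts

# Invented sample comments, not data from the study:
sample = ["Deport the gangs", "deport them all", "Absolute scum"]
print(keyword_frequencies(sample))
# → Counter({'deport': 2, 'gangs': 1, 'scum': 1})
```

Matching whole lowercase tokens rather than raw substrings avoids, for example, counting "scum" inside an unrelated longer word.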
Abuse is not a human right
Clearly, hate on the internet can have direct and indirect effects on victims and on the communities being targeted. On the one hand, it can be used to harass and intimidate victims; on the other, it can be used for opportunistic crimes. Few of us will forget the moment when Salma Yaqoob appeared on BBC Question Time and tweeted the following to her followers: “Apart from this threat to cut my throat by #EDL supporter (!) overwhelmed by warm response to what I said on #bbcqt.” The internet is a powerful tool by which people can be influenced to act in a certain way, and this is particularly true of hate speech that aims to threaten and incite violence. It also links into the convergence of the emotional distress caused by online hate, the nature of intimidation and harassment, and the prejudice that seeks to defame groups through speech intended to injure and intimidate. Sites that have been relatively successful here include BareNakedIslam and IslamExposed, which host daily forums and chatrooms about issues to do with Muslims and Islam; their strongly anti-Muslim tone begins with initial discussion of a particular issue – such as banning halal meat – and then turns into strong and provocative language.
Most of this anti-Muslim hate speech hides behind a fake banner of English patriotism, but is instead used to demonise and dehumanise Muslim communities. It goes without saying that the internet is just a digital realisation of the world itself: all shades of opinion are represented, including those Muslims whose hatred of the West prompts them to preach jihad and contempt for “dirty kuffar”. Clearly, freedom of speech is a fundamental right that everyone should enjoy, but when it converges with incitement, harassment, threats of violence and cyber-bullying, then we as a society must act before it’s too late. There is an urgent need to provide advice for those who are suffering online abuse. It is also important to keep monitoring sites where this sort of material regularly crops up; this can help inform policy and give us a better understanding of the relationships forming online. It would require a detailed examination of the various websites, blogs and social networking sites, monitoring the URLs of those regarded as having links to anti-Muslim hate.
It is also important that we begin a process of consultation with victims of online anti-Muslim abuse – and reformed offenders – who could work together highlighting the issues they think are important when examining online Islamophobia. The internet offers an easy and accessible way of reporting online abuse, but an often difficult relationship between the police and Muslim communities in some areas means much more could be done. This could have a positive impact on the overall reporting of online abuse. The improved rate of prosecutions which might culminate as a result could also help identify the issues around online anti-Muslim abuse.
© The Conversation
An Eastwood man who sent death threats to a former friend and harassed a police officer has been found guilty of five charges relating to malicious communications.
20/11/2014- Simon Tomlin, 46, of Lawrence Avenue, was convicted at Nottingham Magistrates’ Court today (Thursday) in his absence, after failing to attend a two-day trial. Tomlin was found guilty of criminal harassment of former friend Melony McElroy and PC Richard Reynolds, of sending a series of tweets containing grossly offensive material which referenced Ms McElroy on October 9, 2014, and of repeatedly referring to her as a ‘neo-Nazi’ on his blog, The Daily Agenda. When explaining his decisions, the magistrate said that despite Tomlin’s denial of harassment in police interviews, it was clear he deliberately caused alarm and distress and was aware that it would constitute harassment. He said of Melony McElroy: “He caused fear that she was at risk of murderous reprisals and sent matter that was grossly offensive and menacing. It is clear from Ms McElroy’s evidence that he caused her fear and distress and I am quite sure that is what the defendant intended.” Tomlin was also convicted of sending by public communications network pictures of police officers’ private vehicles on October 5, 2014, which the magistrate described as ‘hate material’ against the police. He added: “In view of the number of followers and the nature of the website that the Facebook page was associated with, the officers whose cars were identified had every reason to fear damaging consequences.” A warrant for Tomlin’s arrest was issued and he will be sentenced at a later date.
© The Eastwood & Kimberley Advertiser
A mountain couple makes an unsettling discovery on Google Maps.
20/11/2014- Jennifer Mann and Jodi McDaniel say they've never had any problems living at their home in Canton. But, on Google Maps, instead of a street name, their driveway was labeled with a gay slur. Mann and McDaniel say the slur was hurtful and amounts to a hate crime. They’d like to find out who did it and take legal action. “And if I can I'm going to get legal advice about it,” Mann said. “I really don't know what to say to him other than grow up,” McDaniel said. “I have no problems with them....none. They're good neighbors,” Fay Capps said. There's an option on Google Maps to report problems like inappropriate content. Google Maps says its policy treats discrimination based on sexual orientation as a hate crime. As a result of News 13’s attention, Google removed the slur. A spokeswoman says there is a mapmaker tool where people can edit maps. She says they don't know who did it or when. But she says the slur slipped through their check systems, perhaps because it was such a small road-driveway. She says they'll continue investigating. McDaniel has a message for whoever did it: “Live your life and leave us alone, you know. We don't bother anybody.”
A compelling argument for strong-arm tactics against those who perpetrate abuse on the net.
By Helen Fenwick
20/11/2014- This book sets forth a compelling argument that the internet should not be allowed to maintain its “Wild West” anarchic status, because its ability to facilitate cyber-bullying outweighs the virtues of maintaining that status. It argues that the virtues of the web – in particular, anonymity, which fosters truth-telling and self-expression – also translate into vices: people become de-individuated in anonymous postings, and the lack of identification fosters the refusal to conform to social norms. The result is online harassment and bullying that can take extreme forms.
Hate Crimes in Cyberspace’s main strength lies in its sustained and detailed exploration of the bizarrely convoluted, sustained and extremely hurtful nature of online abuse of individuals. Danielle Keats Citron, a legal scholar, pertinently compares the social response to online bullying (which informs the legal one) to the response to domestic violence and workplace sexual harassment in the 1970s. At that time it was thought that both could be relegated to the sphere of the private choices of women – that the responsibility lay with the woman to deal with the problem, by growing a thicker skin or by simply packing her bags and leaving. Feminist campaigns from the 1970s onwards changed that perception and triggered legal change. Citron argues that the tendency to trivialise online abuse (as frat-boy banter) and to blame the victim for failing to shrug it off is highly prevalent and is retarding the development of stronger laws and law enforcement. She makes her case successfully for changing social perceptions and creating a far more effective legal response, particularly by utilising civil rights laws.
Nevertheless, her book is somewhat selective in its approach. Its very broad title is misleading – it might easily have been titled Cyber-Based Sexual Harassment and Proposals for US Legal Reform. Clearly, that title would have been less snappy and less appealing. But it would have been more accurate. The book focuses very strongly on the harassment and denigration of women via online abuse, and this is the right approach to take, rather than focusing on the harassment of white heterosexual males, who suffer significantly less online abuse. Its pioneering research could and should be used to support the case for introducing a criminal offence of gender-based hate speech in various countries, including the UK.
However, the book only touches on abuse suffered by lesbian, gay, bisexual and transgender persons and on racial grounds, largely disregards the abuse of persons due to other characteristics, and also largely disregards group-based online hate crimes (or hate speech as hate crime). So, for example, it does not discuss Salafi/Wahhabi or Christian fundamentalist online hate speech that is aimed at gay people, which can clearly have an impact on individuals. Many such groups – as has been brutally illustrated in recent months by the actions of Islamic State – understand the impact and utility of social media all too well.
Citron’s proposals for law reform are practical but also selective. They are very US-centric – which is understandable up to a point, but also ironic, given the book’s message about the nature of cyberspace and the difficulties of prosecuting in a borderless space. International initiatives aimed at cyber-bullying could have been considered, as could examples from other countries, since for obvious reasons this is an international problem. A strong, compelling, readable exploration of this problem is proffered here, but the call for action that it represents requires a wider focus.
Hate Crimes in Cyberspace
By Danielle Keats Citron
Harvard University Press, 352pp, £22.95
Published 20 October 2014
© Times Higher Education
The report published today looks at the need for regulation specific to online harassment.
19/11/2014- The Law Reform Commission has today published a paper that aims to tackle the issue of online harassment by “trolls”. In the paper, a number of issues relating to online bullying and anonymous posting are raised, and the adequacy of the current legislation is questioned. At the moment, the law requires sustained harassment for an offence to be committed online; one-off incidents are not treated in the same way. The paper questions whether the legislation in place is sufficient to deal with the new challenges posed by online abuse, particularly in relation to hate speech.
It asks whether there should be new legislation for instances where:
- There is a serious interference with privacy.
- Content that goes online has the potential to cause serious harm due to its international reach and permanence.
- There is no sufficient public interest in publication online.
- The accused intentionally or recklessly caused harm.
As the law stands
At the moment the law deals with online offences through legislation designed for general circumstances. Online bullying is considered under the Non-Fatal Offences Against the Person Act 1997, which makes harassment – also commonly referred to as ‘stalking’ – an offence. The issues paper suggests that the Prohibition of Incitement to Hatred Act 1989 could be updated in line with suggestions from the EU Commission. Such a change would bring Ireland into line with the 2008 EU Framework Decision on combating xenophobia and racism.
The ‘Issues Paper on Cyber-crime Affecting Personal Safety, Privacy and Reputation, including Cyber-bullying’ is being published as part of the Commission’s Fourth Programme of Law Reform. In it, the question of how civil law can remedy problems arising from websites located outside the State is also considered.
Speaking on Newstalk’s Pat Kenny show, Raymond Byrne, the Director of Research at the Law Reform Commission, said: “We happen to have a lot of the big social media companies here in Dublin. We have the opportunity to do something here that is a good guide for other countries as well.” Byrne went on to point out that penalties for offences in Ireland were more severe than elsewhere. “The harassment offence carries up to seven years’ imprisonment, so that is pretty tough in terms of sentencing. Most of the sentences we’ve had here in terms of malicious telephone calls are already higher than the comparison in England, where the maximum is six months at the moment. They are putting that up to two years. We are already way beyond that here in Ireland,” said Byrne.
© The Journal Ireland
Victims and witnesses of racism will, as from today, be able to report abuse through a website created to address its low reporting rate and offer support.
16/11/2014- The site – reportracism-malta.org – is intended to increase the reporting of such incidents, inform individuals about the remedies available and support them through the process. It was launched by human rights think tank The People for Change Foundation and also aims to gather data to understand the reality of racism in Malta and provide evidence to inform legal and policy development in the area. Anyone who witnesses or experiences racism can fill in an online form – available in Maltese, English and French – asking questions such as where and when the incident occurred, what it consisted of and whether a police report was filed. People can also send in evidence, such as photos or footage, to back up their claims. If the person filing the report agrees to be contacted, the foundation will offer its support. This will include information, as well as help with filing official reports and following them up.
85% - the percentage of racism victims who keep quiet
“The need for such a system is clear from the high levels of incidence and low levels of reporting of racist incidents,” the foundation said in a statement. Maltese authorities receive very low numbers of racism reports. A National Commission for the Promotion of Equality report showed that 85 per cent of victims of racism keep quiet. In contrast, a report published by the European Union Agency for Fundamental Rights found that 63 per cent of Africans in Malta experienced high levels of discrimination, the second highest incidence in the EU. In addition, 29 per cent fell victim to racially motivated crime. Taken together, these figures highlight a gap between reports and incidents. This could be due to the lack of access to information and a reporting system, the foundation said, as it pointed out that a Fundamental Rights Agency report found that only 11 per cent of African immigrants in Malta knew of the existence of the National Commission for the Promotion of Equality.
“We hope that this website will promote a culture of reporting racist incidents, while developing a better understanding of the state of play of racism in Malta through the compilation of information about such incidents,” the foundation said.
© The Times of Malta
The summer war between Israel and Hamas generated an explosion of online anti-Semitic hate speech in several European countries, an international watchdog reported.
14/11/2014- The assertion came in a report on 10 European countries released Wednesday by the International Network Against Cyber Hate and the Paris-based International League Against Racism and Anti-Semitism — or INACH and LICRA respectively. In the Netherlands, the Complaints Bureau Discrimination Internet, or MDI, recorded more instances of online hate speech against Jews during the two-month conflict than during the entire six months that preceded it, revealed the report, which the groups presented in Berlin at a meeting on anti-Semitism organized by the Organization for Security and Co-operation in Europe, or OSCE.
More than half of the 143 expressions of anti-Semitism documented by MDI in July and August, when Israel was fighting Hamas in Gaza, contained incitements to violence against Jews, the report stated. Roughly three quarters of the complaints documented in that period occurred on social media. In Britain, the Community Security Trust recorded 140 anti-Semitic incidents on social media from January to August, with more than half occurring in July alone. And in Austria, the Forum against Antisemitism recorded 59 anti-Semitic incidents online during the conflagration of violence between Israel and Palestinians — of which 21 included incitements to violence — compared to only 14 incidents in the six months that preceded it.
The data on online anti-Semitic incidents corresponded with an increase in real-life assaults, LICRA and INACH wrote. The report’s recommendations included a submission by the Belgian League Against Anti-Semitism, which called for OSCE member states to adopt the “Working Definition of Anti-Semitism” that the European Union’s agency for combating xenophobia enacted in 2005 but later dropped. The definition includes references to the demonization of Israel.
© JTA News.
14/11/2014- Is there such a thing as a Facebook murder? Is it different from any other murder? Legally, it can be. From a common-sense point of view, there is no 'hate crime' status that should make a murder worse if a white person kills a Latino person or a Catholic instead of a white person or a Protestant, but legally such crimes can be considered more heinous and given the special label of hate crime. But social media is ubiquitous, and criminal justice academics are always on the prowl for new categories to create and write about, so a 'Facebook Murder', representing crimes that may somehow involve social networking sites and thus form a distinct category for sentencing, has been postulated.
Common sense should prevail, says Dr. Elizabeth Yardley, co-author of a paper on the subject in the Howard Journal of Criminal Justice. Yes, perpetrators had used social networking sites in the homicides they had committed, but the cases in which those were identified were not collectively unique or unusual when compared with general trends and characteristics - certainly not to a degree that would necessitate the introduction of a new category of homicide or a broad label like 'Facebook Murder'.
"Victims knew their killers in most cases, and the crimes echoed what we already know about this type of crime," said Yardley. "Social networking sites like Facebook have become part and parcel of our everyday lives and it's important to stress that there is nothing inherently bad about them. Facebook is no more to blame for these homicides than a knife is to blame for a stabbing--it's the intentions of the people using these tools that we need to focus upon." So banning guns or Facebook would not prevent murders any more than banning spoons would prevent people from getting fat. The justice system will be happy not to have another set of arcane guidelines to follow.
By Monica Dux
14/11/2014- Everyone has a right to be a bigot, or so Oberleutnant Brandis insists. But does that mean we're also obliged to put up with bigots on Facebook? We hear a lot about trolls and online bullying, but what if the problem is not an anonymous hater but someone you know? Perhaps even a member of your own family? My friend Claudia recently wrestled with this question after she reconnected with a distant cousin via Facebook. Friendly messages were exchanged - reminiscences about eccentric relatives and long-ago family Christmases. There was even reckless talk, as there so often is on Facebook, of meeting up in person. Then the racist posts started appearing in Claudia's feed: rants about refugees rorting the welfare system, people who come to this country but don't bother to learn English, and burqa-wearing housewives plotting to take over Parliament.
Feeling that she could not let this pass unchallenged, Claudia commented on one of the posts, calling it out as offensive rubbish. In a sense this had the desired effect in that the racist posts stopped appearing in her feed. But they had not disappeared because Claudia's cousin had seen the error of her ways. Claudia had simply been unfriended. One of the things I like about Facebook, when I like it at all, is its plurality. In the real world most of us socialise with a relatively small cohort of like-minded people. By contrast, on Facebook, we typically rub digital shoulders with a far more diverse collection of "friends", from life-long pals to some guy you met briefly at a party and have never seen again, although you are regularly updated on what he's having for breakfast.
With such a varied collection of people, your Facebook feed will inevitably contain many posts that you don't agree with. When this happens, you might choose to engage in friendly online debate, or you can just let it pass, huffing and puffing in the silence of the real world. But things get trickier when the opinions being expressed don't just offend your sensibilities or your political leanings but challenge your concept of basic human decency. If we choose to ignore repugnant, racist views, don't we become complicit? We're told that the only thing necessary for evil to thrive is for good people to remain silent. But if we are morally obliged to speak up in the face of bigotry, are we not under an equal obligation to post?
After all, challenging racism is far easier on Facebook than in the real world. When you're at your family Christmas and Uncle Bob starts to sound off about how Australia ought to be reserved for Australians, calling him out as a disgraceful racist will probably mean that Christmas is ruined, everyone goes home angry and you'll all have to drink even more next year to get through the ordeal. On the other hand, at least Bob's racism will have been publicly debunked. Or will it? Perhaps the real reason so many of us hesitate to slam the Uncle Bobs of this world is not a cowardly desire to avoid conflict but an understanding that doing so will achieve nothing, aside from making you feel good about your own moral righteousness. For whatever you might say, Bob's mind will probably not be changed.
Obviously it is important to speak up against institutional racism, such as that evidenced in our government's draconian treatment of asylum seekers. Similarly, calling out and critiquing the drivel expressed by people with an influential public voice, such as shock jocks, is vital. But what about the unanalysed racism of people like Claudia's cousin, which is so often born of ignorance and disempowerment? People with little education, or radically different life experiences, who have been encouraged by a dog-whistling government to focus their fears and frustrations on vulnerable groups within our society? This kind of bigotry has many and varied roots, and it'll take a lot more than a withering comment on their Facebook page to dig them out.
Social media is often criticised for creating a false sense of intimacy, while actually distancing people from genuine, meaningful interaction. But perhaps this distance is sometimes a positive. Because stepping back and being a mere bystander, a witness, can provide you with a valuable opportunity to see how others think, acting as a reminder that the world is filled with people who hold views radically different from your own. And that tackling those ideas will require far more than simply clicking the unfriend button.
© The Sydney Morning Herald
Law enforcement professionals throughout the US are increasingly leveraging social media to assist in crime prevention and investigative activities, according to a new study released by LexisNexis Risk Solutions.
13/11/2014- The LexisNexis 2014 Social Media Use in Law Enforcement report solicited feedback from 496 participants at every level of law enforcement—from rural localities to major metropolitan cities and federal agencies—to examine the law enforcement community’s proclivity to use social media for crime investigation and prevention. The study, a follow-up to an initial study conducted in 2012, found that eight out of 10 law enforcement professionals are actively using social media for investigations, with 25 percent using social media on a daily basis. “The benefits of social media from an information-gathering and community outreach perspective became very evident during the subsequent investigations of the Boston Marathon bombings and the Washington Navy Yard tragedy,” said Rick Graham, Law Enforcement Specialist, LexisNexis Risk Solutions and former Chief of Detectives for the Jacksonville (Fla.) Sheriff’s Office. “It is imperative that agencies invest in formal social media investigative tools, provide formal training, develop or amend current policies to ensure investigators and analysts are fully armed to more effectively take advantage of the power social media provides.”
Use of social media by law enforcement grew in 2014 and the upward trend is likely to continue. Over three-quarters of respondents indicate plans to use social media even more in the next year. Moreover, the value of social media in helping solve crimes more quickly and assisting in crime anticipation is increasing. 67 percent of respondents believe that social media is a valuable tool for anticipating a crime. Law enforcement officials cited a number of real-world examples in which social media helped thwart impending crime, from stopping an active shooter to tracking gang behavior. Although social media use among law enforcement personnel is high and is likely to continue to grow, few agencies have adopted formal training in the use of social media to boost law enforcement efforts. In fact, there has been a decrease in formal training since the 2012 study, with most law enforcement personnel indicating that they are self-taught. “Lack of access to social media channels is the single biggest driver for non-use and has increased from 2012. Whereas, lack of knowledge has decreased significantly as a reason for not using social media,” states the study.
Fortunately, although agency support of social media training for law enforcement officials remains low, three quarters of law enforcement professionals are very comfortable using social media, showing a seven percent increase over 2012 despite a decrease in availability of formal training. As law enforcement personnel become more comfortable and familiar with social media tools, they are increasingly discovering new and effective ways to utilize it in criminal investigations. For instance, one law enforcement respondent used Facebook to discover criminal activity and obtain probable cause for a search warrant. “I authored a search warrant on multiple juveniles’ Facebook accounts and located evidence showing them in the location in commission of a hate crime burglary. Facebook photos showed the suspects inside the residence committing the crime. It led to a total of six suspects arrested for multiple felonies along with four outstanding burglaries and six unreported burglaries,” said one respondent. Another law enforcement official achieved success in using social media to identify networks of criminals, by using Facebook to identify suspects that were friends or associates of other suspects in a crime. “My biggest use for social media has been to locate and identify criminals,” the respondent stated. “I have started to utilize it to piece together local drug networks.”
Law enforcement officials have also used social media to collect evidence, identify witnesses, conduct geographic searches, identify criminals and their locations, and raise public safety awareness by posting public service announcements and crime warnings to Facebook. “As personnel become even more familiar and comfortable using it, they will continue to find robust and comprehensive ways to incorporate emerging social media platforms into their daily routines, thus yielding additional success in interrupting criminal activity, closing cases and ultimately solving crimes,” the report concluded.
Editor's note: Also read the report, The Rise of Predictive Policing Using Social Media, in the Oct./Nov. issue of Homeland Security Today.
© Homeland Security Today
A copyright claim on the "Innocence of Muslims" will be reviewed by the full 9th Circuit Court of Appeals.
12/11/2014- A federal Appeals Court on Wednesday agreed to reconsider its decision to order Google to take down an anti-Islam propaganda film that was linked to the 2012 Benghazi attack. Earlier this year, a three-judge panel sided with Cindy Lee Garcia, who sued Google for infringing on her copyright by hosting the video—titled Innocence of Muslims—on YouTube. The actress argued that she was fooled into appearing in the video after following up on an ad posting purporting to be for another movie. The video was taken down following the decision. Now, the full U.S. Court of Appeals for the 9th Circuit will review that decision, and the three-judge panel's ruling will not hold precedent in the full Court's review. Garcia originally had her case dismissed by a trial judge.
The case presents a thicket of thorny issues, including a debate over the balance between copyright protections and free speech in the Internet age. Open-Internet activists and several tech companies argued that the February ruling facilitates overly burdensome copyright limits. Facebook, Twitter, Yahoo, eBay, and Netflix have all supported Google's position. "This is a very welcome decision," said Corynne McSherry, intellectual-property director at the Electronic Frontier Foundation. "The court's ruling was mistaken as a matter of law and a terrible precedent for online free speech. What happened to Cindy Garcia was truly shameful, but the 9th Circuit took a bad situation and made it worse." And the tensions over the case are ratcheted up by the video's controversial nature—as well as its connection to the September 2012 attack on the U.S. consulate in Benghazi.
According to an extensive New York Times piece published last December, the video partially contributed to the violence, in which four Americans were killed. "Contrary to claims by some members of Congress, [the violence] was fueled in large part by anger at an American-made video denigrating Islam," according to The Times. The role of the video is hotly debated, and many conservatives accuse the Obama administration of overstating its impact to deflect attention from a terrorist attack in the run-up to the 2012 presidential election. Earlier this year, a second actor in the film, Gaylor Flynn, filed a separate lawsuit also arguing that Google had reproduced his performance without consent.
© The National Journal
The Joanne St. Lewis case is just one that shows how easily the internet spreads a racist message.
12/11/2014- When Joanne St. Lewis wrote a critical evaluation of a student racism project, she could not have known the grief it would cause. And certainly not the years it would take to finally erase the racial slur that accompanied her name in every online search. It began six years ago, and continues today in spite of an Ontario Superior Court decision in June. The decision found an Ottawa blogger had defamed St. Lewis by attaching to her name a racial epithet meaning "sell-out", a term stemming from the black slave experience. St. Lewis, a University of Ottawa law professor, has taken steps most would find daunting: going to court, winning a decision and now fighting an appeal. "It's extremely expensive. It’s difficult. It’s imperfect. It’s painful. And it may not always even remotely be an opportunity or a remedy for someone," she said. But for St. Lewis, standing up against the slur, written in a blog and repeated by others, was a matter of duty and dignity. "If it is my fate to be the first black Canadian so publicly defiled, then it is my hope to be the last. It was essential that no other suffer as I have," she wrote after a jury found the words used against her were defamatory. In accordance with the court’s decision, the blog post has been removed from the internet, but the term can still be found in Google searches of her name. St. Lewis was also awarded $350,000 in damages. "I think there’s a recklessness, a casual cruelty, a complete indifference and egotism that the internet permits," she said in an interview with CBC News. "What it seems to do is allow people to be bullies and behave like feral pack animals on the internet to target activists."
Researcher tries to quantify online racism
There is little research to quantify the extent of online racism in Canada. Irfan Chaudhry is trying to change that. A PhD candidate at the University of Alberta, Chaudhry is tracking Twitter for terms that would be considered racist and offensive. With Twitter, Chaudhry is able to look at racist terms and references, and which cities they originate from. Specifically, he looked at Edmonton, Winnipeg, Calgary, Vancouver, Montreal and Toronto. He chose those cities because in 2010, they reported some of the highest rates of hate crime in the country. His three-month study found about 750 instances of what he considered overt racism. "People were tweeting about things that you’d probably want to have left in your mind," he explains. He cites examples such as people boarding a bus or plane and tweeting: "About to board, stuck beside a --- and a --- #thanks." Other cases were far more direct. "It was someone saying ‘I hate’ and then insert racialized group here." He found those sorts of statements were more likely directed at aboriginal populations in Winnipeg and Edmonton, while in Toronto and Montreal, racist comments were largely aimed at people of colour. "When you break down the amount of tweets... it kind of reflected different demographic patterns," he notes.
In Thompson, Manitoba, a community with a large aboriginal population, a local newspaper was forced to shut down its Facebook page in response to a large number of racist comments. Lynn Taylor, general manager of the Thompson Citizen, said racist sentiments have long simmered in the community, but recently surfaced online. The tipping point came when someone posted a photoshopped picture showing the front of the newspaper’s building with racist comments painted over it. She hopes to reopen the page next year, with better monitoring of comments before they are posted. Other media outlets, including the CBC, closely monitor or disable comments to minimize the risk of racist material being posted. St. Lewis said part of the problem is the medium itself. "It allows people to behave in a way that if they did it in the bricks and mortar universe amongst flesh and blood people, we know it’s not acceptable. We know there’s legal consequence. But somehow, that piece of being virtual, that piece of being on the internet seems to give this incredible permission," she said.
© CBC News
A British lawmaker complained of abuse. Suddenly, the abuse stopped.
12/11/2014- Luciana Berger, a member of British Parliament, has been receiving a stream of anti-Semitic abuse on Twitter. It only escalated after a man was jailed for tweeting her a picture with a Star of David superimposed on her forehead and the text "Hitler was Right." But over the last few weeks, the abuse began to disappear. Her harassers hadn’t gone away, and Twitter wasn't removing abusive tweets after the fact, as it sometimes does, or suspending accounts as reports came in. Instead, the abuse was being blocked by what seems to be an entirely new anti-abuse filter.
For a while, at least, Berger didn’t receive any tweets containing anti-Semitic slurs, including relatively innocuous words like "rat." If an account attempted to @-mention her in a tweet containing certain slurs, it would receive an error message, and the tweet would not be allowed to send. Frustrated by their inability to tweet at Berger, the harassers began to find novel ways to defeat the filter, like using dashes between the letters of slurs, or pictures to evade the text filters. One white supremacist site documented various ways to evade Twitter’s censorship, urging others to "keep this rolling, no matter what."
In recent months, Twitter has come under fire for the proliferation of harassment on its platform—in particular, gendered harassment. (According to the Pew Research Center, women online are more at risk from extreme forms of harassment like "physical threats, stalking, and sexual abuse.") Twitter first implemented the ability to report abuse in 2013, in response to the flood of harassment received by feminist activist Caroline Criado-Perez. The recent surge in harassment has again resulted in calls for Twitter to "fix" its harassment problem, whether by reducing anonymity, or by creating better blocking tools that could mass-block harassing accounts or pre-emptively block recently created accounts that tweet at you. (The Blockbot, Block Together, and GG Autoblocker are all instances of third-party attempts to achieve the latter.) Last week, the nonprofit Women, Action, & the Media announced a partnership with Twitter to specifically track and address gendered harassment.
While some may welcome the mechanism deployed against Berger’s trolls as a step in the right direction, the move is troubling to free speech advocates. Many of the proposals to deal with online abuse clash with Twitter’s once-vaunted stance as "the free speech wing of the free speech party," but this particular instance seems less like an attempt to navigate between free speech and user safety, and more like a case of exceptionalism for a politician whose abuse has made headlines in the United Kingdom. The filter, which Twitter has not discussed publicly, does not appear to be intended as a universal fix for the harassment experienced by less prominent users on the platform, such as the women targeted by Gamergate. Prior to the filter being activated, Luciana Berger and her fellow MP, John Mann, had announced plans to visit Twitter’s European headquarters to talk to higher-ups about the abuse. Parliament is currently discussing more punitive laws against online trolling, including a demand from Mann for a way to ban miscreants from "specific parts of social media or, if necessary, to the Internet as a whole."
In a letter to Berger that is quoted in part here, Twitter’s head of global safety outreach framed efforts over the past year as including architectural solutions to harassment. "Our strategy has been to create multiple layers of defense, involving both technical infrastructure and human review, because abusive users often are highly motivated and creative about subverting anti-abuse mechanisms." The letter goes on to describe known mechanisms, like the use of "signals and reports from Twitter users to prioritize the review of abusive content," and hitherto unknown mechanisms like "mandatory phone number verification for accounts that indicate engagement in abusive activity." However, the letter says nothing about a selective filter for specific words. To achieve that result, the company appears to have used an entirely new tool outside of its usual arsenal. A source familiar with the incident told us, "Things were used that were definitely abnormal."
A former engineer at Twitter, speaking on the condition of anonymity, agreed, saying, "There’s no system expressly designed to censor communication between individuals. … It’s not normal, what they’re doing." He and another former Twitter employee speculated that the censorship might have been repurposed from anti-spam tools—in particular, BotMaker, which is described here in an engineering blog post by Twitter. BotMaker can, according to Twitter "deny any Tweets" that match certain conditions. A tweet that runs afoul of BotMaker will simply be prevented from being sent out—an error message will pop up instead. The system is, according to a source, "really open-ended" and is frequently edited by contractors under wide-ranging conditions in order to effectively fight spam.
When asked whether a new tool had been used, or BotMaker repurposed, a Twitter spokesperson replied: "We regularly refine and review our spam tools to identify serial accounts and reduce targeted abuse. Individual users and coordinated campaigns sometimes report abusive content as spam and accounts may be flagged mistakenly in those situations." It’s not clear whether this filter is still in place. (I attempted to test it with "rat," the only word that I was willing to try to tweet, and my tweet did go through. The filter may have been removed, the word "rat" may have been removed from the blacklist, or the filter may have only been applied to recently created accounts.) It’s hard to shed a tear for a few missing slurs, but the way they were censored is deeply alarming to free speech activists like Eva Galperin of the Electronic Frontier Foundation. "Even white supremacists are entitled to free speech when it’s not in violation of the terms of service. Just deciding you’re going to censor someone’s speech because you don’t like the potential political ramifications for your company is deeply unethical. The big point here is that someone on the abuse team was worried about the ramifications for Twitter. That’s the part that’s particularly gross."
What’s worrisome to free speech advocacy groups like the EFF about this incident is how quietly it happened. Others may see the bigger problem as the fact that it appears to have been done for the benefit of a single, high-profile user, rather than to fix Twitter’s larger harassment issues. The selective censorship doesn’t seem to reflect a change in Twitter’s abuse policies or in how it handles abuse directed at the average user; aside from a vague public statement by Twitter that elides the specific details of the unprecedented move, and a few mostly unread complaints by white supremacists, the entire thing could have gone unnoticed. Eva Galperin thinks incidents like these could be kept in check by transparency reports documenting the application of the terms of service, similar to how Twitter already puts out transparency reports for government requests and DMCA notices. But while a transparency report might offer users better information as to how and why their tweets are removed, some still worry about the free-speech ramifications of what transpired. One source familiar with the matter said that the tools Twitter is testing "are extremely aggressive and could be preventing political speech down the road." He added, "Are these systems going to be used whenever politicians are upset about something?"
© The Verge
UK prime minister David Cameron has called for “extremist material” to be taken offline by governments, with help from network operators.
14/11/2014- Speaking in Australia's Parliament on a trip that will also see him attend the G20 leaders' summit, Cameron spoke of Australia and Britain's long shared history, common belief in freedom and openness and current shared determination to fight terrorism and extremism. Cameron said [PDF] poverty and foreign policy are not the source of terror. “The root cause of the challenge we face is the extremist narrative,” he said, before suggesting bans on extremist preachers, an effort to “root out” extremism from institutions and continuing to “celebrate Islam as a great world religion of peace.”
He then offered the following comment:
“A new and pressing challenge is getting extremist material taken down from the internet. There is a role for government in that. We must not allow the internet to be an ungoverned space. But there is a role for companies too. In the UK, we are pushing them to do more, including strengthening filters, improving reporting mechanisms and being more proactive in taking down this harmful material. We are making progress, but there is further to go. This is their social responsibility, and we expect them to live up to it.” Cameron's remarks have a strong whiff of a desire to extend state oversight of the internet. The UK already prohibits “Dissemination of terrorist publications” under Part 1, Section 2 of the Terrorism Act 2006. The country also operates a plan to reduce hate crime, in part by removing hate material found online.
A May 2014 report [PDF] on that plan's progress notes difficulties securing co-operation from ISPs and social networks, especially those outside the UK. Security is not on the G20 agenda, but what the leaders choose to discuss around the table is fluid. Might the sentence “We must not allow the internet to be an ungoverned space” therefore be an attempt to steer talks in the direction of international co-operation around internet regulation? The summit runs over the weekend and Vulture South has accreditation to the event, meaning we can get our hands on any communiqués the leaders emit. Most of the output of such events is negotiated in advance, but we'll keep an eye on things in case Cameron's thought bubble expands and also because a major initiative to combat multinational tax avoidance is expected to be one of the event's highlights.
© The Register
Many observers were encouraged to see Manchester City midfielder Yaya Toure speak out via the BBC last week against those who had racially abused him over Twitter just hours after he had reactivated his account.
11/11/2014- As one of the sport's most high-profile figures, it felt as if the Ivory Coast international had made a stand on behalf of an ever-growing number of similar victims in the game - because Toure is far from alone in being subject to such treatment. Already this season, Liverpool striker Mario Balotelli has been racially abused over the internet after he made fun of Manchester United following their defeat to Leicester City. Last year I interviewed former footballer and Professional Footballers' Association (PFA) chairman Clarke Carlisle at his house. He showed me his laptop and the torrent of vile racial abuse he had received via Twitter, abuse he did not want his wife or children to see, and which had left him feeling numb. And all because he had been commentating on a match that week on TV. Last season, 50% of all complaints about football-related hate crime submitted to anti-discrimination organisation Kick It Out (KIO) related to social media abuse. So severe is the problem, KIO now employs a full-time reporting officer whose job is to act on such incidents and refer them to the relevant authorities. Greater Manchester Police are investigating the Toure case, but don't be too surprised if no-one is ever punished. The anonymity users can gain on social media can make it very difficult to track down offenders.
But Kick It Out is also frustrated by what it feels is a lack of co-ordination between the police and Twitter, and by the need for better communication between the two. It feels there needs to be more education for local police forces on the misuse of social media and how complaints are dealt with. In some cases, KIO says, it has made a report but has not heard back from the police, something one source there described as "very disheartening". In addition, KIO wants more clubs to be proactive in coming out to publicly support their players when they are the victims of discrimination online, by calling upon the authorities to work closely with the relevant platform to investigate and track down the offenders. Such concerns are nothing new. Accounts with false identities often mean the police need Twitter to provide them with an IP address for the account if they hope to find the offenders. The Association of Chief Police Officers (ACPO) has said that Twitter only provides this information with a US court order, something which is difficult to get because of the value and protection afforded there to freedom of speech.
Elsewhere - like in the UK - it is optional, although Twitter insists it is co-operating with law enforcement here more than ever before. During the first half of 2014, Twitter received 78 account information requests, 46% of which resulted in some information being produced, the highest proportion to date. It says it has made it easier for users to report malicious posts, claims it has become more vigilant in blocking offensive tweeters, and is developing technology that prevents barred trolls from simply opening up a new account.
Progress being made
The police insist important progress is being made, and that platforms are now beginning to appreciate the responsibility they have for what is posted on their networks. Last year, following a long legal battle in France when prosecutors argued Twitter had a duty to expose wrong-doers, the site agreed to hand over details of people who allegedly posted racist and anti-Semitic abuse. Although that set an important precedent, Twitter admits it could do better. Earlier this year it promised to change its policies after Robin Williams's daughter Zelda was targeted by trolls following his suicide. But there are signs that such abuse will not be tolerated.
In 2012 Liam Stacey, from Swansea, received a prison sentence after racially abusing former Bolton player Fabrice Muamba on Twitter. Last year, a man who admitted sending racist tweets to two footballers was ordered to pay £500 compensation to each of them. And police were heartened last month when a Nazi sympathiser was jailed for four weeks for sending anti-Semitic tweets to Jewish MP Luciana Berger. But these cases, of course, while dissuading some, will not prevent further incidents from occurring. Twitter admits it is impossible to monitor all of the 500 million or so postings going through its networks each and every day. One expert I spoke to told me that some of the cases the UK media has picked up on would simply not register in the US, where such abuse is often disregarded and denied the publicity some trolls crave. Others will insist that it is absolutely right that such vitriol is exposed and condemned.
Paul Giannasi, hate crime lead officer at ACPO, said the challenge was huge, but efforts to combat the problem were constantly evolving. ACPO sits on an international cyber-hate working group led by the US-based Anti-Defamation League. This group brings parliamentarians, professionals and community groups together with industry leaders to help find solutions that balance protection from offensive comments with the right to free speech. "The police will draw on the guidelines issued by the Director of Public Prosecutions and the College of Policing to assess whether the threshold for communications which are grossly offensive, indecent, obscene or false is met. The CPS guidance is very clear that a high threshold applies in these cases. We encourage officers to work with the CPS at an early stage of an investigation to determine whether proceeding with a prosecution is in the public interest." Certainly, with its tradition of rivalry and tribal passions, football seems particularly vulnerable to the dark side of social media.
Twitter and other platforms have enabled fans and the players they idolise to get closer than ever. Amid the anodyne world of bland footballer interviews, it is refreshing that players' true emotions and opinions can often be glimpsed online even if sometimes it results in them being fined. But it also enables a sad and cowardly minority to abuse and insult in a way that would never be tolerated - and that they would never dare to - in a public, physical place. Amid unprecedented interest and media exposure, footballers can be followed by millions of supporters. This makes them an attractive target for the trolls who crave attention through a retweet, and seek maximum impact from their messages of hate. The question is how to tackle them without endangering the freedom that makes social media such a special place to so many.
How Twitter tackles abuse
Over the past year it has expanded the number of people working on abuse reports, providing 24/7 cover. It has invested in technology to make it harder for serial abusers to create accounts and perpetuate abusive behaviour. It has worked with the Safer Internet Centre and charities that specialise in developing strategies to counter hate speech.
© BBC News
11/11/2014- In the wake of a series of terrorist “run over” attacks, in which Israeli pedestrians have been mowed down by Palestinian terrorists, more than 90 Facebook pages glorifying the attacks and urging more violence against Israeli civilians have been identified. The social media campaign uses the Arabic term “Daes” (run-over), a play on the word “Daesh” (ISIS), and praises the attacks as a form of resistance, according to the Anti-Defamation League. Some of the posts on these pages describe the “run-overs” as part of a new revolution, a form of “car Intifada.” Many of the pages also enable users to give vent to expressions of violent anti-Semitism. “This campaign is the latest example of how social media is being used to promote and glorify terrorism and anti-Semitism,” said Abraham H. Foxman, ADL National Director. “Social media platforms were not created to spread anti-Semitism and terrorism to the masses.” The campaign is also starting to spread on Twitter, according to the ADL. The “Daes” hashtag has attracted numerous terrorist sympathizers. Several pages include anti-Semitic posts depicting religious Jews with hooked noses running away from vehicles attempting to run them over. The ADL is in the process of notifying the social media companies about the accounts promoting the campaign.
The huge gaming hit Clash of Clans leaves its players room for anti-semitism. Among the millions of players are groups that call themselves 'holocaust', for example. Players also come up with provocative anti-semitic captions.
5/11/2014- In Clash of Clans various clans do battle. Clans are made up of at most 50 players, who fight players from other clans. A search by BNR Nieuwsradio found at least 45 clans calling themselves 'holocaust'. Other names used include 'jew raiders' and 'we kill jews'. Some captions used are 'we burn jews for fun' and 'Anne Frank was easy to find'. Many games, such as World of Warcraft, try to prevent this kind of behaviour: they employ moderators who police players’ illegal or offensive practices. It is not clear whether Supercell, the Finnish game development company behind Clash of Clans, does this as well. Clash of Clans was launched in 2012. Supercell responded by email that it is not possible to prevent the anti-semitic expressions, given the millions of people who play its games: “We will close down clans that use abusive language when we see it happening.”
© BNR (dutch)
Labour leader condemns spike in antisemitic attacks, and calls on social media sites to do more to identify online trolls.
4/11/2014- A recent spike in antisemitic attacks should serve as a “wake-up call” for anyone who thinks the “scourge of antisemitism” has been defeated in Britain, Ed Miliband warned on Tuesday. In a post on his Facebook page, the Labour leader called for a “zero-tolerance approach” to antisemitism and said that some Jewish families had told him they felt scared for their children. Miliband intervened after the Community Security Trust, which provides training for the protection of British Jews, recorded a 400% increase in antisemitic incidents in July this year compared with the same month in 2013. The Labour leader highlighted what he described as “shocking attacks” on Luciana Berger, the shadow public health minister, and Louise Ellman, the chair of the Commons transport select committee. The two senior Labour MPs, who are Jewish, were targeted by antisemitic trolls after a man was jailed for four weeks after he admitted sending what Miliband described as a “vile” tweet.
The Jewish Chronicle reported that Garron Helm was jailed after tweeting a photograph of Berger superimposed with a yellow star - as used by the Nazis to identify Jews during the war. Miliband called on social media sites to do more to identify the perpetrators. He wrote: “There have been violent assaults, the desecration and damage of Jewish property, antisemitic graffiti, hate-mail and online abuse. The shocking attacks on my colleagues Luciana Berger and Louise Ellman have also highlighted the new channels by which antisemites spread their vile views. That is why it is vital that Twitter, Facebook and other social media sites do all they can to protect users and crack down on the perpetrators of this sickening abuse.” He said that the rise in attacks took place during the recent conflict between Israel and Hamas in Gaza and that it was important to be temperate in discussing Israel.
“More than half of the anti-Semitic incidents recorded by the CST in July involved direct reference to the conflict and the previous highest number of monthly incidents recorded by CST (January 2009) also coincided with a period of fighting between Israel and Hamas. We need to tackle this head on because I am clear that this can never excuse antisemitism, just as conflicts elsewhere in the Middle East can never justify Islamophobia. All of us need to use calm and responsible language in the way we discuss Israel, especially when we disagree with the actions of its government. A zero-tolerance approach to anti-Semitism and prejudice in all its forms here in Britain will go hand-in-hand with the pursuit of peace in the Middle East as a key focus of the next Labour government’s foreign policy.” Miliband, who is Jewish, was recently criticised by the actor Maureen Lipman after he voted in favour of recognising Palestinian statehood.
In an article in Standpoint, Lipman wrote: “Just ... when our cemeteries and synagogues and shops are once again under threat. Just when the virulence against a country defending itself, against 4,000 rockets and 32 tunnels inside its borders, as it has every right to do under the Geneva convention, had been swept aside by the real pestilence of IS, in steps Mr Miliband to demand that the government recognise the state of Palestine alongside the state of Israel.” The New York Times recently reported on Miliband’s vote in favour of Palestinian statehood under the headline: British Labour Chief, a Jew Who Criticizes Israel, Walks a Fine Line. Its London correspondent Stephen Castle wrote: “Britain’s center-left Labour Party often sympathizes instinctively with the Palestinian cause, and Mr Miliband is not the first party leader to criticize Israel. Yet his willingness to speak about his family’s story and connections to Israel – showcased in a high-profile visit there this year – has brought a personal dimension to a loaded issue.”
© The Guardian
A 33-year-old minor hockey coach from Langley, B.C. has been fired after posting a series of shocking Nazi propaganda images to his Facebook page.
5/11/2014- Christopher Maximilian Sandau coached players in North Delta before league officials were alerted to his posts, some of which question the Holocaust death toll and suggest prisoners at the Auschwitz concentration camps were well-cared for. Another post features a swastika and reads, “If this flag offends you, you need a history lesson.” The North Delta Minor Hockey Association issued a statement confirming Sandau was let go over the weekend and condemning the material he shared online. “The posts contained extreme and objectionable material believed to be incompatible with an important purpose of our Minor Hockey Association: To promote and encourage good citizenship,” president Anita Cairney said in a statement. “The NDMHA requires that our coaches present themselves as positive role models for our children athletes.” The association said it won’t be commenting further on the advice of its legal counsel, but that alternative coaching arrangements have already been made. On Wednesday, Sandau told CTV News he’s been treated unfairly. The former coach said he was passionate about his job, and gave his players extra practice time every week free of charge.
“I was doing a good job and I wasn’t trying to impose my political beliefs or anything on anyone,” he said. “From the time I stepped onto the parking lot of the arena to the time I left, I was all about hockey and trying to help the kids get better.” Sandau acknowledged his opinions are likely to offend people, but insisted he’s not a neo-Nazi, merely a “history buff” who believes German atrocities during World War II have been misconstrued, or fabricated altogether. Apparent hostility toward Jewish people is a recurring theme in his posts, however. One features the image of a World War II soldier, claiming he was killed “so the Jews could control your banks,” and “so foreigners could run your civil and public services.” Asked about the post, Sandau conceded that “it does generalize a little too much, obviously,” and said he might consider taking that one down. Sandau said he was given a chance to keep his job by changing his Facebook settings and making his posts private, but turned it down on principle. Parents with children on either of the two North Delta minor hockey teams Sandau coached have been informed of his dismissal.
© CTV News
Jamie Bartlett explains why the battle for hearts and minds has moved online
4/11/2014- The head of GCHQ has warned that firms such as Facebook and Twitter are "in denial" about the use of their sites by terrorists and criminals. And he's right: extremists of all kinds have indeed "embraced the web". This is only natural. The battle for hearts and minds is a vital part of any conflict. To be seen as on the side of right; to create a groundswell of popular support; to reach new supporters. Whether it’s Isil or the extreme Right, the aim is to convince people to take your side. If not on the battlefield itself, then emotionally, morally, vocally, financially – and now, digitally. This battle used to be waged from on high: propaganda air dropped from governments and media broadcasters. Now it’s on Facebook and Twitter.
It barely needs saying that social media has been a boon to society – allowing anyone with a message or campaign to reach out to millions of people at almost zero cost. That includes charities, campaigning groups, political dissidents, and the rest. But for angry or violent groups social media is the perfect vehicle to spread a message and win new fans: a free and open way to share and disseminate propaganda to millions of people. What’s more, the cost of producing high-quality videos and multimedia content is now practically nothing. This means that small groups can exaggerate their influence and extend their reach more easily than ever before. And that’s exactly what they are doing.
Let’s start with Isil. So far, they have organised hashtag campaigns on Twitter to generate internet traffic. They then get those hashtags trending, which generates even more traffic. They hijack other Twitter hashtags – such as those about the World Cup, and more recently the iPhone 6, which they use to start tweeting Islamist propaganda – to increase their reach further still. They have posted real time footage from the battlefield, and directed it against their enemies. They use social media "bots" to automatically spam platforms with their content. In short, they are very active indeed: social media is an important part of their modus operandi. Although we’re constantly told that Isil are marketing geniuses, this is all pretty standard for any second-rate advertising company. And why wouldn’t it be? Many Isil supporters are young, Western men for whom social media is second nature. What they have done, crucially, is to create the impression of a much larger groundswell of popular support than they have – and generate enormous amounts of free publicity from the world’s media. (They do this quite deliberately too – directing tweets at the BBC and CNN in an effort to get coverage).
It goes something like this: this media mujahideen – most of whom aren’t even in Syria – post lots of tweets, attaching a hashtag to their tweets to ensure it reaches more people (such as #iphone6). People notice, and start using the same hashtag to criticise the group. Journalists write about how much support and traffic Isil is generating on Twitter, which then gets them mainstream media coverage. Isil will often include the Twitter accounts of major media outlets when they post. @BBCWorld and @BBCTrending were important Twitter accounts through which word spread about the threats Isil made to America. Between 3 and 9 July, a BBC article, “Americans scoff at Isil Twitter threats”, was the most shared article in tweets containing the tag #CalamityWillBefallUS. We’re doing their work for them.
According to Ali Fisher, a specialist who has been monitoring how Islamists use social media for the last two years, these Jihadist propaganda networks are stronger than ever. “They disseminate content through a network that is constantly reconfiguring, akin to the way a swarm of bees or flock of birds constantly reorganises in flight. This approach thrives in the chaos of account suspensions and page deletions.” Fisher calls this a “user-curated swarmcast”. The UK’s far-Right is possibly even more impressive than Isil. Although it might be politically convenient to draw moral equivalences, they are quite different to Isil in their values, radicalism, brutality and threat to national security. Nevertheless, in September the BBC suggested that the far-Right is on the rise in the UK, as a result of Islamic State and sex abuse stories involving men of Pakistani descent. According to a senior Home Office official, the UK government underestimates the threat. He claimed that, since last year, at least five new far-Right groups have formed.
I’m not sure exactly what "far-Right group" means anymore, because the far-Right are also very gifted at using the net to give the impression they are bigger than they really are. For the most part the UK’s far-Right is relatively small and disjointed. Online, though, it's different. Just like Isil, the modus operandi of much of the far-Right has moved online: Facebook, Twitter, YouTube, forums, and blogs. There are hundreds of pages and forums dedicated to every shade of extreme nationalism. New groups pop up and disappear every day, and it’s very hard to work out if they are legitimate or not. Just as with Isil, it’s often a handful of people making a lot of noise, without it necessarily becoming a significant force in the real world. The latest far-Right movement is called Britain First. They've been around for a while – and are perhaps the most cunning users of Facebook of any political movement. They have half a million Facebook "Likes" – far more than the Tories or the Labour Party. They produce and share very good content online: campaigns about the armed forces, about animal cruelty, about child sex abuse. Things that people with little interest in politics would share.
But according to Hope Not Hate, an anti-fascist campaign group, these general campaigns mask a more sinister motive. They argue that Britain First have been involved in intimidating British Muslims, including invading mosques, and call them "confrontational, uncompromising and dangerous". According to Hope Not Hate, Britain First has a core membership of only around 1,500 people – most of whom were followers of former leader Jim Dowson, an anti-abortion campaigner. There are, reckons Matt Collins (a former National Front member who now works for Hope Not Hate), around 60–70 hardcore activists who are "willing to put on their badges and march on the street". But, Collins claims, their use of Facebook to increase their reach is "far beyond" anything he’s seen before. He also claims some of their Likes have probably been paid for. That’s the problem: it’s very hard to know.
NSA whistleblower Edward Snowden has complicated this story considerably. Since his revelations, there has been a significant growth in the availability and use of (usually free) software to guard freedom and keep internet users anonymous. There are hundreds of people working on ingenious ways of keeping online secrets or preventing censorship, designed for the mass market rather than the computer specialist: user-friendly, cheap and efficient. These tools are, and will continue to be, important and valuable tools for democratic freedoms around the world. Unfortunately, along with journalists, human rights activists and dissidents, groups like Isil and the far-Right will be the early adopters.
Censorship is not the answer. The Home Secretary has called for more action on tackling extremism – and I agree that it's necessary – but it's far easier to say than to do. Online, groups and organisations can be shut down and then relaunched quicker than the authorities can phone Facebook’s head office. And here’s the Gordian knot: the more we censor them, the smarter they get. When Isil was kicked off Twitter, some went to Diaspora, which is one of several new decentralised social media platforms run by users on their own servers, meaning, unlike YouTube or Twitter, their content is hard to remove.
The answer is found in the riddle itself. Extremists are motivated, early adopters of technology – and their ideas and propaganda spread person to person, account to account. The battle for ideas used to be waged from on high. But today it’s more like hand-to-hand combat, played out across millions of social media accounts, 24 hours a day. Censorship doesn’t work in this distributed, dynamic ecosystem. But the same tools used by extremists are free to the rest of us too. That gives all of us both the opportunity and responsibility to defend what it is we believe. It was unthinkable three years ago: you can now argue with an Isil operative currently in Syria via Twitter, or with a Britain First activist on Facebook – all from your own home. The battle for ideas online can't be won, or even fought, by governments. It's down to us.
© The Telegraph
3/11/2014- A Kremlin-backed human rights body has assailed a Russian website as “Nazi” and “racist” for claiming that nearly one quarter of Russia’s billionaires are Jewish – but the response from one Jewish leader was more composed. Nikolai Svanidze of the Russian Human Rights Council – a Kremlin-affiliated body with no executive powers – condemned Lenta.ru, which covers the banking sector, for publishing a report that broke down by faith and ethnicity those Russian citizens appearing in Forbes Magazine’s 2014 list of the world’s wealthiest individuals. According to Lenta.ru, 48 of the top 200 wealthy Russians are Jews, with a combined net worth of $132.9 billion. Mikhail Fridman, with a net worth of $17.6 billion, tops the list and is Russia’s second richest man. “It’s a Nazi and racist approach,” Svanidze was quoted as saying by the Slon.ru news site.
But, as JTA reported, Yuri Kanner, president of the Russian Jewish Congress, defended the decision to publish the study. “If you cannot compare the proportion of representatives of various nationalities in the general ethnic composition of the country, it is impossible to understand who is really successful and who is not,” he told the currsorinfo.co.il news website on Oct. 29. He said, however, that he doubted the authenticity of the research. “The proportion of Jews in the population of the Russian Federation is calculated incorrectly. Besides, to compare the Jewish population, which is mainly concentrated in the major cities and has a university degree, with a total mass of Russian citizens, it is not accurate,” Kanner said. Of the Jews who made the list, 42 are of Ashkenazi origin, and together have a net worth of $122.3 billion.
Six Kavkazi Jews (a group also known as “Mountain Jews”) appear on the list, with a combined net worth of $10.6 billion. There are only 762 Russian citizens classified as Kavkazi Jews, according to the Russian Bureau of Statistics, and they represent just 0.00035% of the population. A leading Russian affairs analyst was skeptical of the Kremlin’s motivations in condemning the website, arguing that false claims of Ukrainian anti-Semitism had been advanced in partial justification of the Russian invasion of Crimea – claims that were both condemned and ridiculed by Jewish leaders in Ukraine. Michael Weiss, editor-in-chief of The Interpreter, a magazine covering Russian affairs, told The Algemeiner: “Russian ultra-nationalists and the far right seize on the theme of wealthy, bloodsucking Jewish oligarchs a great deal, but what nobody bothers to say is that the chief enabler of Russian nationalism is Vladimir Putin.”
Weiss pointed out that in spite of stringent laws against extremism, neo-Nazis marched openly in St. Petersburg earlier this year, while later this week, a full array of extremists is expected at the annual Russian March. “Putin is aligned with fascist parties in Europe like Jobbik in Hungary and Front National in France,” Weiss added. “He’s looking to create fifth columnists in Europe, drawn from racist and xenophobic parties with the occasional communist thrown in. So it’s a bit rich for the regime to be calling out antisemitism.”
© The Algemeiner
With 1.35 billion people checking into Facebook every month, there are bound to be things that pop up on your news feed that you’d rather not see.
1/11/2014- The social media site has the difficult job of being a place where people can feel free to share their views, likes and dislikes, but also respect the myriad of cultures and values held by its global audience. What one person may find hilarious, others may find deeply offensive. An Australian mother opened a can of worms surrounding Facebook censorship after complaining that photographs of her giving birth had been removed from the site. Milli Hill, who is shown naked in the pictures, campaigns for positive depictions of childbirth and said Facebook had censored her “powerful female images”. This prompted news.com.au to ask its Facebook followers whether they thought the site responded to offensive material effectively. We received nearly 700 Facebook comments and emails that revealed users had mixed experiences. Some were satisfied with the site’s prompt removal of offensive material, while others were left confused when content that they thought was abhorrent was found not to breach Facebook standards.
Our readers provided examples of content that they had reported, that was investigated and deemed acceptable. They included:
A pornographic cartoon
An animal cruelty video
A video showing a sex act
An image of a man holding the decapitated head of someone else
Graphic photos of a dead baby
A photograph of a man pointing a gun at the head of a baby
A comment that Tony Abbott should be assassinated
A video of a teenager being beaten senseless.
While she was unable to comment on these specific cases, Facebook’s Australian spokeswoman said the site worked hard to create a safe and respectful place for sharing and connection. “This requires us to make difficult decisions and balance concerns about free expression and community respect,” she told news.com.au. “We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organising real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual, e.g. bullying. “Sometimes people encounter content on Facebook that they disagree with or find objectionable but that does not violate our community standards.” Many readers objected to videos or images of animal cruelty, but Facebook considers the context in which the video was posted before taking it down.
This type of content is often posted to condemn it or galvanise people into action in order to stop it. If so, that material is allowed. Similarly, the self-regulating nature of the Facebook community can be more effective than Facebook staffers because people can pressure their friends to remove content through their comments. “Facebook receives hundreds of thousands of reports every week and, as you might expect, occasionally we make a mistake and remove a piece of content we shouldn’t have or mistakenly fail to remove a piece of content that does violate our community standards,” the spokeswoman said. “When this happens, we work quickly to address this by apologising to the people affected and making any necessary changes to our processes to ensure the same type of mistakes do not continue to be made.”
While some news.com.au readers were disappointed with Facebook’s responses to complaints, many others said they were satisfied. Reader Michelle said she had reported content several times and each time the offensive page or material was promptly removed, including get-rich-quick spam, sexual content and racist jokes. Another reader, Cathy, helped to have a number of comments taken down that threatened violence towards Tony Abbott. Meanwhile, Karen said her experience had also been positive. “Not that I am a serial complainer either but I have reported material of graphic violence nature, primarily cruelty to animals, and on one occasion something was removed as a result of that feedback,” she told news.com.au.
How do I report something offensive?
Every update posted to Facebook carries with it a small arrow in the top right corner that allows users to hide the post or report it. If a complaint is made, it is then placed in a queue for assessment.
What does Facebook consider unacceptable?
Nudity: Photos of breastfeeding or Michelangelo’s David are likely to pass the test, however. Milli Hill’s childbirth photographs were most likely taken down because of the nudity depicted
Violence and threats
Self-harm: “We remove any promotion or encouragement of self-mutilation, eating disorders or hard drug abuse,” Facebook says
Bullying and harassment: Repeatedly targeting users with unwanted friend requests or messages is considered harassment
Hate speech: “While we encourage you to challenge ideas, institutions, events, and practices, we do not permit individuals or groups to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition,” Facebook says
Graphic content: Some graphic content is considered acceptable if it is shared for the purposes of condemning it, but it should carry a warning. “However, graphic images shared for sadistic effect or to celebrate or glorify violence have no place on our site,” Facebook says
Privacy violations: Claiming to be another person and creating multiple accounts is a no-no
Selling items illegally
Phishing and spam
Fraud or deception.
Who assesses complaints? Are there programs that do it automatically?
All complaints are reviewed by Facebook staffers, and not by any automatic programs. Complaints are assessed against Facebook’s community standards, which govern what material is acceptable on the site. There are dedicated teams based in the US, Ireland and India, so complaints can be processed around the clock. More serious material is prioritised, but most reports are reviewed within 72 hours. Reporting a post does not guarantee it will be removed. “Because of the diversity of our community, it’s possible that something could be disagreeable or disturbing to you without meeting the criteria for being removed or blocked,” the Facebook community standards page reads. You can find out more about how complaints are assessed here.
What can I do if something I find offensive is not taken down?
Facebook also offers personal controls so every user can hide or quietly block people, pages or applications they find offensive. Facebook has tools for controlling what you see in your news feed, and tools for controlling your Facebook experience generally.
© News Australia
Laws not strong enough to police it, say experts
1/11/2014- Islamophobia has been an ongoing concern in the west since 9/11, but a number of recent incidents in Britain have given rise to a new wave of hatred that experts say is finding a breeding ground online. Part of the problem, researchers say, is that right-wing groups can post anti-Islamic comments online without fear of legal prosecution. “If they were to say, ‘Black people are evil, Jamaicans are evil,’ they could be prosecuted,” says Fiyaz Mughal, founder of Islamophobia reporting web site TellMamaUK.org. But because religious hatred isn't covered legally in the same way that racism is, Mughal says "the extreme right are frankly getting away with really toxic stuff.” Researchers believe the rise of the Islamic State in Iraq and Syria (ISIS) and incidents such as the murder of British soldier Lee Rigby and the recent sexual exploitation scandal in the town of Rotherham have contributed to a spike in online anti-Muslim sentiment in the UK.
Imran Awan, deputy director of the Centre for Applied Criminology at Birmingham City University, noticed the trend when he was working on a paper regarding Islamophobia and Twitter following Rigby's death. Rigby was killed in the street in southeast London in 2013 by two Islamic extremists who have since been convicted. Awan says the anonymity of social media platforms makes them a popular venue for hate speech, and that the results of his report were “shocking, to say the least.”
'A year-by-year increase'
Of the 500 tweets from 100 Twitter users Awan examined, 75 per cent were Islamophobic in nature. He cites posts such as 'Let's go out and blow up a mosque' and 'Let’s get together and kill the Muslims', and says most of these were linked to far-right groups. Awan’s findings echo those of Tell MAMA UK, which has compiled data on anti-Muslim attacks for three years. (MAMA stands for "Measuring Anti-Muslim Attacks.") Tell MAMA's Mughal says anti-Muslim bigotry is "felt significantly," and adds that "in our figures, we have seen a year-by-year increase." Researchers believe far-right advocates are partly responsible for a spike in online hate speech. “There’s been a real increase in the far right, and in some of the material I looked at online, there were quite a lot of people with links to the English Defence League and another group called Britain First,” says Awan.
Both Mughal and Awan believe that right-wing groups such as Britain First and the EDL become mobilized each time there is an incident in the Muslim community. The Twitter profile of the EDL reads: “#WorkingClass movement who take to the streets against the spread of #islamism & #sharia #Nosurrender #GSTQ.” Their Facebook page has over 170,000 likes. Below that page, a caption reads, “Leading the Counter-Jihad fight. Peacefully protesting against militant Islam.” EDL spokesperson Simon North dismisses accusations that his group is spreading hate, emphasizing that Muslims are often the first victims of attacks carried out by Islamic extremists. “We address things that are in the news the same way newspapers do,” says North.
The spreading of hate
Experts in far-right groups, however, say their tendency to spread hateful messages around high-profile cases is well established. North allows that some Islamophobic messages might emanate from the group's regional divisions. But they do not reflect the group’s overall thinking, he says. “There are various nuances that get expressed by these organizations,” North says. “Our driving line is set out very clearly in our mission statement.” According to EDL's web site, their mission statement is to promote human rights while giving a balanced picture of Islam.
Awan argues online Islamophobia should be taken seriously and says police and legislators need to secure more successful prosecutions for this kind of hate speech and be more “techno-savvy when it comes to online abuse.” Prosecuting online Islamophobia, however, is rare in the UK, says Vidhya Ramalingam of the European Free Initiative, which researches far-right groups. That's because groups like Britain First, which have over 400,000 Facebook likes, have a fragmented membership and do not have the traditional top-down leadership that groups have had in the past. Beyond that, UK law allows for the parody of religion, says Mughal, which can sometimes be used as a cover for race hate. “The bar for prosecution of race hate is much lower, because effectively the comedic lobby has lobbied so that religion effectively could be parodied.”
The case in Canada
Online Islamophobia is also flourishing in Canada. The National Council of Canadian Muslims (NCCM) is receiving a growing number of reports. But there are now fewer means for prosecuting online hate speech in Canada. Section 13 of the Canadian Human Rights Act protected against the wilful promotion of hate online, but it was repealed by Bill C-304 in 2012. “It’s kind of hard to say what the impact is, because even when it existed, there weren’t a lot of complaints brought under it,” says Cara Zwibel of the Canadian Civil Liberties Association. Though there is a criminal code provision that protects against online hate speech, it requires the attorney general’s approval in order to lay charges — and that rarely occurs, says Zwibel.
Section 319 of the Criminal Code of Canada forbids the incitement of hatred against “any section of the public distinguished by colour, race, religion, ethnic origin or sexual orientation." A judge can order online material removed from a public forum such as social media if it is severe enough, but if it is housed on a server outside of the country, this can be difficult. Ihsaan Gardee, executive director of NCCM, says without changes, anti-Muslim hate speech will continue to go unpunished online, which he says especially concerns moderate Muslims. “They worry about people perceiving them as sharing the same values these militants and these Islamic extremists are espousing.”
© CBC News
By Sam Volkering
28/10/2014- What’s the best way to start a riot? Let me help you out…suppress free speech. It’s possibly the number one reason people protest. And if the crowds face heavy-handed control measures, these protests sometimes turn into full-blown riots. Communities rally with greater force now than ever before thanks to social networks. Today, if your cause is engaging enough, it’s easy to rally the troops. A strong social media collective can be as powerful as a state army. In fact, using social networks is the best way to start a movement. Look at the Arab Spring or Euromaidan in Ukraine. Even the recent protests in Hong Kong…each was organised through social networks.
So much fear, so many reasons to protest
The world is in a very volatile state. And I’m not even talking about the markets. Ebola spread across western parts of Africa like wildfire, and now the whole world is panicked over it. Scandalous ‘news’ headlines don’t help. You can’t avoid it on social media either. In between ‘news’ about The Bachelor, all I see on my Facebook feed is horrible news: beheadings, ISIS and Ebola currently dominate. Thank the world for cat videos…oh blessed be the cat videos. At least there’s something to smile about day to day… But, along with Ebola, there’s plenty else wrong with the world. ISIS has created racial and religious tension not just in Islamic nations but also across the world. Earlier in the month, there were fatal protests in Turkey. Over the weekend, there was a violent riot in Cologne, Germany. The target of the protest — Islamic extremism. The Cologne protest was organised by a far right, neo-Nazi group. The protest had around 4,000 people, according to IBTimes. This was double the number expected by police. Most of the protesters were gathered through social media. And things got ugly. Riot police were called in. Water cannons and pepper spray were shot…
Just when you thought it couldn’t get worse
Of course, much of this violence is a direct result of the actions of global leaders. Whether related to ISIS or not, the violence and protests around the world stem from misguided government policies. The idea of the protest is nothing new, but social networks are. And the combination of the two has created greater influence on decision makers by the people. The voice of many is always more powerful than the voice of a few. For better or worse, connected networks allow people to share a voice and a view like never before. I highlight this because trouble is brewing in one particular eastern European country…one you probably wouldn’t expect. This country’s government is trying to implement one of the most regressive, oppressive policies of the modern era… The Hungarian government wants to implement an ‘internet tax’. The draft bill has a provision where a tax is paid to the government revenue collectors per gigabyte of data transfer. This would apply to consumers and businesses.
Hungary already has the highest VAT (GST) rate of any country in the world at 27%. You can see why another tax has angered the people of Hungary. But more than that, it’s widely viewed as the government taxing the freedom of information. The internet is perhaps the greatest tool of all time for creating and accessing information. It’s why we live in the ‘information age’. Anyone can use the internet to express opinions, ideas, ideals and views. It’s the ultimate tool for freedom of speech. On Sunday, approximately 100,000 Hungarians gathered in front of the Economic Ministry to protest these regressive laws. And the protest was organised through Facebook by a group with over 210,000 followers. Word spread through Facebook, Twitter and other social networks, and the people came together to have their say. As part of the protest, attendees held up their phones as a sign to the government.
This protest was peaceful, but it proved a significant point: Governments should not try to enact policies that aim to restrict what has become an essential human right —that is, access to information. That’s really what the internet is, after all — the world’s biggest collection of information. And it should be free to access by anyone, anywhere as a basic human right. It’s an optimistic goal, but hopefully, one day, the entire world will have free access to the internet. The world should also strive for clean water, food and shelter for all. But perhaps the internet is equally as important. Perhaps the internet could provide the information to help communities achieve those other goals… Regardless, social networks are clearly crucial to connecting and empowering people. And the internet is the backbone of that power. When governments try to restrict our freedom of information, they will face a resolute and defiant community.
© Tech Insider
As 'Hitler' Twitter account gains more and more followers and Facebook page displays 'list of Jews,' Foreign Ministry and EU representatives discuss ways to combat anti-Semitism.
28/10/2014- "It's hard being openly Jewish in Europe today," Gideon Bachar, the Director of the Department for Combating Anti-Semitism and Holocaust Remembrance in the Foreign Ministry, said Monday. An experts' meeting on the topic of fighting anti-Semitism and racism conducted in Jerusalem today led to various estimates as to the future of the Jewish community in Europe and links between radical Islam and anti-Semitism.
The "Hitler" account has 370,000 followers
"Anti-Semitism is like Ebola," Bachar said. "It's a virus. It constantly accumulates mutations. It changes all the time, adapts itself to the situation, and is transnational. The rise in anti-Semitism is a danger to civilization and to democracy in general." The meeting was attended by Yad Vashem representatives, the State Attorney's Office, the Association of Israeli Students and European Union representatives. "We are witnessing a strong willingness and desire to take action against this phenomenon," Bachar said. "There is an understanding of the problems it poses. Europe is seeing a steady and substantial increase in anti-Semitism." Ido Daniel, Program Director at Israeli Students Combating Anti-Semitism, displayed during the meeting a photo of a French Facebook page with names, pictures and information about Jewish residents, including their place of prayer and the parks where they take their children. He also showed the attendees a faux Adolf Hitler Twitter account, with more than 370,000 followers, that has since been suspended. "The man tweeted a picture of Birkenau and wrote: 'It's a great day at work today,'" Daniel read out.
According to Bachar, various European initiatives which include prohibitions on circumcision and kosher slaughter "do not stem from anti-Semitic motives, but they do pose a real threat to the continued existence of Jewish life in Europe. Apart from them, hundreds of anti-Semitic demonstrations have taken place. We have identified three phenomena: emigration, assimilation and isolation." Bachar also spoke about recent incidents in which people removed mezuzahs from their doors, concerns about wearing a yarmulke in public on the way to the synagogue, and the hiding of Jewish identity.
© Y-Net News
27/10/2014- Up to 10,000 people rallied in Budapest on Sunday (26 October) to protest against Viktor Orban’s government and its plan to roll out the world’s first ‘internet tax’. Unveiled last week, the plan extends the scope of the telecom tax to Internet services and imposes a 150 forint tax (€0.50) per gigabyte of data transferred. European Commission spokesperson Ryan Heath said that, under the tax hike, streaming a movie would cost an extra €15. Streaming an entire TV series would cost around €254. The levy, to be paid by internet service providers, is aimed at helping the indebted state fill its coffers. Hungary’s economy minister Mihaly Varga said the tax was needed because people were shifting away from phones towards the Internet. But unhappy demonstrators on Sunday threw LCD monitors and PC cases through the windows of the headquarters of Fidesz, Orban’s ruling party.
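The figures quoted above can be sanity-checked with a few lines of arithmetic. Note that the data volumes used here (roughly 30 GB for a streamed movie, about 508 GB for a full TV series) are assumptions back-derived from the euro amounts in the article, not figures stated in the draft bill:

```python
# Rough check of the quoted 'internet tax' costs.
# Assumption: 150 forints per gigabyte ~= EUR 0.50 at the time.
TAX_EUR_PER_GB = 0.50

def streaming_tax(gigabytes: float) -> float:
    """Extra cost in euros of transferring `gigabytes` under the draft levy."""
    return gigabytes * TAX_EUR_PER_GB

movie_tax = streaming_tax(30)    # consistent with the EUR 15 per-movie estimate
series_tax = streaming_tax(508)  # consistent with ~EUR 254 for a full series
```

At half a euro per gigabyte, the quoted €15 and €254 figures imply data volumes in this range, which is why critics argued the levy would price ordinary streaming out of reach.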
A Facebook page opposing the new tax attracted thousands of followers within hours of being set up after the regime was announced. The page called for a protest, with some 40,000 people having signed up by early Sunday evening. Hungary’s leading telecoms group Magyar Telekom told Reuters the planned tax “threatens to undermine Hungarian broadband developments and a state-of-the-art digital economy and society built on it”. The proposal has generated controversy in Brussels as well. The EU’s outgoing digital chief Neelie Kroes on Sunday told people to go out and demonstrate. “I urge you to join or support people outraged at #Hungary Internet tax plan who will protest 18h today,” she wrote in a tweet. The backlash prompted Orban’s government to roll back the plans and instead place a monthly cap of 700 forints (€2.30) for private users and 5,000 forints (€16) for businesses.
The concession did little to appease critics, who say the levy will still make it more difficult for small businesses and impoverished people to gain Internet access. Others say it would restrict opposition to the ruling elite. “This is a backward idea, when most countries are making it easier for people to access the Internet,” a demonstrator told the AFP. "If the tax is not scrapped within 48 hours, we will be back again," one of the organisers of the protest told the crowds. Orban, who was elected for a second term in April, has come under a barrage of international criticism for other tax policies said to restrict media freedoms amid recent allegations of high-level corruption. Civil rights groups say the Fidesz-led government fully controls the public-service media and has transformed it into a government mouthpiece.
An advertising tax imposed in August risks undermining German-owned RTL, one of the few independent media organisations in Hungary, which does not promote a pro-Fidesz editorial line. Kroes has described the advertising tax as unfair and one that is intended “to wipe out democratic safeguards” and to rid Fidesz of “a perceived challenge to its power.”
© The EUobserver
26/10/2014- Pavee Point strongly condemns any actions to intimidate and promote violence against Roma in Waterford. This follows the publication of multiple Facebook pages which openly incite hatred against Roma, and reports of a public order incident on Saturday evening where up to 100 people are reported to have gathered outside the home of Roma living in Waterford. The content on these Facebook pages has to date shown huge misinformation and racism towards Roma and has included inflammatory, dehumanising and violent language. There is a clear link between online hate speech and hate crime, and there is an urgent need to address the use of the internet to perpetuate anti-Roma hate speech and to organise violence.
European institutions and groups such as the European Roma Rights Centre have raised concerns about rising violence in Europe and the strengthening of extremist and openly racist groups which spread hate speech and organise anti-Roma marches. Attacks in other European countries have included several murders of Roma. We don’t want this to become a feature in Ireland. “Anti-Roma racism does not occur in a vacuum and we now need strong public and political leaders to be visible, vocal and openly condemn anti-Roma actions in Waterford” said Siobhan Curran, Roma Project Coordinator Pavee Point. “At a national level a progressive national strategy to support Roma inclusion in Ireland needs to be developed as a matter of urgency” she continued.
Pavee Point calls on all elements of the media to take on board the recommendations from the Logan Report and avoid sensationalist and irresponsible reporting.
© Pavee Point
Our world is now more connected than ever. Technology – specifically social media – allows us to establish lines of communication hitherto unthinkable.
31/10/2014- As technology has developed and the use of social media proliferated, unfortunately so too has the echo chamber for racism expanded. As chairman of the All-Party Parliamentary Group Against Antisemitism and with the Inter-Parliamentary Coalition for Combatting Antisemitism, I have been working together with the industry and MPs from across the world to tackle cyber hate. Predominantly, this has been through improved self-regulation by the companies in question. In September, I went to California to agree protocols on hate speech on the internet with Facebook, Twitter, Google and Microsoft. They were among others that endorsed a series of pledges to introduce better, user-friendly reporting systems and more rapidly respond to allegations of abuse. It is easy to look at the big picture and work with companies to implement frameworks to tackle abuse.
It is, of course, a very different experience to be on the receiving end of anti-Semitic hate and death threats. Recently, an important precedent was established when a man who had sent an anti-Semitic tweet to my parliamentary colleague, Luciana Berger MP, was jailed. While civilised people the world over celebrated the news, it elicited quite the opposite response from Nazi sympathisers and far-right extremists. Taking inspiration from one lunatic, posting articles to an American server, a number of ‘activists’ took to Twitter in an attempt to orchestrate a campaign of hate and vitriol. I was not prepared to let Luciana fight this alone and so raised a point of order at Prime Minister’s Questions and queried whether Twitter might be brought to the Commons to answer for the hate that was being espoused through its platform. Subsequently I, too, became subjected to the ire of fascists and racists on Twitter. If you have ever suffered abuse through the medium of Twitter, you will know how difficult it is to report it and have action taken. Given the work I have already done with the company, it should not come as a surprise that I was able to make contact and request action.
While individuals have sought to be helpful, hateful accounts and messages targeting both Luciana and me remain online, and my experience points to a significant structural failure to curb racist activity on social media. In November, I will visit the European HQs of Twitter and Facebook with parliamentary colleagues and will take my concerns to them. This week, I led a debate in the House of Commons about these matters and asked the government and the parliamentary authorities what action they would be taking. I set out a number of practical suggestions. What happens on social media has real world consequences. I expect Twitter to make it easier for any victim of abuse to report hate so the threat of harm is reduced. I want the social media companies to invest extra resources in tackling cyber hate.
Protocols that companies have signed up to, such as the ICCA/ADL accord, should be honoured, and there should be more transparency so it is easier to contact people working for these companies. I expect these companies to work proactively to develop algorithms that identify repeat offenders and combinations of key words so that, when they appear together, the content is automatically removed. I want racist and anti-Semitic pictures to be taken off these platforms and I want police RIPA requests to be processed more speedily in the UK. Specifically, I want our police and courts to ensure they are at the forefront of the fight against cyber hate. Sex offenders can be barred from social media and from online activity.
I believe that if they show a considered and determined intention to exploit social media networks to harm others, individual perpetrators of harassment and racist abuse should also be subject to such a ban. If they can do it for child exploitation, then they can do it for racism and anti-Semitism. Technology has helped us to create new and important means of communication. I will not allow the racists and anti-Semites of the world to be the primary beneficiaries.
© Jewish News UK
30/10/2014- Local software development company, PDMS are delighted to announce that they have been working with PNLD (Police National Legal Database) in the UK to provide the technology for their latest project - an innovative new web service aimed at helping victims and witnesses of crime. The website, aptly named www.helpforvictims.co.uk was launched on Friday 24th October by Yorkshire’s Police and Crime Commissioner, Mark Burns-Williamson, with an event in Leeds where Baroness Newlove, the UK Government’s Victims’ Commissioner was present to support the launch. Funded by the Ministry of Justice, it is hoped that the website will be rolled out to other police forces across England and Wales.
With the introduction of Help for Victims, individuals in Yorkshire will be able to immediately access all the information contained within the Victims’ Code and the Witness Charter in a question and answer format. The website also includes individual pages dedicated to over 400 local supporting organisations, which can help with concerns such as cyber bullying or hate crime, with trained advisers on hand to give advice. Additionally, the website utilises a self-referral service to local organisations who can provide particular specialist victim and witness services beyond the website.
Chris Gledhill, Managing Director of PDMS commented, “The new website is an integral part of Mr Burns-Williamson’s Police and Crime Plan to ensure victims and witnesses in Yorkshire receive high quality support exactly when they need it. It is the only website of its kind that facilitates all of their local resources, whilst providing one place for clear and concise advice with regards to the criminal justice process and rights from the Victims Code. As well as English, the site has been translated into the five most frequently spoken languages in West Yorkshire - including Gujarati, Urdu, Punjabi, Arabic and Polish, and will shortly be launched in iOS and Android App format too”.
PDMS have been PNLD’s technology partner for over 10 years, helping them provide a range of services to the police and wider criminal justice sector in Yorkshire. Previous technology projects have included the Police National Statistics Database (PNSD), an internet-based solution allowing Police Forces to comparatively analyse and examine statistics at national and local levels, the ‘Ask the Police’ Portal (www.askthe.police.uk) for the Police Service in England and Wales, which is estimated to save forces over £25 million per year, and Apple and Android ‘Ask the Police’ apps, which reached over 30,000 downloads shortly after launch.
© Isle of Man
With political campaigns increasingly being fought on social media, The Telegraph investigates the rise of Britain First, a tiny group with more likes on Facebook than the three main parties
27/10/2014- Started in 2011 by former BNP members Paul Golding and Jim Dowson, Britain First describes itself as “a patriotic political party and street defence organisation”. The group has amassed almost 500,000 likes on Facebook compared to the Conservatives on 293,000, Labour with 190,000 and the Liberal Democrats’ 104,000. This popularity has led to questions about how the group has managed to gain so many likes when its offline activities seem to draw few supporters in comparison. I met the leader of Britain First, former BNP communications chief Paul Golding, and asked him about the kind of posts the group was using to attract likes. One tactic they employ is to post pictures of animal cruelty with text asking people to “Like and share if you demand far harsher penalties for those who mistreat animals”.
“All the top grossing charities in this country are animal charities and there’s a reason for that. We’re just tuning into the nation’s psyche (by) posting stuff like that,” explained Mr Golding. Creating posts which appear to have little to do with the aims of the group and which seem aimed simply at garnering the largest number of likes is a tactic used by many far right groups, according to Carl Miller, a social media researcher for the think tank Demos. “Far right groups have always wanted to appear more popular and influential than they are; this is one of the ways in which they think they can have influence on mainstream political decisions.” The people who respond to these messages online may not be aware of the kind of activities their likes are being used to support offline. Britain First has run a campaign of what they call ‘Mosque Invasions’. One of these took place at Crayford Mosque, in Kent, in July of this year. Filmed by Britain First, the ‘invasion’ consisted of a small group dressed in matching green jackets entering the mosque and demanding to see the Imam.
A gentleman inside the mosque points out that they are standing on the prayer mat with their shoes on, to which Mr Golding responds “Are you listening?” before demanding that the mosque remove signs denoting separate entrances for men and women outside. The man asks again for the group to leave and eventually convinces them to go after promising to remove the signs. Before leaving, Mr Golding warns him: “You’ve got one week to take those signs down otherwise we will.” When challenged about the validity of these tactics, Mr Golding said his organisation would not treat those who followed Islam with respect because, in his opinion, they treated women like second class citizens. “We didn’t make a distinction in the second world war between moderate Nazis and extreme Nazis, did we? We just went to war,” he said. Buoyed by the success of their Facebook page, Britain First plans to stand in the Rochester and Strood by-election. How they poll will reveal whether the likes they have accrued online translate into votes offline.
© The Telegraph
30/10/2014- A neo-Nazi website based in the US is behind a co-ordinated campaign of antisemitic abuse targeting Britain's youngest Jewish MP, the JC can reveal. The site provides a user guide to harassing Luciana Berger and has created offensive images to be shared by internet trolls and sent to her via social media sites. It carries a series of "dos and don'ts" for those who intend to abuse Ms Berger. The site advises trolls not to "call for violence, threaten the Jew b---h in any way. Seriously, don't do that". But it goes on to encourage calling her "a Jew, call her a Jew communist, call her a terrorist, call her a filthy Jew b---h. Call her a hook-nosed y-- and a ratfaced k---. "Tell her we do not want her in the UK, we do not want her or any other Jew anywhere in Europe. Tell her to go to Israel and call for her deportation to said Jew state."
Advice on the easiest ways to set up anonymous Twitter accounts and email addresses to limit traceability is also available on the website. It posts hundreds of racist articles targeting black people, Muslims and Jews. Ms Berger received around 400 abusive messages on Twitter last week. Many carried the hashtags and images created by the American site, which urged trolls to join "Operation: Filthy Jew Bitch". The campaign against the Liverpool Wavertree MP was set up last Monday, hours after Merseysider Garron Helm was jailed for sending her abusive messages. Helm's imprisonment was heralded as an "important precedent", but it is now clear that his abuse was merely the tip of an iceberg.
The JC understands the Labour shadow cabinet member has received death threats amid the series of "deeply threatening" messages. She has not commented on the abuse, but friends said she was feeling isolated after the "relentless" storm of offensive tweets. "Luciana is sickened by what's flashing up on her phone on a minute-by-minute basis," said one. "It's hard for her being the focus of something so sinister and global and relentless." A coalition of security groups, police and Twitter have been investigating the source of the messages and have shut down some accounts. The operation against her is being orchestrated by the racist, white nationalist website Daily Stormer. It is run by Andrew Anglin, who has previously been filmed at Berlin's Holocaust memorial mocking victims of the Shoah and questioning the number of Jews who were murdered. The site promotes use of the #HitlerWasRight hashtag.
It provides what is effectively a resource pack of racist images which it advises trolls to use to "flood" Ms Berger's Twitter account. Among the images are those of the MP next to Labour's Jewish leader Ed Miliband with a yellow star with the word "Jude" superimposed on their heads. The call to action concludes by urging abusers to use the hashtags #FilthyJewB---h and #FreeGarronHelm on every tweet targeting Ms Berger. "We will not bow to Jews. We will not be silenced by Jews. We will not allow Jews to destroy the nations that our ancestors spilled blood on to build on this sacred land." When Twitter began to block the tweets late last week, Daily Stormer users began posting Ms Berger's email address on internet forums. A website claiming to be Britain's "number one nationalist newspaper" also highlighted Helm's conviction.
The Daily Bale, run by "nationalist" Joshua Bonehill-Paine, said that as a former director of Labour Friends of Israel, Ms Berger was a supporter of "institutional state child murderers", a "money grabber" and a war criminal. The JC understands police are investigating the comments. Ms Berger's parliamentary colleagues and members of the Jewish community have responded by posting messages of support online. Lord Wood, a Labour peer and adviser to party leader Ed Miliband, wrote on Twitter: "The vile antisemitic abuse of Luciana Berger online only succeeds in uniting everyone in her support and in revulsion against those behind it." Baroness Royall, Labour's leader in the Lords, tweeted: "Luciana Berger is a terrific MP, friend and colleague - a very fine woman. The racist abuse against her must stop. It's abhorrent."
Board of Deputies vice president Jonathan Arkush tweeted: "Racist abuse of Luciana Berger is nauseating and disfigures our country. Perpetrators should expect to go to prison. We value and support her." The case was raised in Parliament on Wednesday, with Commons Speaker John Bercow condemning the abuse as "despicable and beneath contempt".
© The Jewish Chronicle
Internet trolls are among the worst specimens the human race can offer. But they are not a reason to nod through another restriction on personal freedom
By Nick Cohen
26/10/2014- No one has tested my commitment to liberalism so sorely as Edinburgh University’s Feminist Society. I know I should believe in freedom of speech and changing minds with arguments, not punishments, and all the rest of it. And, trust me, I do. Or rather I did, until the moment Edinburgh’s feminist students said they wanted to kick the Socialist Workers party out of their campus. The BNP of the left has had a malign influence on public life far beyond its numbers. In the universities, it has been at the forefront of thuggish demands that there must be “no platform” for fascists or supporters of Israel or, it seems, anyone else it disagrees with. The desire to censor has reached the absurd state where the academic left has banned women’s rights campaigners, who have upset transsexuals, and admirers of Friedrich Nietzsche, who have upset students who had not read him but know he was a bad person.
After this disgraceful record, it is worth enjoying the plight of the SWP for hours – maybe weeks. The censor faces censorship. The fanatics who have screamed down so many others could be screamed down themselves. No one can deny that Edinburgh’s women have good reason to go after the Trots. Like priests in the Catholic church and celebrities in light entertainment, the leaders of a Marxist-Leninist party are men at the top of a hierarchy that demands obedience. Last year, a succession of women alleged that senior figures in the party had demanded their sexual compliance. Rather than tell them to take their cases to the hated “capitalist” courts, the SWP set up its own tribunals. The alleged victims said it subjected them to leering questions worthy of the most misogynist judge about their sex lives and alcohol consumption, then duly “acquitted” the “accused”.
Eleanor Brayne-Whyatt of the Edinburgh Feminist Society has a point when she says that universities will show they do not tolerate “rape apologism and victim blaming” if they order the SWP to leave. Even if you want to differ, you may find the task of contradicting her beyond you. We have reached a state where arguing that a speaker has the right to free speech is the same as agreeing with his or her arguments. If you say that racist or sexist views should not be banned, you are a racist or rape apologist yourself. Your opponents then go further and accuse you of ignoring the “offence” and “pain” that victims of racism and sexism have suffered, and turn you into an abuser as well. With remarkable speed this double bind knots itself around its targets. Defend a repellent man’s right to speak and you become that repellent man, and his victims, real or imagined, become your victims too. Small wonder so many keep quiet when they should speak up.
Observer readers may not care, as most modern prohibitions on speech are – to put it crudely – instances of leftwing censorship of prejudiced views. If so, you should notice how easy the right finds it to march in step alongside you. Chris Grayling, a Tory bully boy, announced last week that he would quadruple the maximum jail sentence for internet trolls who spread “venom” on social media or, rather, he fed an old story from March to a naive and punitive media. Even though internet trolls are among the worst specimens the human race can offer up for inspection, there are many reasons not to nod through yet another hardline restriction of personal freedom. Interest groups like nothing better than exploiting the law. We’ve already seen supporters of the McCanns, who were understandably aggrieved by the abuse the family received online, turn into troll catchers. They collected a dossier and passed it to Sky News and the police. The hunters unmasked one of the McCanns’ tormentors as Brenda Leyland, who took her own life within hours of her exposure, a reminder that many trolls are mentally ill and need treatment rather than prison.
Meanwhile, as the free speech campaigners at English Pen reminded me, the white right and far right have learned from the left and can be as politically correct. Their most recent success was to demand that the police prosecute one Azhar Ahmed from Dewsbury. He admitted posting a Facebook message two days after the killing of six British servicemen in Afghanistan: “All soldiers should die and go to hell,” it read. A disgusting statement, no doubt, but put in different terms, the belief that British troops should not be in “Muslim lands” is a political sentiment, not a criminal act. The court nevertheless found him guilty of the criminal offence of making a “grossly offensive communication”. The prosecutors did not say that he was inciting violence against British troops, simply that he was offensive. Two can play at that game. The Islamist religious right can respond in kind and demand prosecutions for Islamophobia, and before you know it we will be off on a cycle of competitive grievance.
Only last week, the authorities recalled Tommy Robinson, the former leader of the extreme right English Defence League, to prison – apparently for tweeting that he planned to criticise the police. I carry no brief for the man, but his detention feels all wrong. It would be far better if social media sites and newspapers stopped inciting people’s ugliest instincts by allowing them to post anonymously. It would be better still if politicians reformed a law that is alarmingly vague. The state can charge citizens for words that are “grossly offensive,” as Azhar Ahmed found. No government should be allowed to get away with such a catch-all charge. Every sentiment beyond the blandest notions “offends” someone. “Offensive” is a subjective term, which is wide open to political manipulation by loud and vociferous interest groups and the government of the day.
The only respectable reason for banning organisations or punishing individuals is if they incite violence against others. Unless feminists can prove that the SWP promotes rape as a matter of party policy – and I don’t think they can – they remain free to despise it, harangue it and oppose and expose its many stinking hypocrisies, but they have no moral right to order it off campuses. I know I am going to regret writing that last sentence. Indeed, I am regretting it already. But it remains the case that a country where it’s a crime to be offensive is a country where everyone can try to ban everyone else.
© Comment is free - The Guardian
Editors' Note: This story includes references to hate speech and other language that readers may find offensive.
26/10/2014- In September, a group of black women penned an impassioned letter to the people who run Reddit entitled: "We have a racist user problem and reddit won't take action."
Posted by the user pro_creator, who serves as a moderator on the subreddit /r/blackladies, it was cosigned by the moderators of more than 60 other subreddits. "Since this community was created, individuals have been invading this space to post hateful, racist messages and links to racist content, which are visible until a moderator individually removes the content and manually bans the user account," the message said. "reddit admins have explained to us that as long as users are not breaking sitewide rules, they will take no action," the letter added. Therein lies the issue. Reddit has a hate speech problem, but more than that, Reddit has a Reddit problem.
A persistent, organized and particularly hateful strain of racism has emerged on the site. Enabled by Reddit's system and permitted thanks to its fervent stance against any censorship, it has proven capable of overwhelming the site's volunteer moderators and rendering entire subreddits unusable. Moderators have pleaded with Reddit for help, but little has come. As the letter from /r/blackladies mentions, the bulk of what racists perpetrate on the site is within Reddit's few rules. And the site's CEO has made clear, even through criticism surrounding high-profile events like the celebrity nude leak, that those rules are not going to change.
This has put the front page of the Internet in a tenuous position. Having just completed a funding round, the site is poised to begin monetizing. That will mean convincing advertisers to put ads next to its user-generated content. It is a situation in which an unstoppable force meets an immovable object. Hate speech on Reddit is proving uncontainable while Reddit refuses to change. The situation has left moderators — essential cogs in the site's operation — as the site's last line of defense against some of the darkest parts of the Internet. It is a battle they are losing.
Down with the upvotes
It's just not that hard to manipulate Reddit. Motivated racists have proven capable of affecting everyone from smaller groups like /r/blackladies to huge subreddits like /r/news, which has more than 3.9 million subscribers. Reddit relies on a democratic “upvote” and “downvote” system that surfaces or buries content and comments. It’s a system that can be gamed by motivated groups. Allied redditors can vote en masse to push content and comments to the top of subreddits, a move known as "brigading." This is frowned upon — but it’s not technically against the rules. The site also allows users to quickly create anonymous accounts. Bands of anonymous, racist users can completely overrun smaller subreddits, which is what happened to /r/blackgirls, a predecessor to /r/blackladies.
"Our sub was created after a previous sub we'd frequented was overrun [by] hate groups," pro_creator said in an email to Mashable. The user requested anonymity out of fear of "doxxing," or the public disclosure of personal information online. The abuse "would come in waves as they grew upset with being rejected and banned." Racist redditors had previously congregated at /r/n*ggers, a subreddit that was eventually banned for its open attempts to brigade other subreddits, including /r/blackgirls. A year and a half later, /r/blackladies, which bills itself as "designed specifically to be a safe space for black ladies on Reddit," is dealing with the same problem. Moderators are growing weary.
In addition to the upvote and downvote system, moderators, known as "mods," are also a key part of Reddit. These unpaid volunteers regulate each subreddit, some of which have millions of subscribers. They have the power to block comments and ban users from their particular parts of the site. In the face of the types of organized attacks that hate groups have mounted on subreddits large and small, those tools are woefully inadequate, moderators say. Tyler Lawrence, a moderator of a variety of subreddits including /r/news, said that consistent and coordinated attacks have caused him to consider drastic action. "This has become such a huge issue in /r/news alone that I've at multiple points considered outright closing comment sections to prevent hateful brigading from racist communities within Reddit," Lawrence told Mashable in an email.
Moderators’ pleas have almost entirely fallen on deaf ears. Reddit’s commitment to remaining as open as possible is well documented. Most recently, Reddit CEO Yishan Wong penned a defense of the site’s lack of action concerning its role in disseminating leaked celebrity photos. Moderators who spoke with Mashable are fatalistic about the site’s future. If Reddit was built in part by the darker corners of the site, why would it change now? “There's no desire to address the various -isms that have grown to dominate the site, so it doesn't seem like it will be resolved any time soon. Which is unfortunate, because the attitudes displayed by a good number of Reddit's target demographic are firmly on the wrong side of history,” pro_creator wrote. “The site is positioning itself as a playground for racists and misogynists. And if racism and sexism are paying the bills, why would they move against it?” Reddit declined to respond to questions on this topic.
Reddit at its core is a group of communities. The site's structure and format — relying on the voting system to elevate or bury content and comments — made it the ideal place for users with any number of interests to connect. Reddit now hosts thousands of sections, known as subreddits, and served more than 170 million unique users last month. Censorship is the site's mortal sin, even when applied to the most odious content. This laissez-faire ideology is an ingrained part of the platform, lending it a certain legitimacy. All are welcome and governed by the same rules. This has led to the site playing host to a certain amount of racism and hate speech. Racism on the Internet preceded Reddit, and it will exist if the site ever goes away. But there was a relative peace among the various groups, which operated under something of an unspoken detente. You stay in your corner, we stay in ours.
That is, until the 2012 shooting of Trayvon Martin by George Zimmerman. "The Zimmerman trial really stands out in my mind. It served as a rally point for racists everywhere," said Logan Hanks, a former Reddit programmer, in an email to Mashable. "This manifested on Reddit as a lot of new racist memes popping up here and there, drama around racists squatting on the 'TrayvonMartin' subreddit to mock the African-American community, and an uptick in bullying directed at minority subreddits," he said. "It became a prime opportunity to mock and harass minorities on Reddit." Since then, a battle has raged between Reddit's corps of volunteer moderators and racist activists. "After the Zimmerman trial, they were briefly dispersed, but never entirely gone, and this year they've returned as strong and bold as ever," Hanks said.
Racism is nothing new to the Internet, but rarely has it been so organized and on a platform that can quickly put it in front of millions of users. Numerous moderators who spoke with Mashable for this story say that hate groups are coordinating to disrupt large, mainstream sections of the site and occupy others. Moderators can delete posts and comments that violate subreddit rules. Some have taken screenshots of attacks in hopes of providing evidence to admins — Reddit employees who help run the site — and spurring them to take action. There's also evidence of plans to take these efforts to Twitter. Lawrence, the moderator, sent Mashable a screenshot as an example of the type of activity he has had to deal with on a near-daily basis.
Many subreddits have their own rules, enforced by moderators. It is up to them to regulate content and comments with limited tools. They can block users and delete comments, but these efforts are sometimes not enough. While Reddit has 65 employees, it relies on thousands of unpaid mods. "The tools available to mods mainly offer limited reactive approaches, so they have to monitor submissions 24 hours a day to remove slurs and ban each new account created specifically to bully them," said Hanks, who was known to be a particularly active admin during his time at Reddit. "Whenever they were hit by a particularly hard deluge they would escalate to us, and sometimes we were able to stem the tide briefly," Hanks said. "If things get too bad, they have to close their subreddit until the bullies and trolls forget about them and move on."
It's also a strain on the mods. Ryan Perkins, a moderator of several subreddits, said in an email that he had lost count of the number of racist commenters he has had to ban. "This makes moderating any reasonably large subreddit with an eye towards being inclusive actually quite a lot of very emotionally and mentally taxing work," he said. Racism is only one type of hate speech on Reddit. The site has seen similar battles surrounding misogyny and more recently the GamerGate fiasco. Reddit has taken some action against organized hate groups. Banning the original hub for anti-black hate speech was a big step, but one that lacked much impact. Banning either a subreddit or a user is among the most aggressive moves that Reddit administrators can take. It also barely changes anything. New subreddits are easily formed, and new usernames created.
The moderators Mashable spoke with pointed to “the Chimpire,” a group of subreddits that had become the new hub for hate speech on Reddit. Two moderators associated with the Chimpire told Mashable through Reddit’s messaging system that brigading was forbidden in their subreddits and denied organized attempts at vote manipulation.
No help in sight
Successful platforms that began with a spirit of openness have learned to change quickly as they attempted to turn into successful businesses. Facebook and Twitter decided, whether as a moral or a business decision, that freedom of expression on their platforms has limits. Tumblr cracked down on porn. Reddit recently announced a $50 million round of funding. It has been eight years since Condé Nast parent company Advance Publications bought Reddit, and it's no secret that the site is trying to figure out how to monetize. The recent celebrity leak just about coincided with news of the fundraising round, putting the site in an awkward position. In this case, Reddit took action. A subreddit called /r/TheFappening that had been created to host the leaked pictures was eventually banned. That move drew no shortage of criticism within Reddit for a perceived double standard.
"The core in this case is the same as the core in the celebrity hacking scandal, except in that instance, they only removed the subreddits once they received significant media coverage and legal pressure," said Lawrence, the /r/news moderator. Reddit is walking a fine line. The site is trying to be tough on content that could harm its prospects while also catering to its users that demand Reddit retain its anything-goes foundation. In the calculus between Reddit’s ideals, its business, its users and its moderators, the site seems to have decided that it can most afford to lean on the moderators. This has left them frustrated and angry, but still redditors for now. “We are here, we do not want to be hidden,” the letter on /r/blackladies concluded, “and we do not want to be pushed away.”
Facebook’s new chat app hopes to resurrect the glory days of Microsoft Chat, bringing message boards to the iPhone
24/10/2014- Facebook has released a new iPhone app, Rooms, that allows users to create near-anonymous chat rooms like those from the mid-1990s internet relay chat (IRC) systems. Rooms does not require a Facebook account to use – only an email address to re-login if switching between devices. The app connects users in a pseudo-anonymous fashion to chat about almost anything, away from the main Facebook experience, and is almost a recreation of IRC - but with Facebook's terms and conditions applied.
Developed in 1988, IRC allowed users to connect anonymously across the internet and exchange simple text-based messages. Unlike message boards, IRC did not rely on a website and browser; instead, users installed an app on their computers, such as MS Chat, and connected directly to a server. Later, files could be transferred directly between users, which marked the beginnings of peer-to-peer filesharing. To join a Room, users scan a 2D barcode, which can be shared publicly or privately to invite only a small selection of people to chat. Moderators of each room can filter content, requiring approval to post, and can ban anyone, blocking their device from re-joining. Unlike the original message boards, it's not "anything goes"; Facebook's community standards guidelines will apply, banning abusive behaviour and the sharing of certain types of material such as child abuse images.
Rooms can also have an age rating, although bypassing the age gate is as simple as tapping the "Yes, I'm over 18" button; age is not verified. The app, which is presently iPhone-only, is the latest from Facebook's Creative Labs, responsible for Facebook's Paper and Slingshot apps among others, and marks another Facebook app divorced from the core Facebook social network. Josh Miller, former chief executive of the discussion site Branch and now a Facebook product manager, acknowledged the debt to older text-based chat systems, saying Rooms was "inspired by both the ethos of these early web communities and the capabilities of modern smartphones." In a blog post, Miller said: "One of the magical things about the early days of the web was connecting to people who you would never encounter otherwise in your daily life … Forums, message boards and chatrooms were meeting places for people who didn't necessarily share geographies or social connections, but had something in common."
‘Be whoever we want to be’
Rooms attempts to replicate that scenario, where users can chat about anything using a distinct username for each room. The purpose isn't to be anonymous, but users are not limited to their real name – they can call themselves whatever they would like. "One of the things our team loves most about the internet is its potential to let us be whoever we want to be," said Miller, whose stance over real names in Rooms seems very different from that of chief executive Mark Zuckerberg on Facebook. "It doesn't matter where you live, what you look like or how old you are – all of us are the same size and shape online. That's why in Rooms you can be 'Wonder Woman' – or whatever name makes you feel most comfortable and proud," Miller said.
Each Room can contain text, images and videos, with the topic determined by the room creator. The service brings 1990s chat rooms into the 21st century with the ability to add cover photos, change the colour scheme and look of buttons in the room, create pinned messages and set whether content shared in the room can be linked to from the outside world.
Start up and make things
The app was developed by the London branch of Facebook's Creative Labs, which was set up to enable a section of Facebook to operate like a technology startup, taking risks and trying things that the social network could not. Its primary focus has been smaller, single-purpose apps, fitting in with Facebook's push to unbundle its apps and services from the main "big blue" Facebook app, and increasing the pace of development and iteration within these separate apps. The free app is iPhone-only – currently rated two stars out of five on the App Store – although an Android Rooms app is planned for early 2015.
© The Guardian
23/10/2014- A little over a year after a French court forced Twitter to remove some anti-Semitic content, experts say the ruling has had a ripple effect, leading other Internet companies to act more aggressively against hate speech in an effort to avoid lawsuits. The 2013 ruling by the Paris Court of Appeals settled a lawsuit brought the year before by the Union of Jewish Students of France over the hashtag #UnBonJuif, which means “a good Jew” and which was used to index thousands of anti-Semitic comments that violated France’s law against hate speech. Since then, YouTube has permanently banned videos posted by Dieudonne, a French comedian with 10 convictions for inciting racial hatred against Jews. And in February, Facebook removed the page of French Holocaust denier Alain Soral for “repeatedly posting things that don’t comply with the Facebook terms,” according to the company. Soral’s page had drawn many complaints in previous years but was only taken down this year.
"Big companies don't want to be sued," said Konstantinos Komaitis, a former academic and current policy adviser at the Internet Society, an international organization that encourages governments to ensure access and sustainable use of the Internet. "So after the ruling in France, we are seeing an inclination by Internet service providers like Google, YouTube, Facebook to try and adjust their terms of service — their own internal jurisprudence — to make sure they comply with national laws." The change comes amid a string of heavy sentences handed down by European courts against individuals who used online platforms to incite racism or violence.
On Monday, a British court sentenced one such offender to four weeks in jail for tweeting “Hitler was right” to a Jewish lawmaker. Last week, a court in Geneva sentenced a man to five months in jail for posting texts that deny the Holocaust. And in April, a French court sentenced two men to five months in jail for posting an anti-Semitic video. “The stiffer sentences owe partly to a realization by judges of the dangers posed by online hatred, also in light of cyber-jihadism and how it affected people like Mohammed Merah,” said Christophe Goossens, the legal adviser of the Belgian League against Anti-Semitism, referring to the killer of four Jews at a Jewish school in Toulouse in 2012.
In the Twitter case, the company argued that as an American firm it was protected by the First Amendment. But the court rejected the argument and forced Twitter to remove some of the comments and identify some of the authors. It also required the company to set up a system for flagging and ultimately removing comments that violate hate speech laws. Twitter responded by overhauling its terms of service to facilitate adherence to European law, Twitter's head of global safety outreach and public policy, Patricia Cartes Andres, revealed Monday at a conference in Brussels organized by the International Network Against Cyber Hate, or INACH. "The rules have been changed in a way that allows us to take down more content when groups are being targeted," Cartes Andres told JTA. Before the lawsuit, she added, "if you didn't target any one person, you could have gotten away with it."
The change went into effect five months ago, but Twitter "wanted to be very quiet about it because there will be other communities, like the freedom of speech community, that will be quite upset about it because they would view it as censorship," Cartes Andres said. Suzette Bronkhorst, the secretary of INACH, said Twitter's adjusted policies are part of a "change in attitude" by online service providers since 2013. "Before the trial, Twitter gave Europe the middle finger," Bronkhorst said. "But they realized that if they want to work in Europe, they need to keep European laws, and others are coming to the same realization."
According to Komaitis, the Twitter case was built on a landmark court ruling in 2000 that forced the search engine Yahoo! to ban the sale of Nazi memorabilia. But the 2013 ruling “went much further,” he said, “demonstrating the increasing pressure on providers to adhere to national laws, unmask offenders and set up flagging mechanisms.” Still, the INACH conference showed that big gaps remain between the practices sought by European anti-racism activists and those now being implemented by the tech companies.
One area of contention is Holocaust denial, which is illegal in many European countries but which several American companies, reflecting the broader free speech protections prevalent in the United States, are refusing to censor. Delphine Reyre, Facebook's director of policy, said at the conference that the company believes users should be allowed to debate the subject. "Counter speech is a powerful tool that we lose with censorship," she said. Cartes Andres cited the example of the hashtag #PutosJudios, Spanish for "Jewish whores," which in May drew thousands of comments after a Spanish basketball team lost to its Israeli rival. More than 90 percent of the comments were "positive statements that attacked those who used the offensive term," she said. Some of the comments are the subject of an ongoing police investigation in Spain launched after a complaint filed by 11 Jewish groups.
But Mark Gardner of Britain’s Community Security Trust wasn’t buying it. “There’s no counter-speech to Holocaust denial,” Gardner said at the conference. “I’m not going to send Holocaust survivors to debate the existence of Auschwitz online. That’s ridiculous.”
© JTA News.
Ill-will, incompetence or indifference. In which category does the inactivity of the Czech Police with respect to racist threats and verbal attacks belong?
22/10/2014- The failures of the criminal justice authorities make it possible for incitement to racism and threats to be made with impunity in the virtual realm, especially on social networking sites. Zdeněk Ryšavý, director of the ROMEA organization, recently became the target of such threats. More and more Czech citizens are personally experiencing this every day. People are becoming the victims of online threats because of their alternative opinions, religion, skin color, or - in the case of the director of ROMEA - because they refuse to agree with incitements to racism or to participate in disseminating xenophobic opinions.
When people fear for their lives, it is natural for them to turn to the police for help and protection, as the police motto goes. However, after experiencing bureaucratic obstacles and the time it takes to write up various documents and requests or make official statements, many realize the futility of seeking such police assistance; while rank and file detectives in the police departments do their best to help, their dependency on the often absurd instructions given them by police command ties their hands.
Incitement to murder
On 17 February a Czech-language Facebook page was launched with hateful content and an unambiguous name: "We Demand the Public Execution of the Executive Director of Romea, o.s., Zdeněk Ryšavý" ("Požadujeme veřejnou popravu výkonného ředitele Romea o.s. Zdeňka Ryšavého"). In addition to other texts inciting violence against a particular group, on 28 February the following discussion post also turned up there: "Not only will Zdeněk Ryšavý and his daughter have to pay with their blood, but so will Tomáš Bystrý, Jarmila Balážová and the dubious artist and perverted homosexual David Tišet" [sic, the correct spelling is Tišer - editors]. A Facebook user appearing under the name Gabriel Zamrazil then posted: "I totally agree. He deserves death.... Let me do it."
This commentary indicated a readiness to personally commit a crime or to otherwise ensure its realization. Ryšavý reported the page to Facebook as hateful and demanded that it be removed. "We immediately reported the page and called on our fans to do the same," Ryšavý told news server Romea.cz. Facebook sent a response within moments. "We have checked the page you reported as containing hateful language or symbols and found it does not violate our Community Principles," read the answer. This is the automatic reply that Facebook sends out within just a few minutes in such cases.
Ryšavý, afraid for his own life and for the security of his family, filed a criminal report on 5 March about the facts indicating that the making of criminal threats (Section 353 Act No. 40/2009, Coll.), incitement to commit a crime (Section 364) and approval of a crime (Section 365) had all been perpetrated. The presumption also exists that the people who supported these Facebook threats by clicking the "like" button (another 27 people) have committed the felony of approving of a crime. The police response that followed could have been a model for an absurd tragicomedy about how the rule of law works, one that should be screened in police academies as an example of how police officers and the state prosecutor are definitely not supposed to proceed when fulfilling their obligations. Ultimately, what helped the case was publicizing it; most probably, when the perpetrator learned from the media that a criminal investigation was underway, he got scared and erased the Facebook page himself.
Lost in translation
"The unwillingness of the Police of the Czech Republic to pursue serious verbal crimes like this is alarming," said Klára Kalibová, a lawyer who directs the In IUSTITIA organization, which participated in writing up the criminal report. The correct URL address of the Facebook page was included in that communication. Police had to first have the text of the report translated into English, and it then underwent approval according to a so-called Telecommunications Service Monitoring protocol, in accordance with the Czech Criminal Code, after which it was sent by the Police Presidium to the country at issue. In the first phase, that was Ireland, which is where Facebook has its European branch.
Not only did that entire procedure take several months, but the Czech Police sent the wrong URL address to Ireland. "Understandably, they wrote back from Ireland that the URL address was wrong and needed correction," Kalibová comments, adding, "but [the Czech Police] didn't correct it - instead they issued an absurd decision that was not based on the truth, claiming that they had not managed to find the perpetrator and that the case was being postponed." After some time, there was nothing left to do but to resubmit the motion to the police, again with the correct URL address. The police were repeatedly called upon to communicate with Facebook.
In the interim, however, an internal methodological instruction for the Police of the Czech Republic took effect according to which officers must first consult everything with the state prosecutor, who will decide on how to proceed. This, of course, meant that the excruciating process of the criminal investigation was far from over. "One state prosecutor, whom I will not name, but who is presented as a leading specialist in extremism, by the way, has already shelved several cases of verbal crimes, saying they are allegedly not serious and are covered by freedom of speech protections," Kalibová said. Those cases have involved, for example, right-wing extremists from the National Resistance, or Patrik Banga's criminal report filed against a journalist who invented and published a "news" story about Romani people allegedly robbing a collection that had been taken up for flood victims. "In Zdeněk Ryšavý's case, a police officer consulted it with [the state prosecutor] and she decided not to file charges. She allegedly insisted in her decision that in her experience, the Americans would not pursue this," Kalibová said.
The excuse of freedom of speech in the USA
What is absurd about the state prosecutor's approach in this context is the fact that she has argued in her decision that freedom of speech is extensive in American legislative practice. The state prosecutor's interpretation of that information is that US law tolerates these kinds of threats. That claim is dubious to say the least, because death threats against a specific individual are prosecutable in the USA, just as they are in the Czech Republic. It is mainly dubious in another sense: the state prosecutor either does not know or does not want to know that she should have been referring this case not to the USA, but to Ireland, where EU legislation applies.
She is, therefore, involuntarily participating in creating de facto impunity for verbal crimes committed in a racist context in the Czech Republic. What is paradoxical is that according to our information, the Irish branch of Facebook responsible for Central Europe is friendly and helpful when it comes to intervening against such excesses, but of course they need the correct information to do so, and the Police of the Czech Republic, and indirectly the state prosecutor, basically were incapable of supplying it. "I was in contact with Irish Facebook's head of public relations for Central Europe, who said that if the police can prove this to her, she would cooperate with them. She told me: Have them write it up properly and we will be happy to oblige," said Kalibová, "but the Czech police officers, of course, did not respond to that."
Calls for murder illegal in US too
Kalibová believes this points to a serious systemic problem in addressing hate crime in a cybercrime context, because Europe cannot be toothless in its cooperation with the United States, and the clarification of specific crimes should not have to depend upon whether Czech police officers speak English or not. The state prosecutor's key argument, that the case of Zdeněk Ryšavý falls under the protection of freedom of speech as it is interpreted in the United States, is doubly moot. Even if the case were to fall under American legislation (and not Irish law, as it actually does), any call for the murder of a specific person is clearly illegal in all of these systems. "This is extremely serious misconduct by the criminal justice authorities and it is endangering the security of a specific person and his family," Kalibová stresses; she is considering using her final enforceable procedural tool, a complaint to the supervising Prosecutor's Office, which could order the state prosecutor to proceed in accordance with the Criminal Code.
Grist to the mill of the xenophobes
Giving the excuse that threats to publicly execute a Czech citizen and his family cannot be prosecuted by referring to the practically unlimited freedom of speech in the United States of America is unacceptable for two reasons: Such an excuse not only contravenes the facts, it mainly contributes to a false legal analysis and reinforces Czech racists and other extremists in the illusion that their behavior is tolerated by society and the state. This is particularly dangerous in a situation where blogs, the media, and social networks are abuzz with incitements to hatred.
Such lack of action further disseminates the feeling that calls for violence against ethnic minorities, or against those whose opinions differ from ours, are generally tolerated. In this context, the futile, long-term, strenuous efforts of this author to contact those responsible at the Police of the Czech Republic for a statement on this issue is symptomatic of a bigger problem; if the Czech Police provide us a statement after this piece is published, we will be glad to publish it.
21/10/2014- Destructive Creations, the Polish studio behind the upcoming game Hatred, has been accused of being neo-Nazi sympathizers and anti-Islamic xenophobes because of the organizations and people that they "like" on Facebook. Their game is, in their own words, over-the-top violent and purposefully insensitive to "social justice" themes. Yesterday, CEO Jaroslaw Zielinski spoke with Polygon about his feelings over the accusations and promised more clarification. Today, individual members of the development team made personal statements. In a blog post titled "The First Storm Resisted" Zielinski and others formally responded to the accusations made against them. "My great-grand father was killed by Gestapo," writes Zielinski. "Some members of my family were fighting against nazi occupation in the Polish underground army called 'Armia Krajowa'. My forefathers suffered greatly because of totalitarian regimes, so who the fuck would I be if I'd truly support any of Nazi activists?
"The hateful title I'm working on (where virtual character hates virtual characters), doesn't have any connection to what I truly believe and think, there is a real-life outside, you know? Maybe you should try it? I will never ever again respond to any of those accusations, this is my ultimate statement." "Nazi Germany is responsible for killing 6 million people in Poland," writes Marcin Kazmierczak. "Half of them were Jews, half of them Polish. My family suffered many losses during the World War II. Anybody accusing me for being a follower of said ideology should really think twice before doing so and consider reading some books on the topic. ... Values like pluralism, democratic opposition and the right to manifest one's own views shouldn’t be called ‘the lack of tolerance’. Finally regarding my attitude towards gays let me just say that I have a few gay friends that I deeply respect as people and have no problem with their sexual orientation."
"In response to repeated allegations against me," writes Jakub Stychno, "I’d like to state that I’m opposed to all totalitarian ideologies. The t-shirt that I’m wearing on our team picture refers to National Polish Army troops, that in 1945 refused to lay down arms and continue fighting against the new invader, to regain independent Poland. They did so because they’ve rightly anticipated Soviet security service repressions against Poland's already demilitarized army. I would also like to emphasize that until the year 1945 those troops were actively fighting against the Third Reich occupation. Those soldiers are Polish national heroes and as such deserve commemoration." CEO Jaroslaw Zielinski also states that while "we knew that our reveal will cause some shitstorm" his team did not expect such a wide or vocal reaction.
"Many can call us 'attention whores,'" Zielinski continued. "Well, we try to get world's attention to our product and as you can see — it worked perfectly. ... We wish to thank all of our haters and all upset press for a great marketing campaign they've done for us. "A week ago, we were a little company from the middle of nowhere, just some guys making some game. Today everyone heard about 'Hatred' and us. All thanks goes to those who were trying to harm us (with no desired effect, what a pity)."
Edit: The original version of this story listed the company name as Destructive Games. It is in fact Destructive Creations. Additionally, we've cleaned up the quoted sections of copy for readability.
By Katie Engelhart
21/10/2014- Last Thursday, at a public hearing about the “right to be forgotten” in central London, Google Executive Chairman Eric Schmidt had a bit of trouble pronouncing the names of the eminent Europeans with whom he shared a stage. But he tried his best. And he muddled his way through. It’s an apt metaphor for the way that one of the world's most powerful companies has been struggling in the wake of a ruling by the European Court of Justice (ECJ) in May on the so-called “right to be forgotten.” The court ruled that Google (and other search engines) must allow individuals to erase certain results that appear on web searches of their names—when the linked-to information is “inadequate, irrelevant, or excessive.” The court’s reasoning: Normal people have a right to be forgotten online. Reaction to the ruling bordered on hysterical. Depending on your view, the ECJ has either safeguarded individual privacy or heralded the slow death of the free and fair Internet in Europe. MailOnline publisher Martin Clarke said that de-linking was “the equivalent of going into libraries and burning books you don’t like.”
When a site is “forgotten” on Google, it’s not actually deleted at the source, or even erased from all internet searches—but it does disappear from searches of the individual requester’s name. Now, if you Google search for a name in Europe, a notification appears at the bottom of the search page: “Some results may have been removed under data protection law in Europe.” But the court was vague in its definition of “inadequate, irrelevant, or excessive” data. As a result, Google has been reluctantly cast in the role of pan-European judge and jury of the internet's collective memory, responsible for deciding (behind closed doors) what constitutes the continent’s public interest. Shortly after the May ruling, an evidently pissed-off Schmidt cobbled together an “Advisory Council on the Right to be Forgotten,” which includes an Oxford ethics philosopher, a former German justice minister, and Wikipedia boss Jimmy Wales. Google followed that up by launching a road trip. Last Thursday’s event was one of seven town-hall style meetings being held by the company across Europe.
Some have dismissed the tour as a PR stunt, and suggested that the company is engaging the public only to show up the clusterfuck that has been born of the ECJ ruling. If that’s true, well, mission accomplished. On Thursday, I went to one of Google’s public meetings in London to find out who does and does not have the right to be forgotten on the internet. Four hours later, I left feeling sure of one thing: Implementing the ECJ’s decision is going to be really, really hard. Schmidt began the day by discussing some more clear-cut cases. A victim of physical violence wanted references to the assault removed from web searches of his/her name. Google said OK. A pedophile wanted recent data about his conviction de-linked. Google said nuh-uh. So far, so simple. But in other cases, lawyers have wavered. Google has struggled with the case of an adult who wanted reference to a teenage drunk driving incident de-linked and the case of a former member of a far-right party who no longer holds extreme political views.
In deciding whether or not to de-link, Google must consider how “relevant” online data is, taking into account factors like “time passed,” the “purpose” of the information, and the role that the data subject plays “in public life.” Google must balance “sensitivity for the person’s private life” with “the public interest.” And it must determine if linked-to data is “inadequate” or “excessive.” But what does all that even mean, at a practical level? What terrible things can you do and then have expunged from the internet’s collective memory forever? Let’s start with a fairly likely scenario. You’ve been recorded or photographed doing something that the internet deems hilarious at your expense—enthusiastically making out with someone who looks really bored, dancing really energetically and embarrassingly, that kind of thing. If the web has tied that meme to your name, do you have the right to hide from the digital public?
Gabrielle Guillemin of the nonprofit Article 19 suggested that embarrassment is not a good enough reason to request de-linking. But Google Advisory Council member Peggy Valcke, a law professor in Belgium, suggested that it could be. And anyway, argued Oxford University philosopher Luciano Floridi, another Advisory Council member, “Embarrassment comes in degrees. Social embarrassment becomes social stigma becomes losing your job… Do we have a way of understanding when embarrassment, discomfort, and unpleasantness become harm?” Does the calculation change when the data involves a child? Or an otherwise vulnerable person? It didn’t really get cleared up. What if the source of the embarrassing material is you? Say you posted an emo selfie on MySpace ages ago and now it’s ruining your nascent cage-fighting career. Schmidt conceded that things get tricky when requesters themselves published the data that they now want de-linked. Recently, a media professional in Britain asked Google to erase links to “embarrassing content” that he himself posted online. Google said no.
What if you’ve done something more serious? Say you’d rather everybody didn’t know about all that embezzlement you got caught doing at your last job. Panellists agreed that de-linking information on things like criminal convictions would depend, in part, on whether the requester is a public figure. But how do we define a “public figure”? David Jordan, the BBC’s director of editorial policy and standards, introduced the hypothetical case of a voluntary school board member—a guy who's "famous" for evaluating the quality of school lunches. Is this man a public figure? And so, does all his data belong in the public domain? Evan Harris, a former member of UK Parliament and now associate director of the Hacked Off campaign, suggested that people might ask for information about prior fraud to be de-linked, then later run for public office. By extension, is everyone’s data in the public interest on the grounds that we’re all potential future elected officials or important people? Again, it wasn’t made clear, but to stand a better chance in that election, you should request that Google forgets before printing your campaign posters.
Already, many de-link requests have come from criminals. Schmidt gave the real-life example of a convicted criminal who served his time and now wants reference to the conviction erased from search results. Should old convictions be “forgotten”? How old is old enough? This was also left—you guessed it—unclear. Increasingly, advocates on both sides of the line are joining together to issue a common plea that these critical decisions be made in European courtrooms rather than in Google boardrooms. They also insist that Google’s decisions be subject to external review. Currently, there is no appeals process for content publishers who disagree with a Google de-linking decision. That may change, and soon. European regulators are already at work, beefing up the continent’s data protection policy, with an eye to codifying the right to be forgotten.
Philosophy aside, Google is faced with a logistical nightmare. The company has reportedly hired dozens of lawyers and paralegals to deal with de-link requests on a case-by-case basis. “It’s not obvious to me that this can ever be automated,” said Schmidt on Thursday. Already, Google has admitted to errors—and has re-linked some results from the half a million de-link requests it has fielded since May. And yet, for now, there remains a simple way to maneuver around this new European internet. Going to google.com (rather than, say, google.co.uk or google.fr) transports European internet searchers to virtual America—and thus gives them access to the entirely “remembered” internet that they once knew. On Thursday, Schmidt was asked whether European searchers should simply start using the .com site. “I am not recommending that,” he said, with a wry smile.
24/10/2014- Lynda Bellingham’s tragic death has been exploited on Facebook by far right extremists, it emerged last night. Britain First encouraged people to like and share a picture of the Loose Women star minutes after her death was announced on Monday. Thousands of Facebook users 'liked' the post, featuring a picture of Lynda with All Creatures Great and Small co-star Christopher Timothy. However, many would not have been aware that the photo was being spread by Britain First, an ultra-right campaign group. Its supporters use the Britain First Facebook page to call for British Muslims to be “wiped out” and non-whites deported. Formed from former BNP and EDL members, Britain First made headlines this year by invading mosques and threatening imams.
Men in the group’s paramilitary-style uniforms pushed their way into several mosques in England and Scotland. Founder of Britain First, Jim Dowson, later quit the group over its “unchristian” paramilitary-style “mosque invasions”, saying they were “provocative and counterproductive”. He added that they were attracting “racists and extremists” to the organisation, which has taken over from the British National Party and the English Defence League as the biggest far-right threat in the UK. Mr Dowson, from Belfast, left the BNP in 2010 to form a “Christian” group opposing the rise of radical Islam. But he told the Mirror he had pulled the plug on the group’s funding, closed their office in Belfast and severed all links.
He described the mosque invasions as “unacceptable and unchristian”, adding: “Most of the Muslims in this country are fine. They are worried about extremists the same as us. So going into their mosques and stirring them up and provoking them is political madness and a bit rude.” Matthew Collins, of anti-racist group Hope not Hate, said: “It is the most dangerous group to have emerged on the far right for several years.” But a Britain First spokesman told The Sun: “We do this regularly when British celebrities pass away. We pay our respects.” Brave Lynda lost her battle with cancer at the weekend after the disease spread from her colon to other parts of her body. She died in her husband Michael's arms on Sunday, aged 66.
© The Daily Mirror
22/10/2014- A 21-year-old British man was sentenced to four weeks in jail for sending an anti-Semitic tweet to a Jewish member of Parliament. Garron Helm pleaded guilty Monday to sending the offending message to Labour Party member Luciana Berger. In addition to the jail sentence, Helm was ordered to pay Berger $128. The tweet, which called Berger a “communist Jewess,” showed a photograph of her with a Holocaust yellow star photoshopped onto her forehead and the words, “You can always count on a Jew to show their true colours eventually.” It had the hashtag “Hitler was right.” Helm’s home contained Nazi memorabilia and a flag for an extremist right-wing group called National Action. “This sentence sends a clear message that hate crime is not tolerated in our country,” Berger said in a statement. “I hope this case serves as an encouragement to others to report hate crime whenever it rears its ugly head.”
© JTA News.
Representatives From Google, Twitter and Others to Meet with Cameron Advisers on Thursday
22/10/2014- The U.K. government is intensifying efforts to enlist the help of large technology companies such as Twitter Inc. and Facebook Inc. in combating extremist content online amid growing concerns about terrorist threats. Representatives from the companies, which also include Google Inc. and Microsoft Corp., are due to meet with policy advisers for British Prime Minister David Cameron on Thursday to discuss how they can reduce ways for terrorists to recruit and spread their messages online, according to government officials. While the large technology companies have been generally cooperative, British officials say, thorny issues remain. Among them: what to do about material authorities consider extremist and want removed but that isn’t necessarily illegal, such as some videos of sermons by radical preachers or posts by extremists encouraging Westerners to join the fight in Syria. Privacy considerations are another challenge in instances where U.K. authorities have asked technology companies to hand over details of the people posting the content, such as names, usernames, email addresses and Internet protocol addresses, which can help identify a person’s general location.
Thursday’s meeting, which is due to take place at Mr. Cameron’s official residence at Downing Street, will be chaired by Jo Johnson, head of the prime minister’s policy unit. A spokeswoman for Mr. Cameron said the purpose of the meeting is to discuss “what we can do collectively in this area.” She added that the big technology companies have been collaborative in working with the government to remove terrorist and extremist material, though they have raised some concerns in general about data protection. Facebook, Google, Twitter and Microsoft declined to comment on the Downing Street meeting. Big technology companies say they are generally responsive to government requests in removing terrorist-related content and many have policies against posting violent or threatening content.
Technology companies have tried to push back against some of the requests to remove content or turn over user data, though. Internally, some technology executives say they are worried that censorship techniques more common in countries such as Russia and Turkey could become more generalized as governments grant themselves more power. Some companies are skeptical about handing information about their users to governments, particularly if the user hasn’t done anything illegal, said Michael Clarke, director at the Royal United Services Institute, an independent think tank on defense and security. “It’s a very delicate relationship at the moment,” he said. Still, the companies say they seek to work with government. In the U.K., for instance, Google handed over information in response to 1,100 of the more than 1,500 requests it received from the government, according to data released by the company. French police officials say many companies have dedicated pages that allow law enforcement organizations to send requests directly to the firms for information such as names, email addresses, credit card billing information and other information.
Sophisticated and prolific use of social media for propaganda purposes has been a hallmark of Islamic State, the militant group that has captured large stretches of territory in northern Iraq and Syria. Extremists have posted content ranging from images of killings to promotional-type videos intended to lure young Westerners to fight. The concern for many European countries, including the U.K., France and Belgium, is that the material will serve to fuel the already large numbers of citizens going to fight with extremist groups overseas—and that they will be more likely to take part in terrorist activity when they return. U.K. authorities say, on average, five people a week travel from Britain to Syria and Iraq to fight and there has been a sharp increase in the terror-related arrests at home. On Wednesday, police arrested a man and a woman separately on suspicion of terrorist activity as part of separate Syria-related investigations.
As a result, the U.K. and other governments are stepping up efforts to delete content and track down the authors of extremist content online. London’s Metropolitan Police, known as Scotland Yard, says it has been removing around 1,000 pieces of such content from the Internet each week, most of which is related to Iraq and Syria. This includes videos of beheadings and other killings, torture and suicides. “Dealing with material which may be described as extremist, but does not obviously infringe (upon) U.K. terrorist legislation, is more difficult,” a senior U.K. government security official said. “We have proposed to companies that they consider seriously whether this material is consistent with their terms and conditions.”
European Union officials met in Luxembourg earlier this month with representatives of Google, Facebook, Twitter and other companies to discuss ways to combat online propaganda from terrorist groups. France has recently beefed up its antiterrorism laws to allow, among other actions, authorities to cut off Internet access for people defending terrorism and websites labeled “terrorist.” The measures also permit wider terrorist surveillance online. But some specialists in counterterrorism question the effectiveness of governments’ increasing reliance on censorship and filtering to counter online extremism. Ghaffar Hussain, managing director of London think tank Quilliam Foundation, said such moves tend to be costly and potentially counterproductive. He said a more effective method is producing content for online initiatives that counter extremist ideas, such as parody videos making fun of recruits to the militant group Islamic State. “To simply shut the debate down doesn’t allow any progress to be made on the counter-extremist front,” Mr. Hussain said.
© The Wall Street Journal.
Violent hooligans, backed by right-wing extremists, have teamed up against a new enemy: Salafists. For months now, they have lashed out online - and now they're taking to the streets.
18/10/2014- It began on Facebook, where anti-Islam soccer fans have been venting their anger in online forums for months now. But lately, in German cities, like Essen, Nuremberg, Mannheim, Frankfurt and Dortmund, hostile and extremely violent hooligans, usually at odds with each other, have united against a new enemy: Salafists - a radical and militant branch of Islam. Their initiative, currently known as Ho.Ge.Sa. - "Hooligans gegen Salafisten" ("Hooligans against Salafists") - has seen its profile repeatedly blocked by Facebook, but it always reappears under another name. It's here that the group is stoking the flames against the hard-line Salafist movement. Next stop: a demonstration planned for October 26 in front of the Cologne Cathedral.
The current mood and the protests organized by Kurds across Europe are giving hooligans and right-wing sympathizers the chance to "apparently demonstrate against the Salafists, but really only to express their own Islamophobia," Olaf Sundermeyer, a journalist and author, told DW. "We are 'hooligans against Salafixxxx.' Together, we are strong," reads the group's Facebook page. They see themselves as "a movement that has brought together hooligans, ultras, soccer fans and ordinary citizens in a common fight against the worldwide 'Islamic State' terror campaign and the nationwide Salafist movement." In Facebook posts and on banners at their demonstrations, they call their group the "resistance" against "the true enemies of our shared homeland." The latest protest in Dortmund drew around 400 people. "On 26.10.2014 in Cologne, we will significantly increase this number of participants," a moderator recently announced on the site. "Peaceful, unmasked and without rioting."
'Salafists are the greater evil'
These slogans have actually served to bring together opposing hostile fan bases, who usually meet up before and after sports events to fight each other. Gunter A. Pilz, an expert on fan behavior from the Sport University in Hanover, calls this phenomenon "a temporary fighting alliance." However, he said that this coalition will only last as long as the common enemy: the Salafists. Sundermeyer, who points out that anti-Islam attitudes are widespread in the soccer fan scene, said there's a risk that extreme right-wing groups will be tolerated because the brutality of "Islamic State" militants in Syria and Iraq is proof to many that Salafists are the greater evil. In an interview with German public radio Deutschlandfunk, Sundermeyer said that "Hooligans against Salafists" is still a relatively small group, but stressed that it could attract more followers - even those with less radical viewpoints. Soccer, he said, is the ideal environment to radicalize and recruit young people to the extreme right-wing cause. Officially, though, the league has distanced itself from the right-wing extremist movement.
Mobilizing apolitical hooligans and soccer fans
But there's an obvious overlap with the neo-Nazi scene: Ho.Ge.Sa. is backed by Dominik Roeseler, a member of the right-wing Pro NRW party who sits on the Mönchengladbach city council. He plans to be at the demonstration in Cologne. Roeseler is considered to be quite extreme and is, like all right-wing party members, under observation by German security officials. And there are further connections: At the protest in Dortmund, many shirts, jackets and banners were adorned with neo-Nazi symbols. The next day, a post on the Facebook group backtracked, saying that "unfortunately, we have found out that many neo-Nazis came to this event. We want to once again make it clear that we are not political."
There doesn't even seem to be a consensus over Dominik Roeseler among the Ho.Ge.Sa. members. A few days ago, they announced that they had parted ways with him. But one thing is certain: the Cologne demonstration is being organized by right-wing political officials. Is Ho.Ge.Sa., therefore, an attempt by right-wing extremists to drum up new members from within the ranks of hooligans and extremist soccer fans? At the most recent count, the number of Ho.Ge.Sa. fans had risen to more than 16,000. "We continue to grow, the media can hound us all it wants. This time, you will not be able to stop us," wrote a follower on the site. Until recently, soccer associations, clubs and other fans had been able to keep the hooligans in check, said Sundermeyer. Now, however, faced with the threat posed by the Salafists, the cause of the right-wing extremists is seeing increasing support.
© The Deutsche Welle.
17/10/2014- In Italy, in 2013, for the first time ever, online racial discrimination exceeded the discrimination recorded in public life and in the workplace. More than a quarter of the cases (26.2%) relate to the mass media (compared to 16.8% in 2012), for a total of 354 cases. These are some of the figures, already released by the Italian National Bureau against Racial Discrimination and reported in the “Third White Paper on racism in Italy” by Lunaria. The work, published nearly three years after the second white paper, has monitored, analyzed, studied and summarized the multiple forms of xenophobia in the country.
Download Lunaria, Cronache di ordinario razzismo - Terzo Libro bianco sul razzismo in Italia, 2014 (PDF)
© West Info
B’nai B’rith says Etsy, Ebay, Amazon, Sears and Yahoo! guilty of allowing users to sell offensive items
17/10/2014- International Jewish organization B’nai B’rith demanded Wednesday that several online retail outlets enforce policies against users selling “hateful paraphernalia,” The Times of Israel reported Thursday. According to B’nai B’rith, web retailer Etsy had “456 swastika-themed items...available for sale, as were 479 Hitler-themed items, 13 Ku Klux Klan-themed items, and one racist, Jewish caricature candlestick listed specifically under the topic ‘anti-Semitic.’” B’nai B’rith said Ebay, Amazon, Sears Marketplace and Yahoo! were also guilty of allowing users to sell offensive items on their sites. Sears then removed a swastika ring from the roster of items offered for sale, the Jewish Telegraphic Agency reported. The item description quoted in the report read "this Gothic jewelry item in particular features a Swastika ring that’s made of .925 Thai silver.” It then featured the following curious disclaimer: “Not for Neo Nazi or any Nazi implication. These jewelry items are going to make you look beautiful at your next dinner date.”
According to JTA, the item also was for sale on Amazon.com, though it is listed currently as unavailable. Sears issued an apology in a statement and on Twitter:
“Like many who have connected with our company, we are outraged that more than one of our independent third-party sellers posted offensive items on Sears Market-place,” the company said in a statement. “We sincerely apologize that these items were posted to our site and want you to know that the ring was not posted by Sears, but by independent third-party vendors.”
© i24 News
These days social media allows strangers and their opinions into our homes at all times of the day or night – but only if we allow it to
By Jade Wright
17/10/2014- It’s not every morning that I’m described as a fascist and ‘a silly young hack who resorts to insults at the first provocation’. Not before I’ve finished my toast, anyway. Admittedly I am quite strict about separating my vegetarian fry up from my boyfriend’s carnivorous version, but most mornings are fairly peaceful in our house – until either of us picks up our phones and looks at Twitter. This week I spotted a message from a bloke (at least I think it’s a bloke, but there was no picture), which read: “Just read your June article in the Echo about Britain First. You are the reason people re-post their stuff. Wake up!” That one story, which I wrote in response to people sharing Britain First’s D-Day posts on Facebook, is still the best-read column I’ve ever written. I don’t know why, but it still gets re-posted and read every week, and I still get plenty of abuse from far-right supporters about it, as well as some nice comments too.
This bloke had clearly taken exception to me pointing out that Britain First are a right-wing political party and street defence organisation who encourage people to share their posts to spread their message. He didn't like me warning people against re-posting things without checking what they are. He said: “The issue is that people like YOU are wilfully ignoring why people like me turn to the far right. Only they give us a voice... We agree with your multicultural hogwash or you dismiss us as fascists. YOU are the fascist.” I laughed so hard I almost spat my tea out. Boyfriend looked crossly across the table, briefly distracted from his plate full of sausages and bacon. We try not to spend our rare time at home together arguing with strangers on Twitter. We have a no-phones-at-mealtimes rule.
But this was too funny for me not to respond. The man, who said he was part of the far right, was using fascism as an insult. That’s like me accusing someone of being a ‘lefty’ as a bad thing. He didn’t seem to realise that fascism is a form of authoritarian nationalism – the very thing he claims to support. Presumably he thought it was just a catch-all insult for anyone whose opinions he disagreed with. My response was probably a bit mean, looking back. I made fun of his insult and his poor use of grammar. I told him to come back and debate when he’d read his history books. This prompted the “just a silly young hack who resorts to insults at the first provocation” tweet.
He’s not that far wrong – I am silly and I quite liked being described as young – but then I came to my senses, put down my phone and picked up my knife and fork. Time was when I had to leave the house to be insulted by a stranger (rather than insulted by someone I know, which happens all the time). These days social media allows strangers and their opinions into our homes at all times of the day or night – but only if we allow it to. I’m putting down my phone.
© The Liverpool Echo
In late September and early October every year, hundreds of thousands of new and returning students journey from their family homes to university campuses across Britain. This includes over 8,500 Jewish students who, in addition to the usual pressures associated with resuming university life, are having to consider what this summer’s record spike in anti-Semitic incidents will mean for them in the coming year.
13/10/2014- At JW3 in north-west London, The Times of Israel spoke with Ella Rose, president of the Union of Jewish Students (UJS), the peer-led body which represents British Jewish students and is a confederation of 64 Jewish societies (JSocs) from across Britain’s universities. As president, Rose is responsible for representing the interests of students to the wider community as well as setting the strategic goals and objectives for the UJS during her one-year term. During our interview, we discussed how the UJS has prepared for the new university year, as well as issues of anti-Semitism on campus and the place of Israel advocacy in the union’s work.
Tell us what you and the UJS have been doing in the past two weeks and what you’ve found.
This is my first day in the office in about two weeks! Last night, I slept on the floor of a freshers’ dorm in Bristol – which was awful. We’ve been doing our campus visits, about forty visits in two weeks between the eight members of our program staff, going to all the different freshers’ fayres, freshers’ events. For example, yesterday I went to visit Bath JSoc for their freshers’ fayre and then over to Bristol for their freshers’ barbecue. We’ve been building relationships on campus, making sure they’re comfortable going onto campus, that they can sign people up and just being a friendly face and a helping hand. Jewish students are getting on with their lives. At Bristol last night, there were around 150 people at the barbecue, which is fantastic. There was very little Jewish life there four or five years ago. Now, they’re one of the biggest JSocs in the country and that’s because people created a welcoming Jewish life, other people hear about it and they come along. I think there are about sixty kids from JFS [a Jewish secondary school in north London] at Bristol now. It’s brilliant.
Given the summer we’ve had in terms of heightened anti-Semitism in the UK, what has the UJS been doing in preparation for the start of the new university year?
We were worried. There’s rising anti-Semitism and campus is a microcosm, so what you see in our communities is often reflected on campus. But we’ve had a really strong and positive start to term. As far as I am aware, we’ve not had any incidents where people have felt uncomfortable because they’re Jewish. We had a leadership and political training summit at the beginning of September and we talked about these issues and we said, ‘This might be an issue, this is what you should do, this is what you should think about preparing for campus.’
We started a campaign called #keepitkosher, with the tagline ‘Snap It, Send It, Stop It’, and it’s about stopping online anti-Semitism because that’s where some of the students would feel it more strongly, and we work with the CST [on that]. It’s about creating a safe space for Jewish students and students feeling that there is someone there to support them. But I went to Nottingham, I never experienced anti-Semitism when I was there and I’m pretty sure everyone knew I was Jewish because it’s not something I keep quiet about. I believe it’s an incredible time to be a Jewish student and I don’t believe that will change this year.
What are your plans for the coming year concerning Israel advocacy and creating safe spaces on campus to discuss Israel?
On Israel, we are unified but not uniform. We are a union of Jewish students but that doesn’t mean we expect anyone to have the same uniform opinion within that. We do not mandate what individual JSocs do: some choose to be involved in Israel debate, some don’t. It’s important to recognize that while the majority of Jewish students do have a connection with Israel as part of their identity, all identities are multi-faceted and none of them are the same.
Having said that, we do have mandated policies that are voted on every year at the UJS conference. We proudly support the two-state solution, we proudly stand against anti-Semitism, we also stand against BDS. At Sussex last year, for example, which is seen to be a very left-wing university, a BDS resolution failed because Jewish students took their own initiative and said that an academic boycott would be unacceptable. This policy isn’t imposed on JSocs but, as a union, we are opposed to BDS and will combat the delegitimization of Israel and work with our communal partners to do so.
What did you think of the discussion earlier this year about whether the UJS’ mandated policies on Israel exclude anti-Zionist students from JSocs?
It was a really interesting discussion and it stemmed from a debate we had at the UJS conference about how we do Israel. UJS is an inclusive space: we are cross-communal, peer-led, and representative, and I would hate not to be able to include anyone because of their beliefs. It’s difficult because you have some students who are anti-Zionist and some for whom Zionism is part of their Jewish identity and if they didn’t get Zionism at a Jewish society, they would feel like they were missing something. It’s two Jews, three opinions – it’s impossible. I’m not convinced it’s something you can ever completely solve. I would want an open and inclusive space and it’s up to students within that to have the conversation.
When did you become involved with UJS?
I started university in 2011 and I decided that I was going to sign up to women’s football and JSoc. I did play women’s football but I was part of a team that lost 24-0, which is approximately a goal conceded every three minutes, which is quite impressive and very tragic. That was around the time I gave up – obviously I wasn’t a very good striker. So, I got really involved in JSoc when I gave up football. I ran for the campaigns committee and was involved in their Israel work and then got involved in the UJS because of this.
Two years ago, Alex Green [a former president of the UJS] put the idea of running for president into my head. I was on the UJS National Council, went on a trip with the EUJS to the UN Human Rights Council in Geneva, and I loved it. It was different, interesting, fun and all about peer leadership which is a value I grew up with in BBYO and I loved that idea that you could empower someone to do things themselves rather than just doing it for them.
What are your ambitions for your term as president?
I ran on a platform of accountability and representation and strengthening the functions we already have. One thing that’s already gone live is that we have a feedback form on our website because, as a first year [student], it’s really intimidating to call someone who works at the UJS. It shouldn’t be for them to feel like they have to make that move, we should be accessible to them, and through the feedback form people can have an instantaneous connection to the union. Another priority is improving student services, including our liberation networks [a women’s, LGBT+, and disabled students’ network] which I feel can really grow over the next year. They’re relatively new, started in 2011, and I still think they have a way to go before they can enact change on the ground. Even if it’s just a social tool, these networks are really important as a space to help people come together, although I think they can be much more than that.
What major campaigns or initiatives is the UJS running at the moment?
Jewish Experience Week, which actually started last year, and it was unbelievable. You had thirty different campuses, and around 300 Jewish student leaders reaching around 3,000 non-Jewish students, talking to them about what it means to be Jewish. We had Jewish students telling people, ‘Did you know that there are Ethiopian Jews, Sephardic, Ashkenazi, Irish and Indian Jews?’ That we’re not a uniform body. I’m so excited to see that re-run this year and I think it’s going to be even stronger in its second year. UJS hadn’t done something that big campaigns-wise in around seven years and I’m so proud of Maggie Sussia [UJS campaigns director] for pulling that off.
© Times of Israel
14/10/2014- Dutch internet companies are coming under pressure from the government to censor comments and place limits on freedom of speech, the Financieele Dagblad said on Tuesday, quoting industry campaigners. Justice ministry officials are asking providers and hosting companies to remove websites from the internet without any legal basis, industry representatives told the FD. ‘They are making us responsible for deciding if something is against the law,’ said Michiel Steltman, director of the Dutch Hosting Provider Association (DHPA). ‘We rent web space and platforms. But because the justice ministry can’t trace the tenant, they dump the problem on us.’
In particular, providers are critical of the government’s plan to tackle radicalisation and jihadism, which involves curbing the spread of ideas supporting violence. The paper did not give any examples of sites which have been shut down or videos which officials have requested be removed. But Steltman quoted the recent example of a video of a group of men sitting around a campfire firing guns and shouting 'allahu akhbar'. 'Have they just killed someone, are they angry that someone has been killed or have they killed a goat for a party?' he said. Alex de Joode, company lawyer with the Netherlands' biggest hosting provider, is also critical. 'We are not about checking ages and censorship,' he said. 'The government has the right legal instruments to remove content but chooses not to use them when it comes to claims of jihadism.' Dutch counter-terrorism chief Dick Schoof said in a reaction that he understands the providers' position but that ‘I believe they should assist efforts to counteract jihadist radicalisation within the legal limits’.
© The Dutch News
Justice ministers are struggling to balance the right to freedom of expression and the right to be forgotten in the EU’s data protection reform bill.
10/10/2014- The political debate on Friday (10 October) in Luxembourg surfaced following a ‘right to be forgotten’ ruling in May against Google by the European Court of Justice (ECJ). In the ruling, the Court concluded it was reasonable to ask Google to amend searches based on a person’s name if the data is irrelevant, out of date, inaccurate, or an invasion of privacy. Google has so far received 143,000 requests, related to 491,000 links, to remove names from search results. The ECJ decision only affects search requests based on a person’s name. The content at source remains untouched. But critics like Wikipedia founder Jimmy Wales described the decision as "one of the most wide-sweeping internet censorship rulings that I've ever seen".
Others say it produced clarity on issues of jurisdiction but did not go far enough in explaining how Google – or other data controllers – should handle people’s requests to have their names swiped from search engine results in the first place. "We can't leave it up to those who run search engines to take a final decision on the balance between these different fundamental rights," said Austria's justice minister. Ireland, where Google has its European headquarters, also doesn't like the idea. For the European Commission, the ECJ ruling does not pose a problem for the right to be forgotten in the draft bill. It notes the right is already included in the proposed regulation, along with an exception for the freedom of expression.
EU justice commissioner Martine Reicherts also noted the EU’s main privacy regulatory body, the “Article 29” Working Party, is coming up with operational guidelines for big companies like Google on how best to put the court’s decision into practice. “This will strengthen legal certainty, both for search engines and individuals, and will guarantee coherence,” she said.
Not everyone is convinced.
The justice ministers differed on the extent to which the ruling will affect the EU data protection regulation currently under discussion at member-state level. The heavily lobbied bill, which was tabled in early 2012, is set for adoption next year but has run into problems among national governments. The Italian EU presidency is hoping to reach an agreement by the end of the year in order to start formal talks with the European Parliament. On Friday, the ministers managed to come to a general agreement on parts of the text concerning international data transfers and exempting businesses with fewer than 250 employees. But member states like the UK still want to downgrade the bill into a directive, a weaker legal instrument compared to a regulation. "We need to be careful about creating rights that are not deliverable in practice as well as wider regulatory burdens," the British minister said.
As for the ECJ ruling, the question remains whether additional rules or clarifications based on the court’s judgment should be inserted into the bill. Germany, Luxembourg, Poland, Portugal, the UK and others oppose referencing the court’s judgment in the bill. “To us this could be a dangerous precedent for the future and could perhaps negatively affect the freedom of speech,” said Poland’s justice minister. Instead, Germany wants more text in the bill to guarantee the freedom of expression by lifting the article from the Charter of Fundamental Rights and inserting “it into our data protection regulation.” Lithuania backs this idea. France also expressed reservations, noting that the right to be forgotten cannot be an absolute right. “How can we respect our citizens’ right to be forgotten without standing in the way of the freedom of expression and the freedom of the press at the same time?” said the French minister. Spain, where the case against Google originated, backs the ECJ ruling and says it is in "no way incompatible" with the right to the freedom of expression and information.
© The EUobserver
8/10/2014- The website of the Czech Helsinki Committee (ČHV) has been targeted for attack by "nationalist" hackers from the White Media group. The hackers publicly announced on their own website that they attacked the human rights organization as part of their annual "Week against Anti-Racism and Xenophilia", which began on 28 September. In addition to the ČHV's website its Facebook profile was attacked, as was the personal Facebook profile of director Lucie Rybová and her personal email account. The Brno branch of Amnesty International in the Czech Republic was hacked as well.
"It's alarming how defenseless you are in such a situation," Rybová told news server Romea.cz. While negotiations with Facebook over blocking the profiles and setting new passwords took place fairly quickly, negotiations with the operator of her email account and the operator of the ČHV website are remarkably problematic, according to Rybová. "The operator of the Czech Helsinki Committee's website, the Forpsi server, says it has never encountered such a situation. We reported the hacking to them and asked them to post a text on the site explaining why the pages are not available, but all that shows up there is the message 'inoperative', thanks to which we seem unreliable. It looks like we haven't paid for the domain, and it is also harming us in other areas, including our clients - they can't access our contact information so they can't call the counseling center," Rybová said.
In addition to the organization not being able to fully focus on some of its activities because of its non-functioning website, ČHV also cannot now report online about those activities, which is a frequent obligation with respect to its projects. Addressing the situation with the stolen email account is even more complicated. The hackers have stolen Rybová's password to her personal email account on Seznam and have changed it. "I have to prove the email is actually mine, using the same online form as when you forget your password. I have done it three or four times and nothing happens. When I call the hotline they refer me back to the online form and are unable to connect me with anyone who can handle my situation or even temporarily block the account," she explained to Romea.cz.
While Seznam has taken a passive approach to the situation for several days already, the neo-Nazis have continued to enjoy unfettered access to Rybová's personal email account. ČHV is considering filing a criminal report against the hackers. Even that, however, will not be easy, because while the racist and xenophobic content of the White Media website violates Czech law, its domain is registered with a web hosting company in California and is subject to the laws there. Those laws are much more benevolent when it comes to freedom of speech, including the dissemination of hate, than are laws in the Czech Republic.
A "private" dinner between tech firms and government officials from across the EU is to take place on Wednesday.
7/10/2014- The purpose of the meeting is to discuss ways to tackle online extremism, including better cooperation between the EU and key sites. Twitter, Google, Microsoft and Facebook will all be attending in Luxembourg. Governments are becoming increasingly concerned over how social media is being used as a recruitment tool by radical Islamist groups. The EU may share further details later on Wednesday, ahead of the dinner. It will be attended by ministers from the 28 EU member states, members of the European Commission and representatives from the technology companies. The European Commission said: "There is strong interest from the European Union and the ministers of interior to enhance the dialogue with major companies from the internet industry on issues of mutual concern related to online radicalisation."
In particular, it said the meeting would focus on:
- "the challenges posed by terrorists' use of the internet and possible responses: tools and techniques to respond to terrorist online activities, with particular regard to the development of specific counter-narrative initiatives"
- "internet-related security challenges in the context of wider relations with major companies from the internet industry, taking account of due process requirements and fundamental rights"
- "ways of building trust and more transparency"
The BBC understands this is the second time since July that the firms have been called in to discuss possible measures. However, a notable absentee at the meeting will be Ask.fm, a social network believed to have been extensively used as a recruitment tool by radical Islamist groups. The firm was owned by Latvian brothers Ilja and Mark Terebin, but in August was bought by the American company behind Ask.com. The site's new owners told the BBC: "Ask.fm has not been invited. If we had known about it, we would have attended for sure."
Representing the UK government at the meeting will be security minister James Brokenshire. "We do not tolerate the existence of online terrorist and extremist propaganda, which directly influences people who are vulnerable to radicalisation," he told the BBC. "We already work with the internet industry to remove terrorist material hosted in the UK or overseas and continue to work with civil society groups to help them challenge those who promote extremist ideologies online. We have also made it easier for the public to report terrorist and extremist content via the gov.uk website." The government's Counter Terrorism Internet Referral Unit (CTIRU), set up in 2010, has removed more than 49,000 pieces of content that "encourages or glorifies acts of terrorism", 30,000 of which have been removed since December 2013.
Details on the EU dinner are sparse.
But there is increasing concern over the role social media plays in disseminating extremist propaganda, as well as being used as a direct recruitment tool. However, there is also a significant worry that placing strict controls on social networks could actually hinder counter-terrorism efforts. "The further underground they go, the harder it is to glean information and intelligence," said Jim Gamble, a security consultant and former head of the Child Exploitation and Online Protection Centre (Ceop). "Often it is the low-level intelligence that you collect that you can then aggregate which gives you an analysis of what's happening." Mr Gamble was formerly head of counter-terrorism in Northern Ireland. There were, he said, parallels to be drawn. "There's always a risk of becoming too radical and too fundamentalist in your approach when you're trying to suppress the views of others that you disagree with. In Northern Ireland, huge mistakes were made when the government tried to starve a political party of the oxygen of publicity. I would say that that radically backfired."
Current estimates put the number of British citizens recruited to fight for radical Islamist groups in Syria and Iraq at more than 500. Mr Gamble said the recruitment process focused on singling out those who looked most susceptible. "They identify the isolated, the lonely, those people who have perhaps low self-esteem, and are looking for something, or someone." Ask.fm's site hosted several discussions regarding the practicalities of getting to Syria or Iraq. Many of these discussions remained online for a considerable amount of time - some for several weeks.
However, in an interview with the BBC, Ask.fm said it had had few requests from governments to take such material down. "In the past 18 months we've only received about a dozen requests from law enforcement," it said. "Sometimes these issues are really hard to discover when you've not got the full concept of what's going on outside the social network that you run. We really do want to forge partnerships with law enforcement to be able to take meaningful action on this." In a statement, a spokeswoman for the Met Police said 1,100 pieces of content that breach the Terrorism Act are removed each week from various online platforms - approximately 800 of these are Syria/Iraq related.
Update 09/10/14: EU commissioner Cecilia Malmström and Italian interior minister Angelino Alfano - who both hosted the dinner - have issued a statement. It reads: "The participants discussed various possible ways of addressing the challenge. It was agreed to organise joint training and awareness-raising workshops for the representatives of the law enforcement authorities, internet industry and civil society."
© BBC News
Let's talk about nude photo leaks and other forms of online harassment as what they are: civil rights violations
By Danielle Citron
7/10/2014- Over the past few weeks, a prominent—and nearly all female— group of celebrities have had their personal accounts hacked, their private nude photos stolen and exposed for the world to see. Friday brought the fourth round of the aggressive, invasive, and criminal release of leaked photos. Whether the target is a famous person or just your average civilian, these anonymous cyber mobs and individual harassers interfere with individuals’ crucial life opportunities, including the ability to express oneself, work, attend school, and establish professional reputations. Such abuse should be understood for what it is: a civil rights violation. Our civil rights laws and tradition protect an individual’s right to pursue life’s crucial endeavors free from unjust discrimination. Those endeavors include the ability to make a living, to obtain an education, to engage in civic activities, and to express oneself—without the fear of bias-motivated threats, harassment, privacy invasions, and intimidation. Consider what media critic Anita Sarkeesian has been grappling with for the past two years. After Sarkeesian announced that she was raising money on Kickstarter to fund a documentary about sexism in video games, a cyber mob descended.
Anonymous emails and tweets threatened rape.
In the past two weeks, Sarkeesian received tweets and emails with graphic threats to her and her family. The tweets included her home address and her family’s home address. The cyber mob made clear that speaking out against inequality is fraught with personal risk and professional sabotage. Her attackers’ goal is to intimidate and silence her. Revenge porn victims face a variant on this theme. Their nude photos appear on porn sites next to their contact information and alleged interest in rape. Posts falsely claim that they sleep with their students and are available for sex for money. Their employers are e-mailed their nude photos, all in an effort to ensure that they lose their jobs and cannot get new ones.
Understanding these attacks as civil rights violations is an important first step. My book Hate Crimes in Cyberspace explores how existing criminal, tort, and civil rights law can help combat some of the abuse and how important reforms are needed to catch the law up with new modes of bigoted harassment. But law is a blunt instrument and can only do so much. Moral suasion, education, and voluntary efforts are essential too. Getting us to see online abuse as the new frontier for civil rights activism will help point society in the right direction.
Danielle Citron is the Lois K. Macht Research Professor & Professor of Law at the University of Maryland Francis King Carey School of Law. She is an Affiliate Scholar at the Stanford Center on Internet and Society and an Affiliate Fellow at the Yale Information Society Project. Her book, Hate Crimes in Cyberspace, was recently published by Harvard University Press.
In the past twelve months racist attacks in Northern Ireland have increased by 50%.
6/10/2014- In the early hours of Sunday morning yet another home was attacked in South Belfast – an attack that the PSNI described as a ‘hate crime’. A bottle was thrown and smashed the living room window of a house owned by a Bangladeshi family on Ulsterville Avenue and a car owned by a Kuwaiti family was set alight. The attacks have been widely condemned by politicians from across the political spectrum. Where do the attitudes that provoke these hate crimes originate and why are racist attitudes seemingly on the increase? A few hours before the latest attack a Facebook user in South Belfast posted this video. The video has been viewed more than 10,000 times and numerous comments have been posted in support of the man responsible.
The Facebook user subsequently attempted to defend his actions seemingly oblivious to the fact that – regardless of the circumstances – verbally and racially abusing a fellow human being in broad daylight would be regarded by most as unacceptable. The true nature of his motivations are perhaps best summed up by one of his own comments on the original video thread. On Saturday 4th October Shankill Leisure Centre permitted the use of a hall to celebrate Eid al-Adha, one of the most important festivals in the Islamic calendar. The loyalist Facebook page Protestant Unionist Loyalist News TV picked up on the news with predictable results: A torrent of racist commentary followed the original post – all unchallenged by the administrators of the page.
An even more sinister Facebook page has seen significant growth in recent days. The subtly named N.I. Resistance Against Islam so far has 735 followers and users have posted a selection of choice comments. Stung by criticism, the page administrators have banned anyone who dares to challenge their racist mindset and have set up a closed group where no doubt the select few who share their warped views can interact in private (the administrators of the page are visible on some browsers). In all cases the posts and pages responsible have been reported to Facebook and complainants have received the stock response that such activity does not contravene “community standards.”
“Facebook does not permit hate speech, but distinguishes between serious and humorous speech. While we encourage you to challenge ideas, institutions, events, and practices, we do not permit individuals or groups to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition”. However Facebook adds the caveat that “because of the diversity of our community, it’s possible that something could be disagreeable or disturbing to you without meeting the criteria for being removed or blocked”. So in effect Facebook and not civil society is the final arbiter of what is or is not ‘hate speech’.
In a society that is already riddled with sectarianism, and where there is clear evidence that Facebook has been used to stir up sectarian tension in the past, is it not incumbent on the organisation to act swiftly and remove posts that would be viewed as ‘hate speech’ in everyday society? There are those who would argue that such action would be a form of censorship and an attack on free speech, but surely social media giants such as Facebook and Twitter have a social responsibility to prevent the spread of dangerous views that can lead to attacks such as this in August 2014?
© Slugger O'Toole
6/10/2014- jugendschutz.net continuously analyses how right-wing extremists try to attract young internet users and takes action against endangering or harmful content. Furthermore, jugendschutz.net focuses on prevention and develops concepts to encourage young people to engage critically with right-wing extremism on the internet. This report presents the work and findings of jugendschutz.net in the field of online right-wing extremism in 2013.
6/10/2014- The U.S. Supreme Court opens a new term Monday, but so far the justices are keeping quiet about whether or when they will tackle the gay marriage question. Last week, the justices met behind closed doors to discuss pending cases, but when they released the list of new cases added to the calendar, same-sex marriage was nowhere to be seen. But that really doesn't mean very much. About 2,000 cases have piled up over the summer, each seeking review on all manner of subjects. So when the court met last week to sift through all that, there really wasn't enough time for the justices, as a group, to focus on the same-sex marriage cases. With a big issue like this, and multiple appeals before the court, the justices need to decide which cases are the "best vehicles" (as it's known in the trade) for review. Indeed, all of the vehicle talk prompted one media wag to comment last week that all of the flossy lawyers, each pointing to their own case as the best vehicle, sounded more like car salesmen than Supreme Court advocates.
With seven cases currently before the court, the justices will likely pick just one or two to hear. They might, as Justice Ruth Bader Ginsburg suggested earlier this fall, even wait for more cases. Right now, the only cases pending before the court are lower court decisions favoring the right of same-sex couples to marry. But a Sixth Circuit Court of Appeals panel, which heard arguments last August in Ohio, sounded as if it might go the other way. If it does, that would provide the kind of traditional conflict the Supreme Court looks to resolve. Truth be told, with both sides already pressing the court to act, most court observers think the justices will want to take the plunge sooner rather than later. For now, though, all is speculation.
This term will mark the 10th year that John Roberts has served as chief justice. Without a doubt, the court has grown dramatically more conservative since his appointment. But, as Brianne Gorod of the Constitutional Accountability Center observes, the question is: "What role has John Roberts played in this movement?" Is he "strategically and deliberately leading the court to the right?" Kendall asks, "Or is it, as some have suggested, the 'Kennedy Court' or even the 'Alito Court'?" Justice Anthony Kennedy is often referred to as the "swing justice," and has written many of the court's major 5-to-4 opinions. Justice Samuel Alito is far more conservative than the justice he replaced, Sandra Day O'Connor, and has cast many votes and written major opinions that have shifted the court in a more conservative direction. The issues on the docket this term range from race and religion cases, to pregnancy discrimination, and even to threats on Facebook.
But once again the court, responding to challenges brought by conservatives, has chosen to delve into some elections issues that had been thought long settled. In a case from Arizona, the court could prevent the increasing use of citizen commissions to draw congressional district lines. Arizona, California and some other states have, in one way or another, used these commissions to take the redistricting issue out of the hands of self-interested state legislatures. But in Arizona, where the independent commission was enacted by referendum, the Republican-controlled Legislature is now challenging the practice as unconstitutional. In a case that could dramatically alter the way judicial elections are conducted, the court will decide whether states that elect judges can bar judicial candidates from personally soliciting campaign contributions. Of the 39 states with judicial elections, 30 have such bans. The test case is from Florida, where the state Supreme Court upheld that state's ban on the grounds that allowing judicial candidates to personally solicit campaign contributions would raise questions about their impartiality on the bench. Those challenging the ban say it violates their free speech rights.
Another free speech case involves the question of what constitutes a threat on Facebook. The facts are pretty hairy. Anthony Elonis was convicted of making threats against his estranged wife and an FBI agent. His posts said things like, "I'm not going to rest until your body is a mess, soaked in blood and dying from all the little cuts." Soon he moved on to suggest that he might make "a name" for himself with a school shooting. "Hell hath no fury like a crazy man in a kindergarten class. The only question is ... which one?" At that point, a female FBI agent paid him a visit, which provoked a post in which he said that he'd had to control himself not to "slit her throat, leave her bleeding from her jugular in the arms of her partner." At Elonis' trial, the judge instructed the jurors that to convict, they had to conclude that this was not merely exaggeration: his Facebook posts needed to be statements that a reasonable person would interpret as a serious expression of an intention to inflict bodily injury. Elonis contended that he was just mimicking rap songs - indeed, he often linked to songs in his posts. He argued that he should not be convicted without actual proof that he intended to threaten, intimidate or harm.
The intent standard that Elonis argued for might make it much more difficult to win a conviction for making illegal threats. But whatever rule the justices come up with, observes University of Virginia law professor Leslie Kendrick, it will likely apply not just to Facebook and Twitter, but to all forms of communication — including people speaking face to face or publishing in the newspaper. In other words, says Kendrick, when crafting a rule, the justices will ask if the standard "is going to chill people who engage in speech that is borderline but ultimately protected." Protected, that is, by the First Amendment guarantee of free speech. Most court experts seem to believe that Elonis may win because of the culture of today's social media. "The context of rap music these days suggests that what Elonis put out there really isn't all that unusual for what's going on on Facebook and what's going on in the popular culture," says professor William Marshall of the University of North Carolina School of Law.
After all, the current Supreme Court may be viewed as conservative, but it has, with little or no dissent, already upheld a fair amount of "fringe speech" — whether it's crush videos, demonstrations at military funerals or the sale of violent video games to kids. Not everyone, however, agrees that the Facebook threat case is in the same category. Former Solicitor General Gregory Garre notes that Elonis' posts "ticked off all the boxes" — domestic violence, school shootings, violence against a federal officer. Garre says he "wouldn't be surprised if [Elonis' Facebook posts] struck the justices as something very problematic." A different part of the First Amendment — the free exercise of religion — is at issue in two cases involving federal statutes. One case tests whether retailer Abercrombie & Fitch illegally discriminated against a Muslim woman when she was denied a job because her headscarf conflicted with the company's dress code. The other case tests Arkansas' refusal to allow a Muslim prisoner to wear a short beard for religious purposes.
The prisoner sued under a federal law aimed at shoring up prisoners' religious rights. Interestingly, in this case, the prisoner has the backing of a wide variety of corrections officials and organizations, plus the federal government. The federal prison system and 43 states allow beards, largely because it is much easier to hide weapons and other contraband in clothes, hair and body cavities. There is a similar coalition of strange bedfellows in a pregnancy discrimination case before the court. Anti-abortion and women's rights groups have joined together to urge the court to require employers to treat pregnancy the same way other temporary disabilities are treated on the job. In this case, a UPS driver asked for light duty, carrying less than 20 pounds, during the latter part of her pregnancy. But the company refused, and she lost both her job and her insurance coverage.
The company contends that it had "no animus" toward the employee because of her pregnancy; her request for light duty just wasn't covered by either the provisions of federal disability law or the union contract. She argues that she should have been covered under the 1978 federal law barring discrimination based on pregnancy. The case is very important for businesses because pregnancy accommodations cost money. But it's very important to women too, observes Emily Martin of the National Women's Law Center. "Lots of women with some sort of work limitation arising out of pregnancy face similar issues — especially women in low-wage jobs that are often more physically demanding," she says. The first case the court hears on Monday is one that amazes former Solicitor General Paul Clement, who wants to know: "How in the world did we go 225 years and not have this issue decided?" The issue is whether police may make a traffic stop based on a mistaken understanding of the law, and then use evidence from a subsequent search to convict the car's occupants of a crime.
Other controversies to look forward to include cases that involve racial gerrymandering and Medicaid funding, and a major housing discrimination case that could make it harder to prove discrimination. The court will even be tackling a case about fish — yes, fish! It's an obstruction of justice case that, depending on your point of view, involves either the deliberate concealment of illegal fishing or a classic example of prosecutorial overreach. More to come on that later.
He calls himself Montero. But that’s all that’s known about him – that and his “vicious” anti-Semitic posts on Twitter. And his hate speech has been singled out as one reason that the country’s Jewish community is on high alert during Yom Kippur this weekend.
4/10/2014- Montero, says Mary Kluk, the Jewish Board of Deputies national chairman, “is probably one of the most vicious individuals we have ever come across”. He is untraceable. His Twitter account leaves no clue as to who he is, what he does or who he works for. On Thursday alone, he posted 50 anti-Semitic pictures, and he regularly makes reference to the board, she reveals. These messages include: “F*** the Kikes,” and “Jew parasites should all be killed and wiped off the earth.” Others profess: “Keep calm, kick a kike,” and “I like my Jews like I like my bread… toasted.” “I support Isis and all other Muslim freedom fighters who kill Jews… Every Jew they kill is one less I have to kill.” Synagogues around the country have increased their security – and are now guarded by 24-hour security teams, concrete barriers and the Joburg Metro Police, who have closed roads during worship.
Joburg metro police spokesman Wayne Minnaar says the board approached the traffic police to “assist with security and road closures during the holy month”, though he could not confirm what threats the Jewish community faced. Kluk claims that since the recent war in Gaza, “this anti-Semitic rhetoric has reached levels unseen for many decades. We are concerned about an increased security risk to our community over the high Holy days. “What is particularly alarming is his (Montero’s) ability to tweet anti-Semitic images with untold venom. He talks about personally killing Jews and supporting the work of Isis,” she said. “This is an individual who we feel deems [sic] thorough investigation as he violates the constitutional laws of this country.” Groups he subscribes to include New age Nazi, Notorious anti-Semites and Neo-Nazi Monsters. Brigadier Neville Malila, the provincial police spokesman, says they have not received any complaints by, or threats to, the Jewish community. “The deputy provincial commissioner as well as the provincial CPF (community policing forum) chairperson are in constant liaison with the Jewish board to discuss security issues.”
Moulana Ebrahim Bham, the secretary-general of the Council for Muslim Theologians, believes it is “alarming” the Jewish community perceived itself to be under an increased security threat from “jihad terrorism”, as stated by Chief Rabbi Warren Goldstein. “As the Jewish community beefs up safety and security around shuls and tips its members off on precautions, it’s only proper that any credible reports of threats be brought to the attention of the relevant national authorities. “By the choice of his words, the rabbi’s claim places the source of this threat on the doorstep of Muslims,” said Bham. “As a Muslim community, we are not aware of any such condemnable plots of potential attacks on South African soil. “It is therefore important that the rabbi should be careful with his language that is prejudicial and likely to incite violence against members of the Muslim community. “It’s our sincere hope that this development will not again lead to situations where clandestine Zionist-linked security agencies start to harass innocent civilians at public facilities, as has happened before, even when those targeted did not pose any danger to anyone.”
On Thursday, the Jewish Board met President Jacob Zuma and a high-level government delegation where it briefed him on “the sharp rise in anti-Semitic activity in South Africa, including threats and intimidation against the Jewish community and its leadership”. Zuma, said the board, “stressed that his government remained committed to combating such prejudice. He further emphasised the need for there to be harmony between people of different backgrounds and opinions” in the country. Referring to a Twitter post cited earlier by Kluk that “Hitler was right, pity he didn’t finish off all Jews”, Anneli Botha, a terrorism expert at the Institute for Security Studies, believes the Jewish community’s reaction to these social media messages is “a bit extreme”. “The reality is that there are many people with anti-Semitic views in the country, and it’s sad that’s the case, but to heighten security based on messages on social media, that might be taking it a bit far.”
© The South African Independent
On the morning of Rosh Hashanah, a petition signed by 10,306 people arrived at Facebook headquarters asking the company to change its policy on Holocaust denial. Facebook’s current position on Holocaust denial is that “the mere statement of denying the Holocaust is not a violation of our policies”. They justify this by treating the Holocaust not as a unique tragedy in human history, but as just another historical event, and they say they won’t prohibit Holocaust denial because they “recognize people’s right to be factually wrong about historic events”.
By Andre Oboler
3/10/2014- A letter from Facebook outlining their position is on the public record as part of a report on online antisemitism published by the Israeli Government last year. In recent times Facebook has moved away from the inflexible application of generic rules and has reversed its position across a whole range of issues. The new approach is much more strongly based on common sense and on meeting reasonable public expectations about community standards. The arrival of the new petition is a timely call for Facebook, and its founder Mark Zuckerberg, to reflect and reconsider their position on Holocaust denial, which remains an open wound not only for the Jewish community but for civil society more broadly. The existing policy simply cannot be sustained in light of the way Facebook in 2014 responds to similar concerns.
In May 2013, after two years of regarding content that made light of rape as “humorous”, and therefore “acceptable” on Facebook, the company relented and agreed that misogyny was not acceptable under its community standards. At the time Facebook stated that “it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate”. This is another positive example of Facebook changing its approach to meet users’ expectations. It’s a pity it took two years and a major campaign, including loss of significant advertising, to make this happen.
A few months ago Facebook quietly lifted a ban on pictures of breastfeeding women. The ban was considered a form of gender-based discrimination by some women’s groups. It dated back to 2008, and news of a major effort to enforce it was announced by the same spokesperson, and at the same time, as news of Facebook’s position of permitting Holocaust denial on its social media platform. Michael Arrington wrote a very powerful article about the hypocrisy of these policies, titled “Jew Haters Welcome At Facebook, As Long As They Aren’t Lactating”. It seems half the issue has been solved, and the problem we are left with is simply “Jew Haters Welcome at Facebook”. It’s time that was addressed.
In recent days Facebook has reversed course over an effort to close the profiles of members of the LGBT community on the basis that they were not using their ‘real names’. As David Campos explained, “for many members of the LGBT community the ability to self-identify is a matter of health and safety. Not allowing drag performers, transgender people and other members of our community to go by their chosen names can result in violence, stalking, violations of privacy and repercussions at work.” In this case Facebook recognised the damage its approach was causing and reversed course. Holocaust denial, too, is dangerous: it helps rehabilitate Nazi groups and facilitate their recruitment drives.
The problem of users posting Holocaust denial on Facebook was first raised at a meeting of the Global Forum to Combat Antisemitism in February 2008, where it was one of the primary examples of “antisemitism 2.0”. Facebook’s unwillingness to tackle this problem gained major media attention from early 2009. Their position is so out of touch with global public expectations that it has led to international meetings in which Facebook has been questioned, a protest letter from Holocaust survivors organised by the Simon Wiesenthal Center, a grassroots protest outside Facebook’s offices, efforts to resolve the issue through cooperation by the Inter-Parliamentary Coalition to Combat Antisemitism, and many other initiatives from organisations, communities, individuals and companies. Facebook has grown as a company, and it has also matured, but this one issue is a holdover from social media history.
The new petition is the result of three years of dedicated work and comes from the administrators of the closed Facebook group “Ban ALL Holocaust Denial Pages and Groups from Facebook”, who also operate a Facebook page with just shy of 22,000 supporters. The decision to close the petition and send it to Facebook at this point was, I believe, a good one. Facebook’s response to the LGBT issue shows they are now taking public concern more seriously and are able to check themselves and reverse course when needed. The change of policy in respect to pictures of breastfeeding mothers shows that even old, well-established positions can be changed.
As Facebook improves the way it deals with sensitive topics and community expectations, the lack of resolution on the Holocaust denial problem is a weight that grows heavier. Holocaust denial should not be a sacrificial goat, blessed by Facebook, and sent into the wilderness to placate those demanding the sort of free speech which costs others their dignity and safety. This Yom Kippur, it’s time for those at Facebook to reflect, reconsider, and yes, repent. It’s time for those Holocaust survivors who wrote to Facebook in 2011 to receive a new answer, while at least some are still alive to receive it. It’s time this issue was put to bed.
Dr Andre Oboler is CEO of the Online Hate Prevention Institute and co-chair of the Online Antisemitism Working Group of the Global Forum to Combat Antisemitism.
© The Online Hate Prevention Institute
Google grapples with the consequences of a controversial ruling on the boundary between privacy and free speech
3/10/2014- Sometimes a local spark can cause a global fire. In 1998 La Vanguardia, a Spanish daily, ran an announcement publicising the auction of a house to pay taxes owed by Mario Costeja González, a lawyer. The event would have been consigned to oblivion had the newspaper not digitised its archives a few years later. Instead, it came first in Google’s results for searches for Mr Costeja’s name, causing him all manner of professional problems. When the online giant refused to remove links to the material, Mr Costeja turned to Spain’s data-protection authority. The case ended up in the European Court of Justice (ECJ), which ruled in May that Google must remove certain links on request. The ruling has established a digital “right to be forgotten”—and forced Google to tackle one of the thorniest problems of the internet age: setting the boundary between privacy and freedom of speech.
The two rights had coexisted, occasionally uneasily, offline. But online, border skirmishes have become increasingly common. “It’s like two friends who don’t always get along, but are now being confined to one room,” says Luciano Floridi, a professor of philosophy and the ethics of information at Oxford University. Complicating matters is a transatlantic split. America allows almost no exceptions to the first amendment, which guarantees freedom of speech. Europe, not least because of its experiences of fascism and communism, champions privacy. The ECJ’s ruling was vague. Even if information is correct and was published legally, the court said, Google (or indeed any search engine) must grant requests not to show links to it if it is “inadequate, irrelevant or no longer relevant”—unless there is a “preponderant” public interest, perhaps because it is about a public figure. With no appeal possible, Google went to work. It helped that it already had a procedure for removing links to copyrighted material published without permission. Just a few weeks later it had put a form online for removal requests.
The firm’s dozens of newly hired lawyers and paralegals have their work cut out. Between June and mid-September, it received 135,000 requests referring to 470,000 links. Most came from Britain, France and Germany, Google says. It will publish more detailed statistics soon. Meanwhile numbers from Forget.me, a free website that makes filing removal requests easier, give a clue to the sort of information people want forgotten. Nearly half of the more than 17,000 cases filed via the service refer to simple personal information such as home address, income, political beliefs or that the subject has been laid off. Nearly 60% were refused. If the material is about professional conduct or created by the person now asking that links to it be deleted, removal is unlikely. Requests relating to information which is relevant, was published recently and is of public interest are also likely to fail.
Many of the decisions look quite straightforward. Google has removed links to “revenge porn”—nude pictures put online by an ex-boyfriend—and to the fact that someone was infected with HIV a decade ago. It said no to a paedophile who wanted links to articles about his conviction removed, and to doctors objecting to patient reviews. In between, though, were harder cases: reports of a violent crime committed by someone later acquitted because of mental disability; an article in a local paper about a teenager who years ago injured a passenger while driving drunk; the name on the membership list of a far-right party of someone who no longer holds such views. The first of these Google turned down; the other two it granted. The process is “still evolving” says Peter Fleischer, Google’s global privacy counsel. A Dutch court recently decided the first right-to-be-forgotten case, upholding Google’s refusal to remove a link to information about a convicted violent criminal. After more appeals have been heard by data-protection authorities and courts, the firm can adjust its decision-making. The continent’s privacy regulators are working on shared guidelines for appeals.
Another steer will come from an advisory council set up by Google itself. Its eight members include Mr Floridi; Jimmy Wales, the founder of Wikipedia; a journalist at Le Monde, a French paper; and a former director of Spain’s data-protection agency. It has already held four public meetings in as many European cities, with three more to come before it reports back to Google early next year. One question asked at the meeting in Paris on September 25th was how users should be made aware of the fact that the results of a search have been affected by the ruling. Currently, a notice at the bottom of the results page says that “some results may have been removed”, which perhaps defeats the purpose by raising a red flag. Another was how publishers should react. In Britain newspapers published articles about the fact that Google no longer linked to previous articles, again bringing to prominence information that the firm had found merited being forgotten.
More broadly, many wonder whether Google should remove links from searches everywhere, not just on its European sites. That would lead to a transatlantic row, but could also trigger a debate in America about why, for instance, American victims of revenge porn should not also be able to ask Google to stop linking to such content. Some have dismissed Google’s advisory council and its tour through Europe as a public-relations exercise. “Google is trying to set the terms of the debate,” said Isabelle Falque-Pierrotin, the head of France’s data-protection watchdog, last month. Predictably, those involved see it differently. Asked why he joined Google’s council, one of the members said: “Because it’s terribly interesting.” As the virtual world’s boundaries are redrawn, it matters who gets to hold the pen.
Clarification: Google displays "some results may have been removed" at the bottom of the results page for any search in Europe for a name (unless it is that of a public figure), not just those for names of people whose removal requests have been granted.
© The Economist
Berlin-based SoundCloud, which allows anyone to share audio files online, plays host to huge numbers of jihadi accounts and postings supporting the Islamic State (Isis). But the uploads do not contravene German law and are not being caught by the startup's moderators.
2/10/2014- A search for the word “jihad” in Arabic on the site returned page after page of matches on Monday, although it was impossible to say how many track postings there were as SoundCloud's counter only goes up to 500. Many feature amateur images from Middle Eastern conflicts, including men brandishing black Isis flags and Kalashnikov rifles, or embellished propaganda images of figures such as Osama bin Laden. There are also several accounts whose names are variations on Isis and Islamic State.
Commonly posted content includes Nasheed songs which have been used by Salafists to accompany propaganda videos. Three Nasheed “battle songs” by former Berlin rapper Denis Cuspert, who went by the name Deso Dogg before his conversion to radical Islam, were banned by the Federal Department for Media Harmful to Young Persons (BPjM) in 2012. Cuspert has since left Germany to fight for Isis in Syria and has become close to the group's leader Abu Bakr Al-Baghdadi, according to a dossier published recently by the Federal Office for the Protection of the Constitution (BfV). He is just one of almost 400 fighters believed to have left Germany for Syria since 2012.
Banned in Germany
In September 2014, the Interior Ministry took the drastic step of banning Isis in Germany. This means that the group and its symbols are illegal and any activities undertaken on behalf of the group, including publicizing or supporting it, are forbidden. Activities supporting Isis are punishable under the criminal law's section 89a, “Preparation of a serious violent act that endangers the state”. If Cuspert, for example, were to return to Germany and publish pro-Isis propaganda, he could be prosecuted under the law. But the nature of the internet causes a problem for authorities when regulating content posted to platforms like SoundCloud, which hosts its content on Amazon Web Services servers, all of which are physically located outside of Germany.
'The internet isn't German'
“The internet isn't German, and most of the sites which contain this content are not hosted in Germany,” a BfV spokeswoman told The Local. “Of course we try to have these things removed. We flag things up to them [social networks], the police can do that too if a crime has been committed.” But she added court cases were only likely to be brought under the Isis ban against individuals or companies who upload propaganda in Germany. “The point of contact is an act committed within Germany,” an Interior Ministry spokeswoman confirmed. “It's not about whether it's a German company, but where the servers are located."
© The Local - Germany
2/10/2014- The owner of a well-reviewed Bushwick coffee shop took to Instagram on Wednesday to tell the world that just about the only thing worse than a bad coffee is a greedy Jew. Why is this new, artisanal coffee shop (simply known as the Coffee Shop) mad at the People of the Bagel? Because they’re gentrifying, silly, and pushing out real Bushwick residents, like proprietors of fancy coffee shops. Hello, pot. Meet kettle.
Of course, that might not be the real reason, as his Instagram screed is barely intelligible, reading, in part:
My stubborn Bushwick-oroginal neighbor is a hoarder and a mess- true.. and he's refused selling his building for lots and lots of money. His building and treatment of it makes the hood look much less attractive and I would like him to either clean up or move along. BUT NOT be bought out by Jews however, who in this case (and many cases separate- SORRY!) function via greed and dominance. A laymen's terms version of a story would simply be- buying buildings, cutting apartments in half, calling them luxurious, and ricing them at double. Bushwick IS rising and progressing, and bettering, but us contributing or just appreciating this rise and over all positive change do not want to be lumped with greedy infiltrators.
Further clues are found on the shop’s Facebook page, where owner Michael Avila posted a video praising ultraorthodox Jews for opposing Zionism. “I love LOVE these Jews [smiley emoticon],” he wrote. “These men have the right idea.” On his personal Facebook page, he acknowledged the controversy, writing, “Sometimes I cause a little trouble just because I know I can handle it. I'm pretty good with the fine line so I go for it.” (There's expanded anti-Jewish ranting, too.) Yeah, that “fine line” post seems like hubris now. Avila explained himself to DNAinfo, saying, "I think they [Jews] took it personally even if it doesn’t apply to them. Sometimes I feel misunderstood. I’m fine with being misunderstood. I’m quite used to it. I don’t really mind." (This post originally featured a photo of Avila with Giovanni Finotto, a man Avila identified as his mentor. Finotto has vehemently disavowed Avila's comments, saying, "Regardless of excuses he has made, claiming that he was misunderstood, his behavior is completely inexcusable.")
Disagreeing with Zionism is one thing. Expanding your views into a rant about greedy Jews in Brooklyn? Quite another. And that’s unfortunate, since by most accounts, the coffee was good. Now it just smells like decaf. And anti-Semitism.
© New York Magazine
Could Artificial Intelligence Root Out Online Hate?
2/10/2014- Last week, the Anti-Defamation League released a list of “Best Practices” to counter hate speech on the Internet. Sober and serious, it includes suggestions like “Share knowledge and help develop educational materials and programs that encourage critical thinking in both proactive and reactive online activity” and “Respond to user reports in a timely manner.” It even advises to try “comedy and satire when appropriate.” Google’s executive chairman, Eric Schmidt, hopes there might one day be a more exciting option for dealing with hate speech: artificial intelligence.
“AI systems may ultimately allow us to better prioritize and better understand how to rank and deal with evil speech,” Schmidt told JTA in a phone interview. Schmidt, who was presented last Friday with the ADL International Leadership Award, said Google’s current philosophy is for its search engine to mirror what is available on the Internet as accurately as possible. Google searches are based on an algorithm that is content neutral, so the prospect of nudging aside hate speech would mark a shift.
“It’s a very tight line to walk because we are against filtering and we are against censorship, so you have to be careful here,” Schmidt said. Even without invisible anti-hate bots, Schmidt said the Internet makes it easier to track and counter hate — and to identify hateful people, if necessary — and thus is a greater tool in defeating hate rather than spreading it. Of course, identifying hate speech via computer will be plenty difficult given how often humans disagree over what is or isn’t hateful. And given the prevalence of existing concerns about privacy and tracking, AI-enhanced search engines will probably add another layer of complexity to such debates rather than resolving them. Who knows? They may even provide some fodder for comedy and satire. When appropriate, of course.
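The difficulty of identifying hate speech by computer can be seen even in the crudest automated approach. As a purely illustrative sketch (not anything Google or the ADL has described), a naive keyword filter has no notion of context, so it flags a news report quoting abusive language exactly as it flags the abuse itself:

```python
def naive_keyword_filter(text, blocklist):
    """Flag text if it contains any blocklisted term (case-insensitive).

    A deliberately crude baseline: pure substring matching cannot
    distinguish an attack from reporting, quotation or counter-speech
    that uses the same words.
    """
    lowered = text.lower()
    return any(term in lowered for term in blocklist)

# "SLUR" stands in for an actual slur in these hypothetical examples.
blocklist = {"slur"}
attack = "You people are SLURs and should get out"            # genuine abuse
report = "The ADL condemned a tweet calling refugees SLURs"   # reporting on abuse

print(naive_keyword_filter(attack, blocklist))   # flagged, as intended
print(naive_keyword_filter(report, blocklist))   # also flagged: a false positive
```

Both strings trip the filter, which is exactly the kind of ambiguity an AI system would have to resolve, and which humans themselves often disagree about.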
© The Forward
Facebook has agreed to make changes to the way it works, after locking the accounts of a number of drag queens because they weren’t using their “legal names”.
1/10/2014- The social network has been under fire over the policy, after it last month began locking the accounts of users with noticeable drag names. Following protests the company agreed to temporarily reinstate some drag performers’ profiles, but previously insisted the policy itself would remain unchanged. However, at a meeting with the San Francisco drag community organised by Supervisor David Campos today, Facebook representatives said the ‘flawed’ policy had hurt people, and would be changed. Mr Campos said: “The drag queens spoke and Facebook listened! Facebook agreed that the real names policy is flawed and has unintentionally hurt members of our community. “We have their commitment that they will be making substantive changes soon and we have every reason to believe them. “Facebook apologized to the community and has committed to removing any language requiring that you use your legal name. “They’re working on technical solutions to make sure that nobody has their name changed unless they want it to be changed and to help better differentiate between fake profiles and authentic ones.”
Drag artist RuPaul had previously weighed in to the controversy, saying: “It’s bad policy when Facebook strips the rights of creative individuals who have blossomed into something even more fabulous than the name their mama gave them.” Facebook’s Chief Product Officer, Chris Cox, updated his page with a lengthy apology which read: “I want to apologize to the affected community of drag queens, drag kings, transgender, and extensive community of our friends, neighbors, and members of the LGBT community for the hardship that we’ve put you through in dealing with your Facebook accounts over the past few weeks. “In the two weeks since the real-name policy issues surfaced, we’ve had the chance to hear from many of you in these communities and understand the policy more clearly as you experience it. We’ve also come to understand how painful this has been. We owe you a better service and a better experience using Facebook, and we’re going to fix the way this policy gets handled so everyone affected here can go back to using Facebook as you were.
“The way this happened took us off guard. An individual on Facebook decided to report several hundred of these accounts as fake. These reports were among the several hundred thousand fake name reports we process every single week, 99 percent of which are bad actors doing bad things: impersonation, bullying, trolling, domestic violence, scams, hate speech, and more — so we didn’t notice the pattern. “Our policy has never been to require everyone on Facebook to use their legal name. The spirit of our policy is that everyone on Facebook uses the authentic name they use in real life.
“We see through this event that there’s lots of room for improvement in the reporting and enforcement mechanisms, tools for understanding who’s real and who’s not, and the customer service for anyone who’s affected. These have not worked flawlessly and we need to fix that. With this input, we’re already underway building better tools for authenticating the Sister Romas of the world while not opening up Facebook to bad actors. And we’re taking measures to provide much more deliberate customer service to those accounts that get flagged so that we can manage these in a less abrupt and more thoughtful way. To everyone affected by this, thank you for working through this with us and helping us to improve the safety and authenticity of the Facebook experience for everyone.”
© Pink News
By Raihan Ismail
1/10/2014- Following the national news and social media over the last fortnight, one might be led to believe that women wearing burqas and niqabs are as significant a threat to Australia's security as the alarming number of young men who have been caught by the spell of ISIS. The burqa kerfuffle seemed to escalate when Liberal Senator Cory Bernardi woke up to the news of anti-terror operations in Sydney and saw pictures of a veiled woman outside the raided houses. He responded on Twitter by referring to the burqa as a "shroud of oppression and flag of fundamentalism". Presumably Bernardi saw different news footage than I did, as the woman displayed prominently in the news photographs I saw was wearing the niqab. The niqab is a face-covering veil, worn by a very small number of Australian Muslims, which leaves open a slit for the eyes. The burqa, on the other hand, even more rarely worn, has mesh covering the eyes. Whatever Bernardi saw or meant, his comments unleashed yet another firestorm of Islamophobia on its most fertile breeding ground: the internet.
Last week, after Bernardi's comments, I was interviewed by the ABC for an explanatory article on the burqa, the niqab, and my choice of garment, the hijab, which covers only a woman's hair, neck and shoulders. Bizarrely, when posted by the ABC on Facebook, the article received more comments than the ABC's reports on the anti-terror raids themselves. The comments section is sobering reading for anyone with any doubts about the perniciousness of Islamophobia in Australia. To give one example from among the comments, a self-described "maintenance planner" for Fortescue Metals Group in Perth stated: "It's Australia you came here for whatever reasons embrace our culture" [sic], and asked why minorities should be allowed to "influence our awesome country".
Twitter is another haven for Islamophobia. The ABC tweeted the article, accompanying it with the question "Why do some women wear the burqa, niqab or hijab?" A real estate agent from Frankston, Victoria, responded "Cause they are butt ugly". This real estate agent is one of over 800 on Twitter who openly follow a self-described mother, psychology student and cat lover from Perth, who tweets almost daily with missives such as "It's time practicing Islam in Australia is outlawed and all that [sic] practice it are charged and prosecuted", and diatribes against Islam as a "cult" of violence and paedophilia.
This could all be ignored, and it would almost be amusing, if it were not for the fact that Islamophobia is increasingly affecting real people in their daily lives. Last week, a mosque in Brisbane was spray-painted with the words "Get the f--k out of our country!" A teacher and a student at a Sydney school were reportedly threatened with a knife by an uninvited guest who asked whether it was a "Muslim school". Even in Canberra, an enlightened and educated town, I have been harassed on the streets and in shopping malls, from Woden, to Belconnen to Civic. Sometimes it is no more than a snarling look from a passer-by; sometimes it is the muttering of an epithet such as "terrorist"; on two occasions it has amounted to physical intimidation.
This is the real and ultimate manifestation of Islamophobia. It is practiced by a small group of Australians, no more representative of Australia than ISIS sympathisers are of Muslims, but their actions are making Muslims – and women in particular – fear for their safety. The Islamophobic movement is not as small as we would wish. Nor is it hidden in the dark corners of the internet. Many online practitioners of Islamophobia can very easily be identified with full names, and their addresses and employers traced with a few short Google searches. Of course, the rampant Islamophobia should not obscure the presence of plausible and considered critiques of the burqa and the niqab. They are worn by a small minority of Muslim women. Most Muslims consider the garments to be the result of an unnecessarily strict interpretation of the religion's modesty requirements, grounded more in culture than in the text of the Quran or the teachings of its principal prophet, Muhammad.
Those concerned with women's rights suggest, with some force, that some women might wear the burqa or the niqab due to oppression from male relatives, especially husbands. But this is not sufficient reason to ban the wearing of the garments. Where they are worn because of oppression, any ban would simply result in the women concerned remaining house‑bound, while women who wear the garments as a genuine personal choice would find their religious freedoms curbed by the state. Laws banning the burqa or niqab in limited places, or requiring their removal for identification and security reasons, may have more merit. But it needs to be demonstrated that people wearing the garments pose a genuine security risk, and that the laws would be effective in addressing that risk. Without that justification, off-the-cuff calls by politicians to ban the garments, whether generally or in limited circumstances, do no more than inflame the internet hordes. The effect of this practice on Australian Muslims is real.
Dr Ismail is an Associate Lecturer in Middle East politics and Islamic studies at the Centre for Arab and Islamic Studies (The Middle East and Central Asia) at the Australian National University.
© The Sydney Morning Herald
A 27-year-old man has been handed a €7,200 fine and a one-year prison sentence for inciting hatred and re-engagement in National Socialist activities.
26/9/2014- Korneuburg Regional Court convicted the man after he confessed to posting countless Nazi and xenophobic comments and content online. The prosecutor noted that he had trivialised the Holocaust and had an ‘88’ tattoo on his back, which stands for HH, or Heil Hitler. The prosecutor said he would not bother reading out any of the man’s postings as “any normal person would find them disgraceful”. When questioned, the 27-year-old admitted that he had extreme right-wing views and said that he had developed an aversion to immigrants, Jews, Muslims and Africans since being at school. He also admitted possessing illegal weapons purchased in the Czech Republic. The 27-year-old already had a criminal record after being involved in violent brawls. His defence argued that he had been unemployed for some time and in his frustration had become influenced by right-wing propaganda. He said that he has since had most of his tattoos removed, or altered into Hawaiian symbols, and was a “changed man”.
© The Local - Austria
25/9/2014- Facebook has been the center of controversy many times, but this may be the first time that their changing of the rules may hit them where it hurts. LGBT+ users who are shocked, saddened and offended by Facebook's new "real name" policy are flocking to a new network: Ello. If you haven't heard of Ello before this week, you're not alone. Just this morning my Facebook timeline blew up with friends offering invite codes for what I assumed was a new Gilt-like shopping site, and what turned out to be a new and friendlier social network, one that would allow anyone who wanted to be a part of it to be who they wanted to be, complete with the name they've chosen for themselves.
Ello's uptick in popularity comes from Facebook's new decree that everyone on the site must now use their real name. For some, like me, this isn't a problem. I use my real name for everything (because I am fairly histrionic). For others, those who are better known by their drag names, those who are concerned about being stalked and those who don't want to be found under their real name, this is a huge problem. Facebook claims that the new policy (which requires all users to register under the name which appears on their ID and not under GIRL YOULOOKINGFINE) is meant to keep the community safe, but The Daily Dot points out that it may also be a way of making performers migrate from personal profiles to fan pages in an effort to make more coin for the site's already overflowing coffers. And, according to Sister Roma, a Sister of Perpetual Indulgence who's been very vocal about the new rules, using your legal name might even be dangerous or traumatizing for some.
This issue is discriminatory against transgender and other nonconforming individuals who have often escaped a painful past. They've reinvented themselves or been born again and made whole, adopting names and identities that do not necessarily match that on their driver's license. Enter Ello, the Facebook alternative that's less icky than Google+, ad-free and willing to let you be the person you've always wanted to be. Well, with carefully chosen photographs and status updates, of course. According to The Daily Dot, more and more users have been flocking to the site and, after an influx of radical faeries, Ello's creator says that the site is having a huge surge in registrations from those in the LGBT+ community. Ello is refreshingly simple, according to creator Paul Budnitz. The Daily Dot reports that the social network's abuse team can quickly respond to users and that the network takes any form of harassment very seriously. "You don't have to use your 'real name' to be on Ello. We encourage people to be whoever they want to be," Budnitz said. "All we ask is that everyone abide by our rules (which are posted on the site) that include standards of behavior that apply to everyone. We have a zero tolerance policy for hate, stalking, trolls, and other negative behavior and we'll permanently ban and nuke accounts of anyone who does any of this, ever."
Awesome! No wonder people are migrating. But how capable is Ello of handling even a small percentage of users? It's definitely not big yet, but as word spreads, how long before it's also inundated with more users than the abuse team can handle? And how long before Ello's creators decide that ad revenue isn't just desirable, but possibly necessary? As for Sister Roma, she'll continue fighting Facebook's new policies. A protest is scheduled for October 2nd.
Major Internet Companies Express Support for Initiative
23/9/2014- The Anti-Defamation League (ADL) today announced the release of “Best Practices for Responding to Cyberhate,” a new initiative that establishes guideposts for the industry and the Internet community to help prevent the spread of online hate speech. The Best Practices initiative is the outcome of months of discussions and deliberations by an industry Working Group on Cyberhate convened by ADL in an effort to develop a coordinated approach to the growing problem of online hate speech, including anti-Semitism, anti-Muslim bigotry, racism, homophobia, misogyny, xenophobia and other forms of online hate. Members of the Working Group included leading Internet providers, civil society leaders, representatives of the legal community, and academia.
As participants in the Working Group, representatives of Facebook, Google/YouTube, Microsoft, Twitter, and Yahoo have expressed support for ADL’s efforts. They were among those who offered advice to ADL in the formulation of the Best Practices, and the final document embodies some of their own current practices. In conjunction with today’s announcement, these companies are taking new steps to remind their own communities of their policies regarding online hate and how users can respond when they encounter it.
“We challenged ourselves collectively to come up with effective ways to confront online hatred, to educate about its dangers and to encourage individuals and communities to speak out,” said Abraham H. Foxman, ADL National Director and co-author, with Christopher Wolf, of Viral Hate: Containing Its Spread on the Internet. “The Best Practices are not a call for censorship, but rather a recognition that effective strategies are needed to ensure that providers and the wider Internet community work together to address the harmful consequences of online hatred. This is an opportunity for the Internet community to present a united front in the fight against cyberhate.”
“It is our hope the Best Practices will provide useful and important guideposts for all those willing to join in the effort to address the challenge of cyberhate,” said Christopher Wolf and Art Reidel, ADL leaders and co-chairs of the Working Group. “We urge members of the Internet community to express their support for this effort and to publicize their own independent efforts to counter cyberhate. We believe that, if adopted widely, these Best Practices could contribute significantly to countering cyberhate.”
The Best Practices call on providers to:
Take reports about cyberhate seriously, mindful of the fundamental principles of free expression, human dignity, personal safety and respect for the rule of law.
Where they feature user-generated content, offer users a clear explanation of their approach to evaluating and resolving reports of hateful content, highlighting their relevant terms of service.
Offer user-friendly mechanisms and procedures for reporting hateful content.
Respond to user reports in a timely manner.
Enforce whatever sanctions their terms of service contemplate in a consistent and fair manner.
The Best Practices call on the Internet Community to:
Work together to address the harmful consequences of online hatred.
Identify, implement and/or encourage effective strategies of counter-speech — including direct response; comedy and satire when appropriate; or simply setting the record straight.
Share knowledge and help develop educational materials and programs that encourage critical thinking in both proactive and reactive online activity.
Encourage other interested parties to help raise awareness of the problem of cyberhate and the urgent need to address it.
Welcome new thinking and new initiatives to promote a civil online environment.
ADL has long played a leading role in raising awareness about hate on the Internet and working with major industry providers to address the challenge it poses. In May 2012, the Inter-Parliamentary Coalition for Combating Anti-Semitism (ICCA), an organization comprised of parliamentarians from around the world working to combat resurgent anti-Semitism, asked ADL to convene the Working Group on Cyberhate, including representatives of the Internet industry, civil society, the legal community and academia, with a mandate to develop recommendations for the most effective response to manifestations of hate and bigotry online.
In the coming weeks, ADL and industry leaders will be urging others in the Internet community to join in this effort. A number expressed support for the initiative on its launch today. “Facebook supports ADL’s efforts to address and counter cyberhate, and the best practices outlined today provide valuable ways for all members of the Internet community to engage on this issue,” said Monika Bickert, head of global policy management at Facebook. “We are committed to creating a safe and respectful platform for everyone who uses Facebook.” “Every day, millions of people post content on YouTube, Blogger, and Google+. In order to maintain a safe and vibrant community across our platforms, we offer tools to report hateful content, and act quickly to remove content that violates our policies,” Google said in a statement. “We support the ADL’s continued efforts to combat hatred online.”
“Microsoft is committed to providing a safe and enjoyable online experience for our customers, and to enforcing policies against abuse and harassment on our online services, while continuing to keep freedom of speech and free access to information as top priorities,” said Dan Bross, Senior Director of Corporate Citizenship at Microsoft. “The Best Practices document is a tool that can foster discussion within the community and advance efforts to combat harassment and threats online.” “Twitter supports the ADL’s work to increase tolerance and raise awareness around the difficult issue of online hate,” the company said in a statement. “We encourage the internet community to seek diverse perspectives and keep these best practices in mind when dealing with difficult situations online.” "Yahoo is committed to confronting online hate, educating our users about the dangers and realities, and encouraging our users to flag any hostile language they may see on our platform," Yahoo said in a statement. "As a member of ADL's Working Group on Cyberhate, we support ADL's efforts to promote responsible and respectful behavior online."
More information on the Best Practices is available on the League’s web site at www.adl.org/cyberhatebestpractices.
© The Anti-Defamation League
Members of Australia's Muslim community have set up a Facebook page to track religious hatred and discrimination.
22/9/2014- Amid increasing anti-Muslim sentiment coupled with anti-terror police raids, a Facebook page has been launched to track Islamophobia in Australia, encouraging the Muslim minority to report attacks on them. "We have been hearing about a recent surge in incidents of Islamophobia but unfortunately there has been no formal register to record the incidents," the page, Islamophobia Register Australia, said in a post seen by OnIslam.net. The page was launched last week, days before one of the biggest anti-terror raids in Australian history, in which 15 people were arrested in north-western Sydney. The page, which has attracted more than two thousand followers, urged Australian Muslims to report incidents by sending a private "message" to the page or by emailing it at firstname.lastname@example.org. Details such as full name, street address, city, state, post code, email address and contact phone number are required to submit a report, along with the details of the incident.
Victims of anti-Muslim sentiment also have to select the category of the attack from a list of Islamophobia incident categories provided by the page. A few days after the page was launched, several Islamophobic attacks were reported by Facebook users. The anti-Muslim attacks include "a mosque being defaced in Queensland, a senior scholar and member of the Australian National Imams Council detained for over 2.5 hours at Sydney airport, direct threats issued against the Grand Mufti of Australia," the page said. The list also includes threats against Lakemba Mosque and Auburn Mosque from anonymous members of the Australian Defence League, women in hijab verbally abused in the streets of Sydney, at shopping centers and whilst driving, and countless examples of social media vitriol targeting Muslims. The page itself has been a target for hate messages and Islamophobic posts since its creation on September 16.
Bracing to enact the new controversial anti-terror measures, the Australian premier Tony Abbott said that "Australians must accept a reduction in freedom and an increase in security for some time to come”. Addressing the parliament on Monday, September 22, Abbott urged Australians to back a shift in “the delicate balance between freedom and security”. “I can’t promise that hideous events will never take place on Australian soil, but I can promise that we will never stoop to the level of those who hate us and fight evil with evil,” Abbott was quoted by The Guardian. Away from freedom restrictions trends, other voices have called for fostering "integration" in the Australian community. "I believe in bringing people of different races, different religions, to this country but once you're here you've got to become part of the mainstream community," former Prime Minister John Howard told the Seven Network.
Premier Colin Barnett has taken a different direction, choosing to assure Muslims in Western Australia that they were welcome in the state. "Australia is a very welcoming country and a very peaceful country," Barnett was quoted by Sky News. "And the vast, vast majority of Muslims living in Australia are peace-loving, hard-working." Muslims, who have been in Australia for more than 200 years, make up 1.7 percent of its 20-million population. In the post-9/11 era, Australian Muslims have been haunted by suspicion and have had their patriotism questioned. A 2007 poll taken by the Issues Deliberation Australia (IDA) think-tank found that Australians basically see Islam as a threat to the Australian way of life. A recent governmental report revealed that Muslims are facing deep-seated Islamophobia and race-based treatment like never before.
© On Islam
16/9/2014- The Central Council of Jews in Germany has called on police to more thoroughly combat manifestations of anti-Semitism online. Speaking in an interview for the Bavarian newspaper Passauer Neue Presse, Dieter Graumann, the head of the Jewish organization, said many authors of anti-Semitic content use their real names online. The Council is convening a demonstration in Berlin this Sunday. "It would not be difficult at all to indict [the online anti-Semites]. Detectives must intervene consistently," Graumann said. "Sometimes the extent and the shamelessness of the incitement against us on the blogs makes me sick. Neither I nor other Jewish people in Germany have ever experienced being targeted with so much hatred and resentment here," Graumann said.
The Central Council of Jews in Germany is convening a demonstration in Berlin this Sunday to draw attention to manifestations of hatred toward Jewish people. Speakers at the Brandenburg Gate will include German President Joachim Gauck, German Chancellor Angela Merkel, chair of the German Conference of Bishops Cardinal Reinhard Marx, and Nikolaus Schneider, president of the council of the Evangelical Church in Germany. Thousands of people are expected for Sunday's demonstration, which will also be attended by other celebrities and politicians, including Vice-Chancellor Sigmar Gabriel, the chair of the Social Democrats. German Interior Minister Thomas de Maizière told news server Bild-online he will attend the demonstration because he wants "Jewish people to be glad they live in Germany."
One of Sunday's expected speakers, World Jewish Congress chair Ronald Lauder, warned Europeans yesterday that they subject their countries to the risk of tarnished reputations when they vote for ultra-right politicians. He also expressed concern that Islamists would attempt to use every means possible, particularly the internet, to incite hatred, and noted the threat posed by radicalized Muslims returning from Iraq and Syria. The Associated Press reported that the May elections to the European Parliament brought success to ultra-right parties, particularly in France. "One extremist representing a country puts the whole land in a negative light," Lauder told the AP.
According to the Deutsche Presse-Agentur, during Israel's offensive in the Gaza Strip this summer, demonstrations were held in Germany and other European countries during which verbal attacks on Israel and against Jewish people were repeatedly heard. Lauder said he believes anti-Jewish demonstrations have been attended by just a small percentage of European Muslims; what he finds very disturbing are "political agitators on the side of Muslim extremists who are doing their best to exploit every means possible, especially through the internet, to incite people." "We want to show people that we will not let ourselves be terrorized, we will not let them take our courage from us. The message is: Jewishness has a future in Germany," Graumann said of the upcoming demonstration in Berlin.
Supervisors at Facebook have come to a brief cease-fire with drag performers, agreeing to meet with a handful of them to discuss their policy requiring users to use legal names on profiles.
15/9/2014- Facebook reached out to San Francisco-based Sister Roma shortly after she announced a planned protest on her Facebook page. Roma said Facebook representatives spoke with her, agreeing to meet with her and members of the drag community. She wrote: “Just got off the phone with Supervisor David Campos and representatives from Facebook. They have agreed to meet with us and members of the community for an open dialogue regarding their legal name policy.” For now, Roma says the protest has been cancelled. Last week Facebook deleted and made inactive hundreds of profiles for users who used names that did not match the name on their driver’s license or birth certificate.
Sister Roma, of the drag house Sisters of Perpetual Indulgence, brought the controversy to social media when she posted to her Facebook account. She wrote: “In light of the new demand by Facebook that we use our ‘real’ names I am considering shutting down my personal page to concentrate on my ‘FAN PAGE.’ I update it an [sic] interact as much as I can but I detest the idea of having a fan page. I have friends not fans.” An online petition, started by Seattle-based Olivia De Grace, urged Facebook to relent on the policy, as it was hurtful to performers. She wrote: “We build our networks, community, and audience under the names we have chosen, and forcing us to switch our names after years of operating under them has caused nothing but confusion and pain.” At this time, Facebook has not released a statement as to whether they intend to change the policy.
© Pink News
14/9/2014- In response to recent comments posted on the social messaging app Yik Yak, students and faculty gathered again Tuesday outside of Waterfield Library in one of several scheduled events targeted at creating a campus-wide discussion on racism. Following Sept. 4’s peaceful protest against the shooting of Michael Brown, anonymous posters from around Murray took to Yik Yak posting racist comments regarding the gathering, some suggesting possible violent retaliation. In addition to signs regarding the recent events in Ferguson, Mo., newly-made signs with “stop racism” and “end racism” written on them were held by protesters Tuesday.
Arlene Johnson, senior from Sikeston, Mo., said she was shocked by some of the comments written after the first protest. Johnson made headlines as a freshman in 2010 when she spoke out against one of her professors, Mark Wattier, who, according to Johnson, compared her tardiness to class to the behavior of slaves. “And (Wattier) said the slaves never showed up on time, so their owners often lashed them for it,” Johnson told the Murray Ledger & Times in 2010. “It just hurts so bad to see that this still exists,” she said.
Kesia Casey, junior from Hartford, Ky., like Johnson, attended both protests. She said no one should ever feel comfortable posting comments such as those on Yik Yak, even if they’re anonymous. “A lot of people think the solution to this problem is to keep quiet and it will go away,” Casey said. “The solution, I believe, is to talk about it and become comfortable talking about it because it’s not going away.” A number of events are being planned to do just that – create dialogue between students, faculty, staff and the administration about racism on campus. These events include a forum about racism and the state of black students on campus presented by The Black Student Council, activities during International Education Week and a “teach-in” organized by select faculty.
David Pizzo, associate professor of Humanities and Fine Arts, attended the rally this week and said that while the protest isn’t going to make an immediate impact, it is key to generating momentum and to keep the discussion on racism going. “We don’t think we’re going to end racism by holding signs,” Pizzo said. “But people are noticing and stopping. One thing I can promise you is if people don’t do this, it’s just going to disappear.” Since the initial posts on Yik Yak, Murray State administration has responded by forwarding screenshots of the offending posts to the University attorney, director of Equal Opportunity and the University chief of police.
SG Carthell, director of Multicultural Affairs, said his office is focused on finding out where “gaps” with tolerance and acceptance are at Murray State. He said the comments on Yik Yak aren’t necessarily caused by a gap in institution policies or how the University is run, but caused by the environments that people come from and are exposed to. “Some of it comes from misinformation, some of it comes from just direct negative influences and biases,” Carthell said. He said administration, including himself, has an obligation to be seen at these types of protests and events on campus. “We have a responsibility to be here to show that, number one, (students) know that they’re safe, but two, that they know we’re hearing them and we listen to them,” Carthell said.
© Murray State Uni News
There have been multiple reports that Facebook is forcing gay and transgender users to use their legal names instead of their online personas or chosen names, as part of a crackdown on pseudonyms, despite the danger this poses to some users.
12/9/2014- Not that long ago, it looked as though Facebook might be softening its previous stance on real names, with comments from CEO Mark Zuckerberg that suggested he saw the value of anonymity in some cases — and at the same time, the social network has expanded the number of gender-related selections users have to choose from. Despite those moves, however, some gay and transgender users say the site is forcing them to use their birth names or have their pages blocked. According to the website Queerty, the network has been ordering gay users who registered using their drag personas to either set up a fan page or change to their legal name, and has been asking them to send copies of birth certificates and driver’s licenses to verify their identity. Queerty said it was alerted to the crackdown by Sister Roma, the drag persona of a gay man named Michael Williams, who has been forced to change his account to his given name.
What’s odd about the move is that Facebook put together a significant PR campaign earlier this year to promote the fact that it had changed the gender-related menu choices for users, offering more than 50 options for gay and transgender users — something it said was done after much consultation with gay and transgender advocates. In one article, a trans Facebook engineer named Brielle Harrison even talked about how important this option was for people like herself. Taylor Hatmaker at The Daily Dot says reports have been emerging from a number of gay communities that other users who registered under drag personas like Sister Roma are also being forced to change their names or risk losing their pages. Although setting up a fan page is an option, Hatmaker — who is gay — points out that this isn’t appropriate for many users, and that forcing them to do so or risk being shut out of Facebook altogether is unfair:
Presumably, Facebook wants to shoehorn these personal identities into Pages, like the ones brands and celebrities use. But for queer users more interested in keeping up with friends and building community than collecting followers, it’s an extremely poor fit. Facebook is making an implicit judgment call here, operating off of the hunch that an account in question is not the “true” identity of the user, which is an inappropriate position to begin with.
As Hatmaker and others like ZDNet columnist Violet Blue have noted, pseudonymity is not just a convenience for many gay and transgender users, but is something they are in many cases compelled to use because of threats of violence, or because revealing their identity could put their jobs at risk. Forcing them to use legal names essentially means forcing them not to use Facebook. As Jillian York of the Electronic Frontier Foundation pointed out during a discussion of the topic on Twitter, the action against Sister Roma and others may not be a sign that Facebook is actively targeting gay men or drag queens, but could be a result of complaints from those who do want to target those individuals, which Facebook then has to pursue. In any case, she says, the policy is unwise. Facebook and Google+ were both involved in a “real names” crackdown several years ago, saying their networks were designed for real identities and that pseudonyms made bad behavior more likely to occur. Google has since given up on its real-name policy for Google+, but it seems Facebook is still pursuing that goal — even though it may drive some users away.
Page littered with racial slurs, makes fun of people who are disabled or different
8/9/2014- A Facebook page that makes fun of Winnipeggers with photos has outraged several people, including a local activist who works with the homeless, but the page's administrator is adamant about keeping it running. The "People of Winnipeg" page has more than 17,000 members and contains posts that range from mocking people with disabilities to making racial slurs about multiple ethnic groups. Many of the images posted on the page show people passed out on the street and worse. "People [are] making fun of the homeless and the drunk people and just disgusting pictures that are developing," Althea Guiboche, who hands out food to the city's homeless every week, told CBC News on Monday. "I can't stand for that. I can't even believe Winnipeg people are taking part in that."
The page's administrator, Ricky Paskie, said he and the other people behind "People of Winnipeg" do their best to take down anything that could be deemed offensive, but with more than 200 posts a day that can be difficult. "If we do find that it's racist or indecent for people, it will be deleted," Paskie said. "That's not our goal … to make fun of anyone [who is] mentally ill, homeless." But Paskie noted that there are also plenty of posts that are positive about the city. "Our intent [is] just to show the funny things and the things you see in Winnipeg … whether it's someone dancing at the bus stop or a guy wearing a gorilla suit," he said.
'It's not right'
Britt James said she was mocked by people on the "People of Winnipeg" page after someone uploaded a photograph of her and another woman inside a medical clinic. "All I could think of was, 'How could you take a photo of somebody in a medical clinic?' You know, how is this even funny?" she said. "To have people you don't know publicly make fun of you, it's not right." James said the page's administrators initially would not remove the photo, and Facebook told her the image did not breach any of its terms. She said she took matters into her own hands by contacting the original poster's workplace and threatening legal action. The photo has since been removed, but James said the damage has already been done. "The majority of things, if you actually look on there, are hurtful, they're spiteful, they're rude. It's harassing, it's bullying," she said. Representatives from Facebook did not return calls seeking comment.
Facebook user responds
Jesse James, who has been a part of the Facebook group for more than a year, wrote a response to critics of the page saying, "This group is about posting pictures of people who are out in public doing crazy things that may seem very unrealistic but are very real. The group is about finding the funny moments that are right in front of us every day because we live in a crazy city. This group is not here for people to be racist." But Britt James said she wants the Facebook page removed or completely overhauled. She plans to meet with a lawyer. Guiboche, also known as the Bannock Lady, said she has been personally attacked on the Facebook page for speaking out against it, but she believes the page does not have to be shut down as long as it stops being a forum for racism and hurting the homeless. "It just goes to further dehumanize them," she said. "We don't need them publicly parading our homeless around for public comments. Why don't they just step up and help them instead?"
© CBC News
By Denise Oliver Velez
7/9/2014- How many times have you clicked on a news article or a blog piece, or watched a YouTube video, only to find that the comments section attached to it is a slime pit of some of the most vile racist comments imaginable? Too often, your response may be like mine has been at times—you shrug and decide, "I don't read comments." You click away, and move on to the next story, web page or video. You don't have to deal with the racism, because it's no longer on your screen. Back in December 2012, I wrote a post, Ending racism—one person at a time. Contrary to recent opinions I've read, as a black person in the United States, I don't think racism has gotten worse, nor do I believe we are "post-racial."
What I do believe is that the relative anonymity of the internet has allowed many people to express their racism openly, rather than behaving one way in public spaces where someone from the group they hate may be present. Now they can give full rein to thoughts that might garner them public censure or worse if this crap was said face-to-face. The fact that elected officials and the Teapublican Party tout their racism openly, with little or no pushback from their membership and only hypocritical "nopologies" when busted, has given license to "racists run wild" in cyberspace. Follow me below the fold for more.
Very few of the major traditional online news media sites have good comment moderation. The New York Times is probably better at moderation than most, and the most egregious spew never makes it through the waiting period there for posting. Things do sometimes slip through, but it is fairly easy to flag garbage, and the response, in my experience, is swift.
Recently while reading a Times piece on Michael Brown's murder and the ensuing events in Ferguson, I saw a comment, recommended by readers, that fit into the "but … but … but … Mike Brown was a thug" genre of post. It was a repost of a vile email and post making its way through racist networks that purported to show an arrest record for a Michael Brown. The problem is that the Michael Brown with the record was not the same Michael Brown whose life was cut short by Officer Darren Wilson. This is not to say that it makes one whit of difference if the deceased Michael Brown had or didn't have an arrest history—nothing in a person's background should excuse being executed. I'd already seen a debunking here on Daily Kos, and did some checking on my own. Weeks later, Michael Brown's lack of a record is now in the news and yet the smear campaign continues. When I read that story, and saw a recommended reader comment citing the false email, I flagged it, and when I checked back about an hour later it had disappeared. One small victory in a sea of cyber-hate.
The phenomenon of cyber-racism is an area that is currently being explored in academia. The book, Cyber Racism: White Supremacy Online and the New Attack on Civil Rights by Jessie Daniels, is a good place to start learning more:
In this exploration of the way racism is translated from the print-only era to the cyber era the author takes the reader through a devastatingly informative tour of white supremacy online. The book examines how white supremacist organizations have translated their printed publications onto the Internet. Included are examples of open as well as 'cloaked' sites which disguise white supremacy sources as legitimate civil rights websites. Interviews with a small sample of teenagers as they surf the web show how they encounter cloaked sites and attempt to make sense of them, mostly unsuccessfully. The result is a first-rate analysis of cyber racism within the global information age. The author debunks the common assumptions that the Internet is either an inherently democratizing technology or an effective 'recruiting' tool for white supremacists. The book concludes with a nuanced, challenging analysis that urges readers to rethink conventional ways of knowing about racial equality, civil rights, and the Internet.
In her book, Cyber Racism: White Supremacy Online and the New Attack on Civil Rights, Jessie Daniels discusses the common misconceptions about white supremacy online; its lurking threats to today's youth; and possible solutions for navigating the Internet, a large space where so much information is easily accessible (including hate speech and other offensive content). Daniels claims that although no one can say for sure how many white supremacy sites there are on the internet, the number is definitely rising (especially after Barack Obama's election in 2008), and a majority of them are fueled by people in the USA.
A review from the blog and website for the African and African American Studies course, "Exploring Race and Community in the Digital World," taught by Carla D. Martin, states:
Daniels lays out three threats that white supremacists pose online to the world:
Threat 1: The Internet provides easy access—she invokes the term "globalization"—and hence perpetuates "translocal whiteness": a white identity that is not bound by geography.
Threat 2: Some white supremacist sites that subject minorities to the "white racial frame" incite danger and violence in real life.
Threat 3: Through the nature of its medium, the Internet tends to equalize all sites, rendering what used to be intensely personal and political views in the 1960s into a modern-day matter of personal preference.
Some of you may know Professor Daniels' work from the blog, RacismReview, which she co-founded with sociologist Joe Feagin.
Hate speech on the internet has become an issue of global concern, addressed by the United Nations, and groups like the International Network Against Cyber Hate, which is sponsoring a conference in Belgium in October. While I've focused on online racism in this post, the same problem exists for sexism, misogyny, homophobia, transphobia, ethnocentrism, antisemitism, anti-Islam, anti-immigration, and all the other "isms" and haters. No one person alone can counteract the tidal wave of hate that swamps many websites. But each individual can help stem the flood. I frequently read grumbles here at Daily Kos about the moderation system. Frankly, at a site that gets an enormous number of hits daily and thousands of comments, the incidents of racism that get a free pass here are minimal in comparison to other major sites. Trolls who make an account simply to spew are swiftly ban-hammered. The racist remarks from people who have a longer tenure here (yes they occur—no community, no matter how progressive, is immune to racism) are also fought against and hidden, and repeat offenders usually find themselves no longer welcome here. Does that mean we can't do better? No.
I believe that there are "more of us" than of them (the haters) but I also think that many of us have found it too easy not to push back, having determined that it is a thankless and/or futile task. Black Panther Party Minister of Information Eldridge Cleaver once said, "You're either part of the solution or you're part of the problem." I agree with Eldridge. Since I'm on the net every day, searching for news sources for articles and for material to use in my classes, I've set myself a daily quota of pushback. I do about 10 per day (not counting efforts here at Daily Kos). It doesn't take up a lot of my surfing/reading time. I don't really participate much in Facebook or Twitter, other than to push a "post" button from here, but there are news outlets with comments sections I do use frequently. I also use video a lot, both here and for school. As disgusting as comments are on YouTube, they are pretty easy to flag, and to vote down.
"A few keystrokes a day can drive racism away" is my new motto.
© The Daily Kos
by Kilian Melloy
4/9/2014- If you're familiar with a feeling of helpless rage and frustration at vile anti-gay postings on Facebook, reader comments sections of online news outlets, and discussion threads around the Internet, you know what it's like: the sheer hate from venom-filled comments hurled across the digital medium leaves you sore and bruised. It feels, in other words, not so different from a physical attack. It turns out that sense of harm isn't just imaginary. A new study indicates that minorities of all sorts -- including racial and sexual minorities -- suffer measurable harm when subjected to hate speech in social media.
The study is the work of researchers at Sapienza University of Rome and the Institut National de la Statistique et des Études Économiques du Grand-Duché du Luxembourg, a Sept. 3 posting at The Advocate reported. The study, titled "Online Networks and Subjective Well-Being," purports to "test [its] hypothesis on a representative sample of the Italian population," and finds a "significantly negative correlation between online networking and well-being." The study concludes that GLBTs and other minority individuals experience "anxiety, distress, and deterioration in trust" when exposed to hate speech in threads and posts online.
It's not just the case that members of minority groups are faced with hateful messages left for a general readership by bigots; just as bad, or worse, are the effects on minorities who speak up online and are then targeted for hate speech. The researchers noted a tendency for the distance of online communication to strip away the veil of civility, with hate messages taking on a particular virulence. "In online interactions, dealing with strangers who advance opposite views in an aggressive and insulting way seems to be a widespread practice, whatever the topic of discussion is," The Advocate quoted the report as saying.
The phenomenon of social media serving as a platform for anti-gay bullying among students has played a central role in the narrative about how GLBT youth suffer. But anti-gay animus affects adults, too. Furthermore, it's not necessary for sexual minorities to encounter undisguised hate speech online for their health to suffer; previous studies have uncovered evidence to suggest that simply living in an environment where one's legal status is called into question, such as states where marriage rights have been put to a popular vote via ballot initiatives, burdens GLBT individuals with higher levels of stress and anxiety.
But even in the absence of such an animosity-charged political climate, low-level and pervasive anti-gay stigma can have similar effects. In 2011, a study from the Williams Institute at the UCLA School of Law concluded that "stigma and social inequality can increase stress and reduce well-being for LGB people, even in the absence of major traumatic events such as hate crimes and discrimination."
Kilian Melloy serves as EDGE Media Network's Assistant Arts Editor, writing about film, theater, food and drink, and travel, as well as contributing a column. His professional memberships include the National Lesbian & Gay Journalists Association, the Boston Online Film Critics Association, and the Boston Theater Critics Association's Elliot Norton Awards Committee.
© Edge on the Net
4/9/2014- Antisemitic reactions to this summer’s conflict between Israel and Hamas resulted in record levels of antisemitic hate incidents in the UK, according to new figures released by CST today. CST recorded 302 antisemitic incidents in July 2014, a rise of over 400% from the 59 incidents recorded in July 2013 and only slightly fewer than the 304 antisemitic incidents recorded in the entire first six months of 2014. A further 111 reports were received by CST during July but were not deemed to be antisemitic and are not included in this total. CST has recorded antisemitic incidents in the UK since 1984.
The 302 antisemitic incidents recorded in July 2014 is the highest ever monthly total recorded by CST. The previous record high of 289 incidents in January 2009 coincided with a previous period of conflict between Israel and Hamas. CST also recorded at least 150 antisemitic incidents in August 2014, making it the third-highest monthly total on record. The totals for July and August are expected to rise further as more incident reports reach CST. 155 of the 302 incidents recorded in July (51%) involved direct reference to the ongoing conflict in Israel and Gaza. All incidents require evidence of antisemitic language, targeting or motivation alongside any anti-Israel sentiment to be recorded by CST as an antisemitic incident.
101 antisemitic incidents recorded in July involved the use of language or imagery relating to the Holocaust, of which 25 showed evidence of far right political motivation or beliefs. More commonly, reference to Hitler or the Holocaust was used to taunt or offend Jews, often in relation to events in Israel and Gaza, such as via the Twitter hashtag #HitlerWasRight. 76 of the 302 incidents in July (25%) took place on social media. CST obtained a description of the offender for 107 of the 302 antisemitic incidents recorded during July 2014. Of these, 55 offenders (51%) were described as being of south Asian appearance; 32 (30%) were described as white; 15 (14%) were described as being of Arab or north African appearance; and 5 (5%) were described as black.
There were 21 violent antisemitic assaults recorded by CST, none of which were classified as ‘Extreme Violence’, which would involve a threat to life or grievous bodily harm (GBH). None of the 21 assaults resulted in serious injury. There were 17 incidents of Damage & Desecration of Jewish property; 218 incidents of Abusive Behaviour, which includes verbal abuse, antisemitic graffiti, antisemitic abuse via social media and one-off cases of hate mail; 33 direct antisemitic threats; and 13 cases of mass-mailed antisemitic leaflets or emails. CST recorded 179 antisemitic incidents in Greater London in July 2014, compared to 144 during the whole of the first half of 2014. There were 52 antisemitic incidents recorded in Greater Manchester, compared to 96 in the first six months of the year. 71 incidents were recorded in other locations around the UK during July.
CST spokesman Mark Gardner said:
These statistics speak for themselves: a record number of antisemitic incidents, few of them violent, but involving widespread abuse and threats to Jewish organisations, Jews in public places and on social media. It helps to explain the pressures felt by so many British Jews this summer, with its combination of anti-Jewish hatred and anti-Israel hatred. The high proportion of offenders who appear to come from sections of the Muslim community is of significant concern, raising fears that the kind of violent antisemitism suffered by French Jews in recent years may yet be repeated here in the UK. CST will continue working with Police and Government against antisemitism, but we need the support of others. Opposing antisemitism takes actions not words. It is particularly damaging for public figures, be they politicians, journalists or faith leaders, to feed these hatreds by comparing Israel to Nazi Germany or by encouraging extreme forms of public protest and intimidation. Prosecutors also have their part to play. Those who have used social media to spread antisemitism are identifiable and should be prosecuted.
© CST Blog
4/9/2014- The Dutch security service AIVD has broken privacy laws in its research into social media and wrongly hacked into website forums to gather information on all users, regulators said on Thursday. The report by the security service regulator CTIVD says five hacking operations were not properly justified and were therefore unlawful. The hacks were carried out on behalf of foreign security agencies, the NRC reported. In four other investigations, privacy regulations were broken disproportionately, the CTIVD said.
These were large general web forums without a radical or extremist tint, and the privacy of a large number of ordinary citizens was wrongly invaded. The CTIVD did not mention any forums by name, but the NRC cited the website Maroc.nl as a possible example. At the time the NRC first reported on the scandal, home affairs minister Ronald Plasterk said the hacking was within the law. Reacting to the findings, Plasterk said he had instructed the AIVD to tighten up its procedures but stressed that research into websites is necessary to allow the security service to do its job properly.
CTIVD39Toezichtsrapportsocialemedia.pdf (543,04 KB) (PDF full report in Dutch)
© The Dutch News
New laws urgently needed to protect vulnerable communities, Limerick academics conclude
2/9/2014- Ireland urgently needs new laws to protect vulnerable communities from hate crime, according to a report being launched today by University of Limerick experts. The study proposes the creation of new offences and the passing of longer sentences for assault, harassment, criminal damage and public order crimes motivated by hostility, bias, prejudice or hatred. “The absence of hate crime legislation in Ireland is a glaring anomaly in the European context, and indeed across the West,” the report states. “Without it, Ireland stands virtually alone in its silence with respect to protecting vulnerable communities from the harms of this particular form of violence.”
Labour Senator and legal academic Ivana Bacik, who will launch the Life Free From Fear report today, said the study showed hate crime was a “very real phenomenon in Ireland today”. The academic experts surveyed 14 non-governmental organisations which advocate for various groups of people including those with disabilities; ethnic minorities; religious minorities; the LGBT community and prisoners. Along with sexual and verbal abuse, they reported instances of physical violence and harassment, while negative use of the internet was also highlighted.
The report proposes fresh legislation to create four new offences all aggravated by hostility: assault, harassment, criminal damage and public order. Alongside the new offences, the introduction of a sentence enhancement provision is recommended under which hostility, bias, prejudice or hatred would be treated as aggravating factors in sentencing. “We propose that legislation be introduced as a matter of urgency,” the report states. The study also recommends amending the Criminal Law (Sexual Offences) Act 1993 to cover cases of sexual offences against disabled people.
It says Ireland should deal with the criminalisation of acts of a racist and xenophobic nature committed through computer systems by signing and ratifying the additional protocol to the convention on cybercrime. Ms Bacik said people in Ireland were targeted because of characteristics including sexual orientation, race, religion, disability and age. “The report shows that the current legal regime is incapable of addressing hate crime, and that legislative change is required. Crucially, the report also presents useful proposals for the appropriate legislative model, and this is particularly welcome,” she said.
The report acknowledges the difficulty in identifying specific communities that are potential victims of hate crimes. However, among the groups the report names as having historically been targets of abuse and discrimination in Ireland are the Traveller community; single mothers; non-Catholics and members of the LGB (lesbian, gay and bisexual) community. More recently, the report suggests, the categories of race, national origin, trans people and ethnic origin could be included. “The authors would regard this list as still incomplete however,” the report states.
The authors of the report are Jennifer Schweppe of the School of Law and Dr Amanda Hynes and Dr James Carr of the Department of Sociology at the University of Limerick. They are members of the Hate and Hostility Research Group (HHRG), which was set up by academics in the University of Limerick with the aim of initiating scholarship in the area in Ireland.
© The Irish Times