- Canada: Police investigation into racist Facebook pages continues
- UK: Kids 30 times more likely to be victim of hate crimes over internet
- UK: Portsmouth City Council criticised over racism investigation into Facebook posts
- Germany: Dangers that lurk on the Internet
- Google Says You Can Do What It Can't: Beat Online Anti-Semitism
- Cyberhate, anti-Semitism discussed at Jerusalem forum
- Israel Jails Palestinian Who Applauded Militant Attacks on Facebook
- USA: Study finds link between N-word Google searches, black mortality
- USA: How YouTube and FB ruined this principal’s chances of denying her racism
- Austria: Child abuse images deface Nazi Mauthausen camp website
- USA: Mother speaks out about Cyber-bullies
- Sexism and Racism in Video Games
- Steam Greenlight highlights hate-crime game, 'Kill the F*ggot'
- UK: Students schooled in cyber safety
- White supremacists stole my identity to spew hatred on the Times of Israel
- Czech Rep: Report: Last year saw rise in anti-Semitic incidents, on internet by 20%
- UK: Ku Klux Klan racist jailed again for far right rants on Facebook
- Czech Rep: Brno university opens centre for cyber attack defence training
- USA: Ignoring hate speech won't silence it (opinion)
- Poll: Most Canadians ignore hateful, racist internet speech
- WJC Latin America launches website to combat Holocaust denial
- YouTube Creators Questioned About Racism (opinion)
- Italy: Facebook blocks Salvini over 'gypsies' slur
- EU spends millions to make next Facebook European
- Facebook tracking said to breach EU law
- UK: Troll who abused disability campaigner reported to police
- UK: Newport man fined for posting racist comments on Facebook
- Slovakia: Government taking note of online extremism
- EU commissioners at odds over geo-blocking
- Canada: Facebook page targeting Winnipeg aboriginal people pulled down
- Website blocking in France; other anti-terrorist legislation in OSCE countries may curb free expression
- Why Facebook is finally baring all over social media standards
- European Imams begin campaign against Online Extremism
- USA: Twitter users attack 'sell-out' black celebrities
- Internet inspires US far-right extremists to carry out 'lone wolf' terror attacks
- Facebook adds clarification to nudity and “offensive content” rules
- Facebook cracks down on hate speech in new community guidelines
- UK: 15-year-old boy held over tweet targeting Arsenal striker
- France blocks five sites for 'condoning terrorism'
- UK: 'Sinister' far-right website RedWatch calls for information on anti-Pegida marchers
- Canada: Cyberstalker left trail of terror, court hears
- Court overturns Dutch data retention law, privacy more important
- Nine out of 10 Dutch people use the internet on a daily basis
- Netherlands: Three fined for racist comments on Facebook about footballer selfie
- How Reddit Became a Worse Black Hole of Violent Racism than Stormfront
- Reddit CEO: aim of Silicon Valley sex discrimination suit to end 'boys' club'
- When the Internet Breeds Hate
- UK: We take Facebook racism allegations seriously – Portsmouth councillors
- UK: Nazi-affiliated website appeals for personal information about people 'Newcastle Unites'
- UK: Trolling: Internet ASBOs are NOT a deterrent, says criminal justice spokesman
- USA: FCC approves net neutrality rules, classifies broadband as utility
- USA: Preparing for a court battle, FCC is confident net neutrality will survive
- USA: The FCC's net neutrality rules: 5 things you need to know
- France: France prepares for war against online hate speech
- Majority of French are for measures against online hate speech
- Denmark: Free speech threatened, but not by Islamists (opinion)
A police investigation launched nearly two months ago into racist social media pages continues.
20/5/2015- Thunder Bay Police Service spokesman Chris Adams on Tuesday confirmed officers are still looking into a series of Facebook pages that have popped up, which police previously labelled as “extreme racism” and said appeared to target the city’s Aboriginal community. “That investigation is still ongoing. Anything that involves an Internet-based organization such as Facebook is going to take time,” he said. “It’s still on the radar for us.” While he did not want to speculate on the outcome, possible charges could include harassment or even hate crime.
It’s not a cut-and-dried case.
“In general, these are tough investigations because of the layers and the fact you’re dealing with organizations that are very much international in scope and you have to weigh that against the right of free speech and the ability to be photographed when you’re out and about,” Adams said. “There are a lot of issues that are tied to it and the laws that are in place don’t always address those. Let’s face it, technology outstrips everything we’re used to right now and we’re still coming to grips on how to keep up with it.”
© TB News Watch
Manchester's school kids are more than thirty times more likely to become the victims of hate crimes over social media than in person.
17/5/2015- A Freedom of Information request submitted to Greater Manchester Police produced some harrowing statistics on reported hate crimes relating to social media throughout the region’s schools. It is a well-documented development that sites and apps like Facebook and Snapchat are the new battleground for young people, but in a society of increased inclusion and tolerance, these figures indicate a worrying trend. The findings showed that across the last 12 months, 45 direct hate crimes had been reported across Greater Manchester’s primary and secondary schools. In contrast, a staggering 1,391 cases were reported which related to social media, including Facebook, Twitter, Snapchat, eBay, Instagram and even Tinder.
Andrew Bolland, from Stop Hate UK, told MM: “Our service has seen a growth in the use of social media as a means to direct hostility and abuse. “The underlying issue is that perpetrators believe that they are invisible and would be impossible to trace, not realising that it is regarded as seriously as direct abuse from a criminal justice perspective.” A hate crime is defined as any criminal offence which is perceived by the victim or any other person to be motivated by hostility or prejudice based on characteristics or perceived characteristics defined within five monitored groups, also known as ‘hate motivations’. These five groups are disability, race, religion, sexual orientation and transgender identity, but GMP also records crimes motivated by someone's perceived or actual alternative subculture identity. One of the concerns that naturally accompany hate crimes via social media as opposed to direct cases is the potential ripple effect of prejudicial messages to an almost infinite network.
Mr Bolland explained: “A big problem is the ‘mushroom of media’. If someone has a hundred friends, the abusive message spreads through a vast network of people. “It only takes a small number of people to see that message without a proper understanding of the issue, and suddenly it increases and escalates.” He suggested that those convicted of such crimes could be brought together with victims, so as to create a mutual understanding and to make perpetrators aware of the impact their behaviour has on people’s lives. But first and foremost, tackling the issue starts with preventative action and ironing out misguided prejudices through education. He added: “Action needs to be taken to educate younger people early on the issue of discrimination, particularly with regards to social media, and prevent these situations arising in the first place. “We need to try to reach them through schools, youth groups and so on.”
© Mancunian Matters
A council has come under fire for its handling of an investigation into racist messages which appeared on a taxi boss’ Facebook page.
15/5/2015- Viv Young, who has a share in Portsmouth cab firm City Wide Taxis, has been allowed to hold on to his hackney licence despite Portsmouth City Council being made aware of a stream of abusive messages which surfaced on his account:
I have no grievance with anyone. I have sponsored a little girl in Africa for years.
Taxi trade official and City Wide Taxis shareholder Viv Young
The News reported the shocking online posts to the authority, which included: ‘I was driving me cab today and picked up a tribe of, shall we say (not typically English coloured people). ‘I was wondering by having them in my cab, am I leaving myself open to catch malaria, cholera, dengi fever, and of course tics??????’ Another post of a man wearing a burkha while holding a can of lager was uploaded – with each getting likes from Facebook users – while another message called Muslims ‘nonces’. The council’s licensing team looked over the evidence presented to it, before a committee at a meeting decided no action would be taken against Mr Young, saying the Facebook messages were private and that no complaints had been made by members of the public.
At the hearing, Mr Young insisted his Facebook profile had been set to private – which The News knows not to be true as the messages were viewed by reporters before Mr Young deleted his account. Questions have now been raised as to why the council did not look further into whether the posts were public – and why it needed further complaints to take action. Jabeer Butt, deputy chief executive of The Race Equality Foundation said: ‘I don’t know a great deal about how the taxi trade is regulated, but part of those regulations states the safety of customers is paramount. ‘It seems odd Portsmouth City Council has managed to carry out an investigation where someone has repeatedly posted such statements about these customers, and concluded no action should be taken. ‘Clearly, there are some terrible things said on the internet and it’s become a haven for people to make very abusive comments.’
Lib Dem Councillor Gerald Vernon-Jackson said: ‘This is not acceptable. ‘It’s pushing the onus on to the victim to put in a complaint. ‘It’s not right people can be putting out racial abuse – then doing this is saying “it’s all right”.’ Mr Young, who received the support of other taxi drivers at the meeting, many of whom were from ethnic minority backgrounds, said some of the posts on his Facebook wall appeared as they had been shared by other users and had not directly been written by him. And Mr Young defended himself by making references to Nazi leader Adolf Hitler’s book Mein Kampf. He said: ‘Have you read Mein Kampf? If you don’t want to read it then don’t – it’s not compulsory. My Facebook (page) was personal – if you don’t like it then don’t look or delete me.’
In his statement at the meeting, overseen by Cllrs Hannah Hockaday, Lee Mason and Sandra Stockdale, Mr Young said: ‘This is a News witchhunt. ‘Some parts of the postings may have been taken out of context. ‘There were no poppy burnings or pig beheadings. I have no grievance with anyone. I have sponsored a little girl in Africa for years. ‘Portsmouth City Council councillors and officers come under the political umbrella and speak as such – I use industrial language. ‘I’m surprised and disappointed by reactions, this should never have got this far. ‘There have been no complaints from the public. I am one of the best drivers in Portsmouth. In my opinion there’s no case to answer.’
Cllr Hockaday, committee chairwoman, said: ‘The comments made were private. Mr Young has friends from a diverse range of backgrounds. ‘We will renew his licence but will make it clear that if there’s any recurrence of this then we will revoke it in the future.’ She also said the comments were ‘disappointing’ and could be seen as disrespectful by members of the wider community. ‘You don’t have to wait for someone to complain to take action on this,’ she added. Cllr Hockaday was unavailable for comment following the hearing to explain the decision further.
© The Portsmouth News
Suicide pacts, dates to hunt down homosexuals or to torture homeless people: For children and teenagers, danger lurks not in the streets, but on the Internet, a German child welfare organization says.
13/5/2015- The Internet plays a huge role in most children's lives - also here in Germany. Young people these days are increasingly online with their smart phones, and often enough, they end up on websites that weren't necessarily designed for them. The German youth protection website "jugendschutz.net" has documented what youngsters are likely to come across on the Internet. The organization, which is linked to the Commission for Youth Media Protection (KJM), monitors the Internet for content harmful to minors. "Jugendschutz.net" gives parents and teachers guidance; for instance, in a newly updated booklet entitled "A net for children – Surfing without risk", commissioned by the Family Ministry in Berlin (BMFSFJ). The organization's 2014 annual report takes a close look at mobile communication, and points out the most problematic issues:
1. Risk of self-inflicted harm. Always on the lookout for cool things to do, even elementary schoolchildren get together to collectively swallow a mix of baking-powder and vinegar - trivialized as a dare, it's a lethal, explosive cocktail in the children's stomachs. Glamorizing anorexia is another fad, even to the point of virtual hunger Olympics: Who weighs even less?
2. Announcing suicide. In 13 cases, life-threatening situations forced "jugendschutz.net" to call the police. Sometimes, people seek partners for a suicide pact via the Internet; the young people also discuss procedure: whether it's better to jump from a tall building or in front of a train.
3. Sexual posturing. Photos and videos of teenagers in sexualized poses are particularly frequent. Such content is almost always generated through foreign servers, mainly Dutch, US and Russian. Often these pages can be deleted, but usually only after five days.
4. Sexualized violence via download. In 2014 alone, "jugendschutz.net" took action 1,168 times against cases of portrayals of sexual abuse in the net - significantly more than the previous year.
5. The lure of jihad. Political extremism also has a toehold on the Internet, and has long since discovered children as a target audience. Islamist videos do a brisk business: they use music and quick cuts, which comes across as professional and caters to what adolescents are used to seeing. In "Flames of War", the jihad is portrayed as an adventure for teenagers - this film and others are widely distributed via YouTube and Facebook.
6. Hunting down "the other". Videos that show violent attacks on gay or homeless people, or drug addicts, can easily get more than 100,000 clicks. Some show systematic torture. Clips by the neo-Nazi Okkupay-Pedofilyay movement, founded in Russia, are particularly notorious. The videos are circulated at a furious pace on Facebook, YouTube and via the Russian VK network.
7. Rightwing Muslim-bashing. Social media are a key platform for the far right, where they ridicule mainly Muslims, and where they liken them to athlete's foot and trash. The bashing follows a pattern: the more provocative the insult, the more clicks it gets. It's a snowball effect.
© The Deutsche Welle.
At Jerusalem Global Forum for Combating Anti-Semitism, Google policy chief explains how users are key in the fight against online hate.
13/5/2015- Internet giant Google is making strides to combat online anti-Semitism, but executives insist the most effective way to counter hate online is for activists to create an effective counter-narrative. Speaking in Jerusalem at the 5th Global Forum for Combating Anti-Semitism, Juniper Downs, Senior Policy Counsel for Google US, said her company has teams working "24 hours a day" to locate and remove hate content on platforms such as YouTube. Offending items - including anti-Semitic propaganda - are regularly removed, and users responsible for "particularly egregious" posts are often banned altogether, she said. The sophisticated systems Google employs include a "turbo-charge flagging" system, which allows developers to identify "trusted flaggers" - users with a good track record of flagging up inappropriate material.
That system is a crucial tool in ensuring the system can't be abused, as users will often simply flag items they just don't like; for example, Downs noted that the most-flagged YouTube video is a fairly innocuous music video by pop singer Justin Bieber, whose off-stage antics have made him a deeply unpopular figure, despite the video itself being totally inoffensive. Context is also key, she added. In the past, some media watchdogs and anti-hate groups - including the Middle East Media Research Institute (MEMRI), which monitors Arab and other Middle Eastern media - have found themselves subjected to temporary bans, after users flagged "hate speech" in videos which were in fact meant for the purpose of exposing incitement.
Downs said Google now ensures footage containing hate speech or other forms of anti-Semitism or racism aren't removed if they are included for "documentation or condemnation" of bigotry - an important distinction to avoid collateral damage against groups fighting to expose online hate. But while Google is working hard to remove online incitement, Downs also emphasized that ultimately, simply removing the videos was just one part of the fight against anti-Semitism and hate-speech. "We know that on our platform there are pieces of content which just go too far, and we take them down," she said. But more sophisticated anti-Semites have found ways of getting round the hate speech rules, by treading the grey line between racism and offensive - but permissible - material.
Moreover, removing hate-speech only deals with the tip of the iceberg, but doesn't address or combat the poisonous narratives which fuel the hate. "We’re troubled by the hate speech we find on our platform, but we're even more troubled by the fact that it represents sentiments which still exist today," she said. To tackle the problem at its root, users have to get more involved in actively countering the narratives of hate, Downs insisted. "Counterspeech is the most effective strategy in doing the real hearts and minds work," she stated. Paraphrasing Prime Minister Binyamin Netanyahu, who spoke at the opening evening of the Forum on Tuesday night, Downs compared what she terms "counterspeech" to lighting candles in the dark. "It's not a matter of lighting a single candle - we need millions of candles" to dispel the darkness of racism, anti-Semitism and other forms of bigotry online.
Educational videos providing facts and dispelling the arguments of anti-Semites are one tactic often used effectively, but Downs noted that some of the most popular anti-hate videos were those using satire. "Satire is a common and effective way of countering hateful views," she said. To help mobilize users, Google has been conducting offline events, bringing successful YouTube video producers to coach activists on how to be most effective in getting their message across. It hopes those events could ultimately empower users to do what the multinational technology behemoth can't - neutralizing the narratives of hate at their source and shutting down the demand for such material. The Global Forum for Combating Antisemitism is held every other year, under the auspices of the Foreign Ministry and the Ministry for Diaspora Affairs. It allows experts on anti-Semitism from around the world to meet and share ideas over three days.
© Arutz Sheva
The biennial Global Forum for Combating Anti-Semitism issued statements recommending steps for governments and websites to reduce cyber hate, and for European governments to reduce anti-Semitism.
14/5/2015- “Given the pervasive, expansive and transnational nature of the internet and the viral nature of hate materials, counter-speech alone is not a sufficient response to cyber hate. The right to free expression does not require or obligate the internet industry to disseminate hate materials. They too are moral actors, free to pursue internet commerce in line with ethics, social responsibility, and a mutually agreed code of conduct,” read a statement issued Thursday night in Jerusalem by the Forum. Among the recommendations to Internet providers: to adopt a clear industry standard for defining hate speech and anti-Semitism; adopt global terms of service prohibiting the posting of such materials; provide an effective complaint process and maintain a timely and professional response capacity; and ban Holocaust denial sites from the Web as a form of egregious hate speech.
Recommendations to governments include: establishing a national legal unit responsible for combating cyber hate; making stronger use of existing laws to prosecute cyber hate and online anti-Semitism, and enhancing the legal basis for prosecution where such laws are absent; and adopting stronger laws and penalties for the prohibition of Internet materials promoting terrorism and supporting recruitment to terrorist groups. The forum also addressed the upsurge of anti-Semitism in Europe. “European institutions and governments need to take strong proactive steps to address the current outbreak of anti-Semitism in order to assure the continued vibrancy of Jewish communal life in Europe,” read a statement issued Thursday.
Among the recommendations for combating anti-Semitism: adopt a formal definition of anti-Semitism, applicable throughout the European Union and its member states under law, that includes reference to attacks on the legitimacy of the State of Israel and its right to exist, and to Holocaust denial, as forms of anti-Semitism; apply agreed standardized mechanisms for monitoring and recording incidents of anti-Semitism in all E.U. countries; take urgent and sustained steps to assure the physical security of Jewish communities, their members and institutions; and direct education ministries to increase teacher training and adopt pedagogic curricula against anti-Semitism and towards religious tolerance and Holocaust remembrance.
The three-day conference hosted a panel of prominent Muslim leaders and imams from Europe who came to speak out about anti-Semitism in Europe. The opening of the conference featured addresses by the mayor of Paris and the German justice minister.
© JTA News.
12/5/2015- An Israeli court on Tuesday sentenced a Palestinian for incitement and for supporting a terrorist organization based on Facebook posts that applauded militant attacks, his lawyer said. It was a rare case in which statements on social media were regarded as a crime. The defendant, Omar Shalabi, 45, a father of six from East Jerusalem, was sentenced to nine months in jail for 10 posts to his 5,000 friends and 755 followers that urged them to undertake “violent acts and acts of terrorism,” said the Hebrew-language indictment. Legal rights groups said it was unusual for an Israeli court to accept speech on social media as a basis for conviction. But they said that in recent months the Israeli police had detained several Palestinians from East Jerusalem and Arab citizens of Israel for incitement over comments made on their social media networks.
Mr. Shalabi’s posts included a photograph of a Palestinian man who was killed after he plowed his car into a group of pedestrians in Jerusalem, killing a baby. Four days after the attack, he wrote, “Hundreds of Jerusalem’s men are rising from their graves, and from under the hands of deprivation to cheer the soul of the martyr,” according to the indictment. Another post praised two cousins who had stormed into a Jerusalem synagogue in November, killing five men. Mr. Shalabi posted the photographs of the attackers and wrote: “Ask death to grant you life; glory is bestowed upon the martyrs.” “These posts motivated other Facebook users who shared the inciting contents with their friends and followers, who in turn supported the posts by pressing the “like” button,” the indictment said. “The mere use of this media, as the defendant has done, serves as a severe act, given the extensive circulation of the messages, as well as the ease with which these messages spread.”
Mr. Shalabi’s lawyer, Tariq Bargouth, said the prosecution never established that Mr. Shalabi’s posts had encouraged any specific militant attack. There have been a series of so-called lone-wolf attacks in Jerusalem, in which Palestinian men, without any political backing or leadership, attack Israeli civilians or security officers. Avner Pinchuk, a lawyer with the Association for Civil Rights in Israel, which follows freedom of speech cases, said it was the first time he had heard of “incitement to terror in social media” ending in a jail sentence. Majd Kayyal, the media coordinator for Adalah, an organization that pursues the legal rights of Palestinians in Israel, accused security services of a double standard, saying they had not cracked down on Israeli Jews for incitement to violence online. He said his organization had tracked officials from the police and ambulance services who had encouraged violence against Palestinians on their Facebook pages, without punishment.
Mr. Kayyal said he also feared government officials were using the word “incitement” too loosely, saying they had to “prove a relation between what was written, and an incident that happened in reality.”
© The New York Times
11/5/2015- There is a connection between Google searches for the N-word and regional black mortality rates, according to a study by a university professor published in PLOS One last month. “We found that areas with a greater proportion of searches containing the N-word had not only a higher black mortality rate but also a greater gap in the black-white mortality rate,” David Chae, the study’s lead researcher and an epidemiology professor, wrote in an email. Researchers analyzed mortality rates from leading causes of death among blacks, including heart disease, cancer, stroke and diabetes, and adjusted for other relevant factors such as age, sex, the percent of the black population, levels of education and poverty. They also examined the gap between black and white mortality rates in each of the 196 areas assembled by the National Center for Health Statistics.
According to the study, the researchers found that each one standard deviation increase in area racism equaled about an 8.2 percent increase in the black mortality rate. Measuring racial attitudes can be tricky, Chae wrote. Past methods have included surveys, but he wrote that those can yield subjective results, as people are more likely to self-censor their more “socially unacceptable beliefs.” “Racism is a public health issue,” Chae wrote. “This study adds to evidence that racism is a social toxin that increases susceptibility to disease and generates racial disparities in mortality. It also points to the utility of using Internet-search-based measures to monitor racism at the area-level and assess its impact on health outcomes.”
Rashawn Ray, a sociology professor at this university who teaches a class about modern perceptions of race, said Chae’s study provides a link between the words people use and prejudicial attitudes and behaviors. “This study shows that words actually matter, and words have detrimental effects on the life outcomes of blacks,” Ray said. “Because people who use racist words seem to also be likely to hold prejudiced attitudes and exhibit discriminatory behavior.” Though this state doesn’t show the same high level of racism as some other areas in the country, Chae’s study showed that racism still exists here too, said Stephen Thomas, health services administration professor and director of this state’s Center for Health Equity.
The findings serve as evidence of racial discrimination across America, Thomas said. Racism persists in housing, employment and the criminal justice system, Chae wrote. But some forms of racism are more subtle — Internet searches might reflect the hidden instances of racism, Chae said. “In a democracy like ours, a country made up of people seeking freedom from around the world, you should not be able to predict my life expectancy by my zip code,” Thomas said. “You should not be able to predict my quality of life based on a geographic map of people Googling the word ‘n-----.’”
Thomas said he hopes Chae’s work helps people understand how prevalent racism is in the U.S. “The consequences of that word impacts people’s life and longevity,” Thomas said. “What Dr. Chae’s work shows to someone looking at that map is there are more Baltimore uprisings to come, more Fergusons to come unless we do something now to start ending racism in America.”
© The Diamondback
There are plenty of outright racists who proudly own their bigotry and hate — you can find them in any corner of the internet. And then there are those who seem to think they should be able to express their messed-up views, be taken at their word when they half-apologize or try to explain them away, and suffer no criticism or repercussions.
11/5/2015- For the latter group, the internet has made living their dream increasingly hard. Georgia high school principal Nancy Gordeuk is the latest example of that. Video of the commencement ceremony at TNT Academy, a small private school in Stone Mountain, Georgia, shows her mistakenly dismissing the crowd before the valedictorian's speech, and then saying, "Look who's leaving — all the black people!" as she tried to correct her error while a racially mixed group of attendees continued their exit from the venue. When the footage went viral, and she was criticized for being both racist and wrong, she promptly blamed her comments on Satan: "The devil was in this house," she told local news station CBS 46, "and he came out from my mouth."
If anyone was convinced that Gordeuk had actually been momentarily overtaken by evil forces, her son swiftly ruined that theory. He defended his mom in a Facebook post, writing "y'all ni**ers aren't talking about shit so if u got something to say come see me face to face," and "my moma not racist one bit she's done nothing but help kids so y'all need to get stories straight." His easy use of a racial slur certainly did not do the job of convincing anyone that Gordeuk's not racist.
Between YouTube and social media, explicit expressions of racism are becoming increasingly harder to get away with. Think of the Oklahoma frat boys expelled when a cellphone video captured them chanting, "There will never be a ni**er at SAE ... you can hang him from a tree, but he'll never sign with me; there will never be a ni**er at SAE"; the congressional staffer who stepped down after news outlets published Facebook posts in which he likened his black neighbors to zoo animals; and the Ferguson, Missouri, municipal court officials who were fired after a search of their email revealed "jokes" based on dehumanizing stereotypes about African Americans.
There's no question that it can be satisfying to watch someone who's expressed bigotry be publicly humiliated in the same way that their words humiliated others — that's a huge part of why the recent commencement disaster has made national news. But here's what would be even more gratifying: if these people stopped feeling so confident that explicit racism was something they could get away with — or, even better, if they worked as frantically to rid themselves of their biases as they do at their futile efforts to excuse and explain them.
Hackers have put child abuse images on the memorial website of the Mauthausen concentration camp in Austria, 70 years after the Nazis' WW2 defeat.
8/5/2015- The website was quickly deactivated by the company managing it, a message on it reads. The Austrian interior ministry is investigating the attack. Interior Minister Johanna Mikl-Leitner called it a "criminal, sick attack and deeply abhorrent". The Nazis killed more than 100,000 people at Mauthausen in 1938-1945. The camp - one of the most notorious and biggest in the Third Reich - was liberated by US troops in May 1945. On Sunday there will be commemorations of the liberation at Mauthausen. The exact death toll is not known. Inmates, many of them Jews, were starved, tortured or gassed to death. Others died of exhaustion through hard labour. Many Soviet prisoners-of-war and Spanish Republicans were also among the victims. Many sub-camps were also set up near the main camp. Austrian officials suspect that far-right extremists may have carried out the attack, as such extremists have hacked several websites in recent months, Austria's Der Standard news website reports. The head of Austria's Mauthausen Committee - a group promoting human rights and democracy - condemned the hack. Willi Mernyi said it was "obnoxious and reveals the mindset of the hackers".
© BBC News
Cyber-bullying is a problem confronting countless young people.
8/5/2015- One parent knows this all too well. Her teenage daughter was the victim of cyber-bullies. What they did almost drove her 13-year-old daughter to commit suicide. The parent, a woman from Port St. Lucie whose first name is Jill, asked us not to use her last name. She told us her daughter was being bullied at school in person and on Instagram, where other students were posting photos of her with insulting, degrading comments. “I think it’s terrible. It’s basically a hate crime when you’re sitting there making fun of someone and belittling them,” Jill said. Her daughter goes to Oak Hammock K-8 School in Port St. Lucie. The Instagram account where the humiliating comments were posted was called “OakHammockHoes.”
When she realized what was going on, Jill went to the school, but she says they referred her to the police. About two weeks later, her daughter was so tired of the abusive comments on Instagram and so depressed that she tried to commit suicide by drinking bleach. She ended up in the emergency room. “What would you say to the parents of the kids who were doing this to your daughter?” we asked. “They need help. There’s something wrong with someone who feels like they have to make fun of someone else. It’s not right,” Jill explained.
She has this advice for other parents, about how to protect their children from cyber-bullying. “Get involved with what’s going on at school. Every day that your child comes home from school ask them questions. Get involved with your child. Get involved. You don’t know how bad it is until it happens,” Jill said. Her daughter has been released from the hospital and still goes to the same school, because there are only a few weeks left in the current school year. She plans to enroll her in a different school in the fall. Cyber-bullying is a crime. So far no one has been arrested.
© CBS 12 News
8/5/2015- Pong, which first appeared in 1972, revolutionized entertainment in a way not seen since television was first introduced. The game consisted of two players each controlling a rectangular panel that would move up and down the screen trying to deflect a ball from entering their goal. It was considered the ultimate test of skill against your friend, or whoever else was next in line at the arcade. Over the course of a few decades, the video game industry exploded with popularity. It went from utilizing a gigantic box in a crowded public room to operating on a slightly less gigantic box in the comfort of your living room. The rectangles evolved into plumbers, plumbers to gorillas, and somewhere along the line we found ourselves stealing cars and robbing banks in 1080p.
It’s been a long ride, and it doesn’t seem to be slowing down, especially with the rise of Virtual Reality in the form of Facebook’s newly acquired Oculus Rift headset. However, as nice as new technology is, it seems that the more things change, the more they stay the same. Perhaps the clearest example of this is the lack of variety in terms of gender and race. It’s an issue that has been ever present in the major video game blockbusters since video gaming began (save for an Italian plumber and his brother). Whether it’s The Last of Us, the Call of Duty franchise, the Far Cry series, or the Metal Gear Solid series, they all have one thing in common: a strong, white male lead role.
Now, that’s not to say there are no Hispanic or Black characters out there. Plenty of games include characters of various races, although not necessarily in the lead role. Most recently, Telltale Games’ The Walking Dead featured a lead character, Lee Everett, who wasn’t white, alongside a prominent female character, Clementine. Both characters have helped the series win a slew of awards, including a multitude of different “Game of the Year” awards. Following the overwhelming success of the first season of The Walking Dead, Telltale produced a second season starring an eight-year-old Black girl, who proved singlehandedly that there need be no standard for a leading role in movies, video games, or books.
Race aside, the video game industry has long produced and displayed its leading women, and supporting female characters for that matter, in an excessively sexual way, with everything from skimpy clothing to exaggerated breasts and buttocks being commonplace. Some major examples are Bayonetta, Street Fighter, Dead or Alive, Metal Gear Solid, and the well-known Tomb Raider series. Within the past few years alone, however, we have seen a change in the way race and sex come into play with various video game leads, such as Clementine from The Walking Dead, Sheva from Resident Evil 5, and Ellie from The Last of Us. It’s been a slow climb towards the concept of equality in this particular entertainment medium. However, if the last couple of blockbuster hits have indicated anything, it’s that you do not need to look a certain way to be the action hero.
© The Kaleidoscope, news site of Kishwaukee College
4/5/2015- In a new video, Jim Sterling highlights a game that is actually hate speech. Some people often claim a game is inappropriate or sends the wrong message, but a game called Kill The Faggot -- currently on Steam Greenlight -- has a pretty clear and concise message: LGBTQ+ individuals are different, weird and deserve to die. The description on Steam Greenlight doesn't mince any words on its intention to offend and appeal to the lowest common denominator within the community: "Hate gays ? Want to unleash your frustration with the "LGBT" community? Well now is your chance. Murder gays and transgenders, while avoiding killing straight people. Get as many points before time runs out!"
The game promises "mediocre 3D graphics, three levels of play, fully voiced lowbrow innuendos, and an amazing soundtrack."
Kill the Faggot is the creation of Skaldic Games, which, according to its website, is a game development company from the Los Angeles area. Skaldic says that its game is part of another game called " The Shelter: A Survival Story." It is likely that by the time you read this story or shortly thereafter this game will be pulled from Steam Greenlight. For the time being, it can be found here (we've also archived it here for posterity). There's no doubt that Kill the Faggot violates Steam's submission rules for Greenlight in that it uses incendiary language and imagery meant to incite. Whatever Valve decides, we will continue to follow this story as it develops.
© Game Politics
Students have come face to face with the potential perils of social media in a series of hard-hitting workshops designed to keep them safe.
4/5/2015- Cyber safety experts spent the day at Darlington School of Mathematics and Science (DSMS) highlighting the potentially negative physical, social and psychological consequences of using the internet. About 120 Year 8 students were introduced to Andrea Jennings and David Duckling, of Harbour Support Services, an organisation that works in Darlington providing outreach programmes for the victims and perpetrators of domestic violence. They also worked with Durham Police cohesion officer Chris O’Brien and town centre beat officer Alice Turner looking at the impact of hate crimes. Durham Police neighbourhood policing team officer Kathryn Davies and beat officer David Gibson delivered the third workshop addressing inappropriate use of social media including sexting and how easy it was to fall foul of the law.
Cyber safety is a particularly poignant issue in Darlington, following the death of teenager Ashleigh Hall, who was murdered in 2009 by a man she met online. DSMS assistant head teacher Emma Hickerson said: “As a resource the internet is as incredible as it is dangerous and it is vital our young people know how to use it appropriately. “They live in a cyber-world and the speed of technological development is breath-taking. We have to make sure they are fully equipped to maximise the incredible benefits of the internet but also stay safe from the many pitfalls.” Mr Duckling explained that domestic violence could be physical, emotional, financial and sexual. It affected men, women and children and Harbour was there to support victims and work with perpetrators.
Students heard that hate crimes often turned prejudice and discrimination into persecution, hatred and destruction when society should be celebrating diversity. PCSO Gibson stressed the importance of young people keeping their online profiles private. “Older people often befriend younger people on the internet to exert control over them,” he said. “Often in chat rooms people are not who they say they are and could be paedophiles so you need to be 100 per cent sure of who you are talking to. “The impact on young people’s lives can be huge and it also affects their families and friends.” He urged students to either click the CEOP button on their computer if they had concerns or approach an adult they could trust. He warned that inappropriate images, even when they were taken as a joke, were likely to break the law and anyone who sent them could find themselves charged with distributing indecent images. Students were also shown poignant videos covering a variety of cyber safety issues and hate crime scenarios.
© The Northern Echo
One day I awoke to a barrage of posts from strangers accusing me of racism for an article I didn’t write. Then I learned how to use social media to my advantage
By Josh Bornstein
5/5/2015- In the early hours of Friday 10 April, as I slept in Melbourne, American author Naomi Wolf was posting on Facebook to condemn me as “deranged”, “genocidal” and “psychotic”. Wolf and I have never met or communicated before. Regrettably, she was not alone. In the course of that night, I was on the receiving end of a battery of threatening emails from strangers, accusing me of base and hateful racism. My Twitter feed was filled with similar messages from all over the world. That Friday morning, I awoke to find myself caught in the middle of a social media storm. Hours before Wolf wrote on Facebook, and in another part of the world, the Times of Israel, a publication with which I was also not familiar, published an article on its website containing a graphically violent and racist diatribe against the Palestinian people and calling for their “extermination”. The despicable article was attributed to me and was accompanied by my photograph. It was quickly disseminated in the hothouse that is Middle East politics and spread throughout the globe.
The barrage of threats that followed the article’s publication came predominantly from Europe, the US and the Middle East. One threat emanating from Little Rock, Arkansas, excoriated me as a “worthless piece of shite” and advised that I “would be dead soon”. Prior to receiving this missive, the sum of my knowledge about Little Rock was almost entirely derived from the autobiographical details of novelist Richard Ford and the Clintons. Those closer to home who know me are aware that I have never written a racist article in my life. On the contrary, I deplore racism and have been very vocal in support of strong laws against racial vilification and race hate. I have also criticised the Israeli government’s conduct towards the Palestinian people, most recently during the 2014 Gaza conflict. I had become a victim of identity theft.
That morning, having well and truly woken up and then worked out what had happened to me, I posted on Twitter in forceful terms to explain that I was not the author of the racist bile. This was the cue for highly agitated editorial staff at the Times of Israel to make some urgent attempts to speak with me. They had already torn down the article and, conscious of our different time zones, were waiting for me to wake up. By the time we spoke, they had already prepared an article to explain the “hoax” that had been perpetrated on their publication and on me. They sought my permission to name me and to include a short statement from me. The published article also apologised both to readers of the Times of Israel and to me.
It transpired that some weeks earlier, a person or group using my identity had made an online application to blog on the Times of Israel website. The application was checked and appeared authentic. Whatever process was followed, I did not receive any contact from the Times of Israel to verify my identity and blogging application. In the following weeks, articles that I had written and had published in the Guardian and other media organisations were posted on the Times of Israel site. The articles addressed wealth inequality in Australia, the success of business lobbyists in shaping public policy, the inhumane treatment of asylum seekers and other matters of political economy. My articles are easily accessible on the websites of media organisations that publish them and are also displayed on my personal website. Although I periodically write for various media outlets, I am not a blogger. A lawyer and occasional writer, yes. A blogger, no.
The editors at the Times of Israel thought it curious that an “Australian blogger” was posting articles on its site dealing with domestic political issues in Australia. On the other hand, a senior editor there told me that her father was a Jewish “labour lawyer” in New York and I was part of a rich Jewish tradition. In the weeks during which my real writing was published on an Israeli media website, I remained blissfully unaware. Then, having established an apparently respectable identity as a blogger on the Times of Israel website, the perpetrator struck. The article opened with observations about Talmudic law before descending into a litany of repulsive race hate. The article was so rancid that some queried whether it was a failed attempt at satire.
Social media shaming can escalate and spread all over the world at eye-watering speed. In the maelstrom that engulfed me for a time, I felt like I was standing in an amphitheatre surrounded by a hostile and highly multicultural audience who were baying for my blood. And the crowd kept growing – minute by minute. Where, as in this case of identity theft, the shaming is entirely misconceived, there is an upside: it can be curtailed quickly, too. A number of people suspicious about the authenticity of the racist rant did some fact checking for themselves. Even before I emphatically communicated on Twitter that I had never written or blogged for the Times of Israel, they had advised the digital mob that a hoax had been perpetrated. My denials followed. The Times of Israel then published its article explaining the hoax and apologising. Other interventions occurred online and over time, the mob muted.
The storm was all but over within 36 hours. Unlike other victims of social media shaming, I did not lose my job. On the contrary, my work colleagues rallied around me. That said, there is another digital twist to this bizarre and disturbing experience. Before the offending article was torn down, an image of it was placed on another site. Despite vigorous attempts to have it removed from the internet, it still continues to be peddled online. As a result, I have received more threats. A genuine blogger, Daniel Sieradski, was prompted by my experience to do some online detective work about this episode. He discovered that a few weeks before the fake blog began to be published by the Times of Israel, a post appeared on a website foreshadowing what was to come: “Using a fake Jewish name, profile, and photo, I got myself a blog on The Times of Israel,” the post read. “These people believe I’m really a Jew.”
Sieradski’s work led me to a site that appears to have been created by a neo-Nazi group based in the US. In one of their posts, the group denigrates me as a “subversive Jewish parasite”, a “human rights activist”, “open borders advocate” and “staunch supporter of hate speech laws”. The same photograph of me that was published by the Times of Israel appears on this website; this time with a yellow Star of David emblazoned on my forehead. As unpleasant as it is to be targeted in this way, more than anything, the experience has profoundly reinforced the kindness of strangers. A human rights lawyer based in Sydney who saw Naomi Wolf’s Facebook post about me intervened and prompted Wolf to retract her condemnation. It was replaced with “Progressive Australian Jewish Lawyer Josh Bornstein is a victim of a hoax that called for genocide.” I would have preferred a full apology but once again, Wolf was not alone in not offering one.
Many other strangers, including Palestinian and other Arab activists for Palestinian statehood, acted quickly to defend me from further attacks. They told me of their concern for my welfare and their determination to disseminate the truth. One such activist for Palestinian rights sent me the following: “Josh, saw your situation and ensured to share the facts here in the UK among the various groups sharing that awful Times of Israel blog in your name.” Could I have done anything differently to avoid this attack? I suspect that like many others, significant aspects of my identity, including photographs, are there to be found on the internet. I am outspoken and I am Jewish. Am I going to change any of that? Not so long as my tuchus points south. While my public identity, writing and activism undoubtedly elevated the risk of identity theft, social media participation is a two-edged sword. As journalist Sarah Seltzer observed:
“But imagine if Bornstein hadn’t been active on Twitter, or easily findable online – and then imagine if the screed posted in his name had been just a tad more subtle and less obviously fishy. In such a case, the post might have stayed up for much longer and made a more lasting digital imprint under his name.” White supremacists don’t do nuance. For that too, I can be thankful.
A reply from Naomi Wolf
What happened to Josh Bornstein, who wrote a piece today about having had his identity stolen and used as the byline for a hate-filled diatribe, was awful. As soon as Bornstein’s piece was posted on social media, in fact on the same day, I asked on Facebook if the piece was a hoax, and asked for citizen journalism confirmation of the piece. As soon as the hoax was confirmed, I wrote that the piece was not authentic and I wrote about how awful it was that someone stole Bornstein’s identity. I noted that Bornstein spoke up against human rights abuses.
Bornstein never contacted me, but I would have been, and am now, very happy to offer an apology to him for my initial horrified response to a very racist piece. I would add this regret to the regrets I expressed at the time that a fellow writer’s voice was hijacked. I think that since I asked right away if this blog post was a hoax, and questioned the veracity of the op ed from the outset, I did what was journalistically ethical and appropriate. But what happened to Bornstein was terrible and I am truly sorry to have added, by my distress at the racist language in the blog post, to his understandable distress at the theft of his identity.
© The Guardian
4/5/2015- The number of anti-Semitic attacks in the form of verbal attacks, harassment and threats rose in the Czech Republic in 2014, mainly on the Internet, while physical attacks remained rare, the Prague Jewish Community says in its annual report, adding that most Czechs do not share anti-Semitic views. The Czech Republic ranks among the countries where anti-Semitism is not significantly present either in the majority society or among politicians, says the report the Jewish community released to CTK Monday. The number of physical attacks on Jewish targets, persons or property remains almost unchanged compared with the previous years, the report showed.
# In 2014, the Jewish community registered one physical attack on a person, the same number as in 2013.
# Five attacks on property were registered, which is two more than in 2013. Most of them were the defacing of graves at Jewish cemeteries.
# "The number of registered incidents such as verbal attacks, hate e-mails and threats addressed to people of Jewish origin has risen sharply, almost four times," the report says.
# The number of anti-Semitic attacks on the Internet rose by 20 percent against 2013. This trend has continued for four years now, according to the report.
# "In 2014, there was also an increase in the number of so called new anti-Semitism aimed against the State of Israel," the Jewish Community writes.
That is why manifestations of anti-Semitism were more intensive during the escalation of the conflict in the Middle East. Some groups view Czech Jews as envoys of Israel and blame them for Israel's political decisions, the report says. In 2014, the Jewish Community registered 28 cases of harassment and nine cases of threats, compared with six and three in the preceding year, respectively. The total number of these incidents was four times higher in 2014.
Like in 2013, anti-Semitism in 2014 was not a matter of right-wing extremists only, but it was also spread by leftist groups and individuals without any obvious links to extremist groups. "This may be because anti-Semitism targeting the State of Israel is a form of anti-Semitism that is more accepted in society," said Petra Koutska Schwarzova, from the Prague Jewish Community. Unlike abroad, anti-Semitism in the Czech Republic has not taken violent forms for the time being, the report says. Although the number of physical attacks is low in the country, a terrorist attack by radical individuals is the biggest threat to the local Jewish community, it says. "Examples of such attacks abroad show that security measures in the vicinity of sensitive places, such as Jewish buildings, must not be underestimated," Koutska Schwarzova said.
© The Prague Daily Monitor
Darren Fletcher, 25, of Wednesfield, was previously jailed for KKK YouTube video
1/5/2015- A lout who dressed up in a Ku Klux Klan outfit and pretended to hang a life-size golly doll has been jailed over racist rants on Facebook. Darren Fletcher was locked up for eight months for breaking an order banning him from making race-hate remarks online. He previously served a year in prison for stirring up racial hatred by posting the KKK video on YouTube. But the 25-year-old flouted the terms of his criminal anti-social behaviour order on his release by launching more racist tirades. He even posted that a newspaper needed “bombing” for its coverage of his original court case. Fletcher was sent back to prison for eight months at Wolverhampton Crown Court today. The forklift truck driver, who has Asperger’s syndrome, had earlier admitted breaching the terms of the CRASBO at the city’s magistrates court.
The sentence was welcomed by Det Chief Supt Sue Southern, head of the West Midlands Counter Terrorism Unit. She said: “Fletcher blatantly flouted the conditions the court imposed on him by posting racist and anti-Semitic comments. “We understand how offensive and distressing this type of behaviour can be and worked to bring him before the courts for a second time. “West Midlands Police takes all forms of extremism seriously and we urge anyone with any concerns to contact us on 101.” Fletcher, of Kitchen Lane, Wednesfield, is also known as Christopher Phillips and Darren Clifft. He set up a Facebook page using the name The Whitest Knight and used it to express sympathy with a fellow far right supporter who was posting anti-semitic tweets about a Jewish MP.
Comments on the page included accusing the British Government of doing more to help Jewish and black people than “its native whites”. Fletcher also declared that he “hated Britain with a passion” and made threats against Jews and black people. Nicholas Towers, defending, urged Judge John Warner to pass a non-custodial sentence. He said the postings breaching the order had been intended for an audience sympathetic to convicted extremist Garron Helm. “This wasn’t someone on the streets shouting abuse,” Mr Towers added. “It was supposed to be for a relatively limited audience.” But the judge said Fletcher had “deliberately, defiantly and flagrantly disobeyed the order”. He added that anything less than a custodial sentence would have been a “green light to carry on”.
© The Birmingham Mail
29/4/2015- Brno's Masaryk University (MU) has launched a training centre where experts can simulate serious cyber attacks and practise defending against them, MU representatives told media Wednesday. The so-called cybernetic polygon cost 22 million crowns. Its closed laboratory allows defence training without endangering the infrastructure outside. It can be used by scientists, students, company specialists and employees of the National Security Office and other state security bodies. The laboratory has unique software that can simulate any network and situation, including an attack on a nuclear power plant or the electric grid operational system, for example. "Very important is the [centre's] safe separation from real networks. If we want to train defence, we also have to create offensive means, which would cause a big problem if they penetrated the real network," said Vaclav Racansky, head of the security section at the MU's Institute of Computer Science. In the past, MU experts assisted in preparing the Czech law on cyber security, which obliges infrastructure operators to ensure its security. Hackers cause hundreds of billions of dollars worth of damage a year. The Czech Republic, too, has experienced extensive cyber attacks. In 2013, for example, a four-day attack targeted news servers, mobile operators and banks.
© The Prague Daily Monitor
By Jake Bennett, James Rund and George Dean.
26/4/2015- There have been a series of hate speech incidents over recent weeks in Tempe and Mesa, orchestrated by Neo-Nazi groups and hate preachers. Some of the incidents included anti-Black, anti-immigrant, anti-LGBT and anti-Muslim speech and intimidation. This behavior and these sentiments do not reflect the values of our community. We the undersigned have joined together to express our opposition to the presence and activities of hate groups in our community. We are deeply concerned about recent manifestations of hate. As community leaders, we have united to underscore our common value of working together to create a community of respect.
Our shared fundamental principles require us to speak out when we see hate around us. Ignoring the presence of hate speech, with its accompanying literature and social media does not make its vile message disappear. We will not sit idly by when hate raises its ugly head in our community. Not only are we united in denouncing hate, but we are united in supporting a community that is committed to the free exchange of ideas, the principles of inclusion and the celebration of diversity.
— Jake Bennett, ADL Arizona
— James Rund, ASU administration
—George Dean, Urban League
—And 22 other co-signers
© Arizona Central
A new survey suggests that Canadians who see hateful content on the internet tend to stay mum about the material.
24/4/2015- Two-thirds of the respondents to the Leger Marketing poll said they ignore hateful or racist online postings. Just 11% said they reply and react to the material and an equal number said they "tell the responsible authorities to remove it." Among those who ignored online hate speech, 15% said that responding is a "waste of time," would be "pointless" or they admitted that "I don't care." Another 17% said that reporting inflammatory internet comments would give undue attention to the remarks or would worsen the situation. Canadians gave other reasons including "can't prevent others from propagating hate / racism" (7%) and "don't want to be involved" (6%). The Association for Canadian Studies and the Canadian Race Relations Foundation commissioned the March 16-18 survey of 1,711 Canadians. The poll was provided exclusively to Postmedia.
A recent Leger poll indicated Canadians had a paradoxical attitude towards racism. The survey, taken in September, indicated the vast majority of Canadians don't believe they're racist but up to a third admit making racist remarks and to supporting racial stereotypes. Jack Jedwab, whose Association for Canadian Studies commissioned both polls, told Postmedia that people are becoming desensitized by the sheer volume of internet hate. "The strength of that message confronted with what ... people are seeing on social media, it creates a sort of indifference," said Jedwab. "We suffer as a society when there's an increasing feeling of indifference in the fight against racism." The anonymity of social media, and the lack of human moderators, can be a blessing and a curse, the researcher added. "I'm not knocking social media but we can see that this is a particular area where there's an abuse that we haven't found a way to address," he said. "A lot of the comments you're seeing on social media would never make it into print."
© The Toronto Sun
17/4/2015- The Latin American chapter of the World Jewish Congress launched a Spanish-language website to combat Holocaust denial. The website went online on April 16, Israel’s national day of commemoration for Holocaust victims. The site, www.seismillonesnuncamas.com, which means “six million never again,” targets Spanish-language websites, where Holocaust denial is increasing, according to the WJC site’s initiators. “We launched a website to show how the hate is spreading on the web,” Ariel Seidler, director of the Web Observatory, a watchdog group set up by several Latin American Jewish groups, told JTA. “We encourage people to report videos that spread hatred and encourage the addition of positive content.”
According to the Web Observatory, some 350 Spanish-language videos denying the Holocaust have received nearly 10 million views collectively. Some of the films are tagged with the word “holocuento,” a term used to lampoon the Holocaust or suggest it did not happen, similarly to “holohoax” in English or “shoananas” in French. In 2011, a civil court in Buenos Aires ordered Google to eliminate anti-Semitic search suggestions from its Argentine browsers and drop some 76 websites described in the complaint as “highly discriminatory,” including some that deny the Holocaust. Nazism and Holocaust denial are still alive, “and we can see this on the Internet,” said Claudio Epelman, executive director of the Latin American Jewish Congress, the regional branch of the World Jewish Congress.
© JTA News.
By Krystle Mitchell
10/4/2015- YouTube is a well-known platform amongst many people worldwide. Many people use the site for a variety of reasons, whether to build an audience, broadcast their talents, watch shows, learn something new, or listen to music. Lately, YouTube has had some users concerned that the company's creators are prejudiced. As hate remarks constantly appear on the site, the platform has not found a way to fully protect its users from verbal abuse. YouTube's creators have been questioned about racism because of their lack of promotion of non-white individuals and their guidelines on banning hate remarks.
YouTube is not responsible for promoting everyone. When a user joins the platform, he or she must have somewhat of a following already. The site is designed to promote channels and users that are worthy of a worldwide audience. Moderators also help those that should get noticed by placing them on the home screen, or sharing them on Twitter. However, only a few darker-skinned users are among those shared. It was not until February 2015 that Akilah Hughes, a Fusion contributor and YouTube user, took the initiative to question YouTube's creators about their lack of promotion of brown-skinned creators. Hughes conducted a study of the shares YouTube made on its Twitter account and homepage, and kept a record of how many of the people were non-white out of all the shares in the month. Her results suggested the platform's operators might have racial animosity against those that are not white.
Once Hughes’ information was completely gathered, she presented her findings to a spokesperson and asked the creators about their allegedly racist behavior. The spokesperson said YouTube is available to anyone around the world to upload videos, gain a following and profit from their content. Since it is very open, it has accumulated a large, diverse library reflecting a broad spectrum of cultures, beliefs, classes, sexualities, and races that are underrepresented elsewhere.
While Hughes is the first to conduct such a study and question the creators about their lack of promotion, she is not the first user to question them about racism. CNN interviewed a famous user who was harassed about her race by viewers. When CNN asked YouTube's creators about the harassment and why they have not created better protections, they said the guidelines are clear that hate comments are not allowed and should be reported. Users have the ability to block, delete, and refuse comments altogether. This suggests the site cannot go further than what it already does to protect users from obscene comments unless users take those steps themselves.
A researcher who focuses on social issues online stated that YouTube is a reflection of the culture people live in. The hate comments are visible proof that many people out there are very racist. Social media platforms are outlets for those people to convey their hate, because the only punishment they receive is being blocked by the person they dislike. Even after being blocked from commenting, such a person can still watch other videos by the YouTuber in question. Therefore, many users ignore the prejudiced comments and continue their craft, since there is no way to stop them altogether. YouTube's creators being questioned about their behavior and their handling of hate speech is only a stepping stone for what is to come. The guidelines must be updated to prevent further threats, along with better ways to promote those who are not being recognized.
© The Guardian Liberty Voice
Italy's far-right leader Matteo Salvini has been temporarily banned from Facebook for using the word “gypsies”, he claimed on Thursday.
9/4/2015- Salvini, leader of the Northern League, said his personal Facebook profile had been blocked for 24 hours after he wrote “gypsies” (“zingari”). Turning to Twitter, the politician said the move was “absurd!” A spokesperson for Facebook was not immediately available to confirm whether Salvini had been temporarily banned. Salvini came under criticism yesterday for stating that if given the chance he would “raze the Roma camps to the ground.” Speaking on International Roma Day, Salvini said around 40,000 ethnic Roma currently living in government-run camps should rent or buy homes. Members of the community, however, face barriers in applying for social housing, even though many people living in camps were born in Italy. Associazione 21 Luglio, a Roma rights group, has called for the camps to be closed and residents to be given equal access to housing. The association on Wednesday accused Salvini of courting voters and said the Northern League had in the past proposed maintaining the camp system. In a bid to tackle discrimination against the Roma community, Rome’s mayor last year banned the word “nomads” (“nomadi”) being used in city hall. Mayor Ignazio Marino said Roma, Sinti and Caminanti (travellers) were more accurate terms which could help promote integration.
© The Local - Italy
2/4/2015- Uber. WhatsApp. Twitter. Google. Snapchat. Instagram. Facebook. Many of the online services most popular among Europeans were created in the United States. The EU wants that to be different in the future. The next generation of software, needed to operate features of the so-called internet of things (the connectivity of physical objects) and to handle big data, should come from Europe, EU digital economy commissioner Guenther Oettinger said at a recent event on the future of the internet. “Europe’s industrial competitiveness will in the future depend to a large extent on the capacity to develop high quality software and using the most modern computing technologies”, Oettinger said in a speech at the Net Futures event in Brussels on 25 March. To that end, the EU has had a set of software tools created to make it easier for entrepreneurs to transform their ideas into a working application. The project is called Fiware – sometimes spelled Fi-ware – a contraction of the words 'future internet' and software. However, critics say the project, which is costing EU taxpayers €300 million, is superfluous because alternatives already exist.
Dutch entrepreneur Michel Visser is the founder of Konnektid, a website which allows its members to find neighbours willing to teach them something - like how to play guitar, speak another language, or how to knit. His company is now building an app version for mobile phones. The programme will need a system that can handle a large amount of requests. Visser has adopted a readily available system from the Fiware toolbox. “We don't have to develop it ourselves, so we win three months of development. Now we can get the app earlier out to the market”, he told this website. "We are a start-up, so we don't have a lot of money to spend." Fiware is kind of like a big box of Lego blocks, said Christian Ludtke, founder of a German company that supports start-ups. “It can be a web service for example, or a cloud service, or an interface for augmented reality”, said Ludtke.
The Fiware project is a public-private partnership between the EU and a consortium of companies that started in 2011. The software tools that entrepreneurs like Visser may use were developed by European telecommunication companies like Telefonica and Ericsson. The industry has said it is also investing €300 million in the project, which includes online tutorials on how to use Fiware, and local 'Fiware innovation hubs'. Fiware is royalty-free and open source, which means that it can be used free of charge, and developers may further develop it as well. Non-European companies can use the tools as well. “We don't mind if they are from Japan, from US, from China, from Latin America”, said Jesus Villasante, from the department of Net innovation in the European Commission.
“What we don't want is that there would be only one operator that would be able to capture value. For us the idea is that internet should be open, and therefore we should allow for open initiatives that would compete with some proprietary initiatives.” Proprietary software, as opposed to open source, can only be used if you have acquired a license. Examples include Microsoft Windows, Adobe Photoshop, and Mac OS X. "In Europe there is a strong potential for innovation, for start-ups, for entrepreneurs. We need to have this innovation capacity in an open environment, not in a closed environment”, noted Villasante.
Grants for start-ups, but only if they use Fiware
To promote the use of Fiware, the EU is investing €80 million in up to 1,000 start-ups. The money is being distributed to 16 so-called accelerators, organisations that help start-ups grow by providing funding and other support. Konnektid is one of the beneficiaries of such an accelerator, called European Pioneers, based in Berlin. One way the EU is trying to spread the use of Fiware is by making grant money - up to €150,000 per start-up - conditional on its use. “It's a kind of a trade-off. You need to find Fiware attractive and useful. If not, then you probably should be applying to a different accelerator”, said Ludtke, adding that the 12 start-ups under his guidance have so far not experienced it as a burden. Michel Visser hasn't either, although he is defiant about what would happen if he found a piece of non-Fiware software that would be better for his app. “It's business first. If it's stopping my business I would definitely say: listen, I tried it, this is what I experienced, this is my feedback, but I'm going to use something different.
That's what I would fight for. I'm a founder of a company and I need to run my business.” The EU commission's Villasante is much less strict than Ludtke - who oversees the handing out of money to some start-ups - on the use of Fiware as a precondition. Villasante said it was more important that the start-ups tried Fiware to see if it is useful to them. “We don't believe that all the 1,000 start-ups will develop applications that will be successful in the market. There may also be some SMEs that play with Fiware, develop the product, but decide: this is not for me, I prefer to use this other thing. That's fine.” Some recipients of the EU grants have told this website that they were more interested in the grant money than in Fiware. “There are plenty of alternatives to Fiware that are also open source,” said one entrepreneur who wished to remain anonymous. “The EU is pushing software that is not necessarily the best,” he added.
© The EUobserver
Facebook is tracking users, both on and offline, contravening EU privacy rules, according to a report.
1/4/2015- Compiled by researchers for the Belgian Privacy Commission, the report says the social media giant places cookies whenever someone visits a webpage belonging to the facebook.com domain, even if the visitor is not a Facebook user. A cookie is a small file placed onto a computer by a browser; it contains information that can be used to track and identify users. People without Facebook accounts are not spared. The 67-page report, first published in late February and then again with updated chapters on Tuesday (31 March), notes that “Facebook tracking is not limited to Facebook users.” Facebook places a so-called “datr” cookie, which contains a unique identifier, onto the browsers of people in Europe who have no Facebook account. The cookie does not expire for two years.
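The mechanics of such a long-lived identifier cookie can be sketched in a few lines of Python. This is a minimal illustration, not Facebook's actual implementation: the cookie name "datr" and the two-year lifetime come from the report, while the domain and everything else here are hypothetical.

```python
from http.cookies import SimpleCookie
import uuid

# Build the Set-Cookie header a server would send to a first-time
# visitor: a random unique identifier, scoped to the whole domain,
# that the browser will send back on every subsequent visit.
cookie = SimpleCookie()
cookie["datr"] = uuid.uuid4().hex            # unique identifier
cookie["datr"]["domain"] = ".example.com"    # hypothetical domain
cookie["datr"]["max-age"] = 2 * 365 * 24 * 3600  # roughly two years

header = cookie.output(header="Set-Cookie:")
print(header)
```

Because the identifier is tied to the browser rather than to an account, any page that loads content from the same domain (a "Like" button, for instance) lets the server link the visit to that identifier, whether or not the visitor has ever signed up.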
Facebook says it takes this commitment one step further. "When you use the EDAA opt out, we opt you out on all devices you use and you won’t see ads based on the websites and apps you use," said the spokesperson.
© The EUobserver
3/4/2015- A Facebook troll who targeted a disability rights campaigner has been reported to Police Scotland for alleged hate crimes. Tony Bain taunted Rachael Monk with a string of vile comments including saying: “Two Mongs don’t make a right.” But the abusive messages backfired after 32-year-old Rachael, from Dumfries and Galloway, struck back. She told Bain, from Glasgow: “I’m the lady in the video…your comments have actually made me laugh, they’re nothing new or original. “Did you know hate crime is a criminal offence? Good luck when the police come knocking! “Ignorance is a bigger disability.” Rachael, from Annan, has cerebral palsy and can only speak through a computer. She appeared in a video made by the “Now Hear Me” campaign, which promotes understanding for those suffering from speech impairment or loss as a result of disability.
In a series of Facebook messages, posted on an NHS page, Bain also commented: “Warning you will need an umbrella before watching this video.” The comments sparked outrage from campaign supporters, who called Bain “worse than disgusting” and a “trolling little scumbag”. Another asked Bain: “Have you actually looked in the mirror?” Bain, who revealed he was previously employed by Argos, was also asked: “Which Argos do you work at Tony? I’d love to pay you a visit.” Someone else wrote: “Send me your address privately then Tony Bain and we’ll see who likes hiding behind a keyboard. I’ll knock all them filthy yellow teeth out with a single bat.” Bain was also told: “You’re worse than disgusting, I’m not sure there is a word for people like you yet but there should be and it should come with a jail sentence. You are a disgrace to your family.”
A spokesman for NHS Education for Scotland said, “As soon as we became aware of wholly inappropriate posts on our Facebook page, we took action to block the individual. We have also reported the matter to the police.” A spokesman for the Now Hear Me campaign also confirmed that they had reported the incident to Police Scotland. Police Scotland was unable yesterday (Fri) to find a record of the NHS complaint. But the force added they “would assess and investigate as appropriate any complaint made to us concerning comments which could be considered criminal”. In the campaign video, Rachael says, “I am a bright person and intelligent, and I want my thoughts and feelings to be heard and not wasted. Everyone has the right to communicate.”
Rachael, speaking through her carer today said: “Although I was hurt by the comments made I wanted to respond in a positive way.” “My family and friends were extremely offended and deeply upset that someone could be so cruel about me and others with different abilities.” “It is because of people like him that makes it all the more important to raise awareness and educate.” She added: “I think Tony Bain should receive a warning, if only to make him think about his actions.” “I feel that the police should definitely be involved in such matters. It is important that everyone knows such behaviour will not be tolerated.” Argos confirmed that “Bain left the business last year”. Bain could not be contacted for comment.
© Deadline News
A Newport man has been fined after posting racist comments on Facebook.
30/3/2015- Jason Gwyer, aged 32, of Brown Close, was convicted of a racially aggravated public order offence after posting racist comments on Facebook in relation to the annual Ashura march which takes place in Newport. The march, organised by the Islamic Society for Wales, was held to commemorate the anniversary of the martyrdom of Imam Hussain, who was killed in Karbala, Iraq, more than 1,300 years ago. The details of the march were published in the Argus in November 2014, and Gwyer posted a photo of the article along with racist comments on his Facebook page on November 12, 2014. Gwyer posted: "Need this to go viral!!!! Muslims think they are going to have a nice little march thru my city on Sunday!!! think not!!! Need as much force as possable. We need to stand up and tell these vile pigs where to go!!! Who is with me??? Please share." He was found guilty at Newport Magistrates Court and fined £165. He also had to pay costs of £620.
He was also charged with producing a class B drug (cannabis) and possession of a class B drug (cannabis). He pleaded guilty to both offences. He received a 12 month community order, a £100 fine and the drugs were ordered to be destroyed. PC Ricky Thomas, investigating officer, said after the hearing: "Gwent Police will not tolerate any type of hate crime in our communities. We will investigate it and put evidence before the courts for the offender to be dealt with. "I hope this serves as a warning to people who think that by posting on social media sites that it is anonymous in some way - it isn't and it's still an offence. We would encourage anyone who has concerns about anything they see on social media to report it to us on 101."
© The South Wales Argus
While the number of racially motivated crimes is in decline, extremists are adopting increasingly sophisticated ways of spreading their message online, experts say, and the government is taking this into account as it adopts a new policy to combat extremism.
31/3/2015- It is hard to prove that extremist groups violate the law in such cases, and they have political ambitions, according to the Conception of Fight against Extremism, which the government adopted on March 18. “The development of criminality shows the trend that displays of racist discrimination and other forms of hate crime have recently been shifting from the street to the virtual arena,” reads the conception. The number of reported crimes related to extremism is decreasing: police reported 40 crimes of extremism in 2014, 64 in 2013, and 49 in 2012. Exactly a dozen extremist groups operate in Slovakia, including political parties such as People’s Party - Our Slovakia (ĽSNS) and sport and paramilitary groups such as Slovak Levies or Action Group Vzdor, according to the government document.
NGOs dealing with extremism approached by The Slovak Spectator say that mere formal punishment and prosecution of such people is not enough. More than anything else, the public and important political figures should condemn such behavior. “I don’t think that eye of Big Brother should be the main tool against spreading of hatred on internet,” Laco Oravec from the Milan Šimečka Foundation (NMŠ) told The Slovak Spectator.
The internet is a strategic place for extremist groups because they do not have sufficient space in mainstream media. Moreover, young people, who are more active on the internet, tend to spread those ideas, according to Tomáš Nociar, a political scientist focusing on extremism. “Of course, using the internet in this regard is nothing new; however, this phenomenon has become more relevant in recent years,” Nociar told The Slovak Spectator, “because the number of people using the internet every day is increasing.” The strategy proposes improving cooperation with internet providers to better track extremist statements and material spread via the internet.
The Bratislava Without Nazis initiative is currently researching statements published on Slovak websites and social networks, and dozens of them could violate the law, according to the group. The police, however, do not prosecute the authors of this extremist material, which raises questions about how the police work, according to Róbert Mihály, a member of the initiative. “Just open Facebook, there are hundreds of such cases there,” Mihály told The Slovak Spectator. Mihály was one of seven people detained by police during a March 14 march as he and others confronted a group commemorating the wartime Slovak state, an ally of Nazi Germany.
While some extremist rhetoric may violate local laws, it can also be difficult to prosecute. The power of the police is limited because a large amount of clearly extremist content, which could be grounds for prosecution, sits on US-based servers. Slovak legislation does not apply there, and Slovak authorities therefore struggle to deal with it, according to Nociar. The police refuse to describe their methods of fighting extremism for tactical reasons, police spokesman Michal Slivka told The Slovak Spectator.
Humor is better than jail sentences
Instead of repression, Nociar proposes that the public fight extremism with humour or rational arguments that make it less attractive. Slovak society, however, often does not recognise extremism as a problem, and this also affects the police when dealing with such cases, according to Oravec. “We lack clarity on the line between a crime, or at least a taboo, that is crossed,” Oravec said. The voice of political figures should also be stronger in fighting extremism. Journalists, analysts and NGOs participate in the public debate about this issue, but statements from politicians are missing or evasive, according to Grigorij Mesežnikov, president of the Institute for Public Affairs (IVO) think tank. “Have you recently noticed a government representative clearly describing his or her attitude towards extremism, and not only in the form of general statements?” Mesežnikov asked The Slovak Spectator. “To organise an ad hoc press conference after some unpleasant event and then consider one's participation in fighting extremism complete is insufficient.”
As part of prevention, the government should bolster efforts to educate people about extremism and point to the threat it represents via mass media and schools, according to the conception. The state underestimated the power of education right after Slovakia joined the EU in 2004 and does not sufficiently explain to the public how much organisations such as the EU or NATO have done for Slovak well-being. This is the reason some Slovaks are keen to believe hoaxes, according to Mesežnikov. “We should focus on high-quality education of young people so they will be able to critically perceive and sort information and respond to hatred on the internet,” Oravec said.
© The Slovak Spectator
30/3/2015- Less than a week after EU digital commissioner Andrus Ansip announced he wants to end geo-blocking, his fellow commissioner Gunther Oettinger indicated he was in no rush to abolish the practice of restricting online content based on someone's location. “We should not throw away the baby with the bath water”, Oettinger said in an interview with German newspaper Frankfurter Allgemeine Zeitung, published Monday (30 March). “I want to examine what an opening would mean for the film industry”, the German commissioner noted, adding that “we should protect our cultural diversity”. The interview comes after Ansip said in a press conference Wednesday (25 March) that he wants to end geo-blocking. “I hate geo-blocking”, noted Ansip, who is one of the commission's vice-presidents, in charge of the digital single market portfolio.
Oettinger, whose portfolio is called digital economy & society, made light of Ansip's remark, by saying: “I hate my alarm clock at five o'clock in the morning.” “I wouldn't read any contradictions in this”, commission spokesperson Mina Andreeva told this website Monday. She said that “Ansip and Oettinger worked very closely” to prepare a debate with all commissioners last Wednesday, and that the entire commission that day “agreed that geo-blocking would be tackled”. However, Andreeva noted that tackling geo-blocking was only agreed “on a general level”, and that the details need to be worked out now. The details are at the crux of the matter. Geo-blocking is a technical tool that can be used for both 'good and evil'. Sometimes companies use geo-blocking to abide by the law, for example when a gambling website uses it to make sure its services are unavailable in countries where online gambling is illegal. And Ansip has also acknowledged that such practices are acceptable.
However, geo-blocking is also used to redirect online shoppers to a local website which offers the same products at higher prices, which can be illegal under EU law. Another type of geo-blocking occurs when media companies prevent consumers from watching online content like films or tv series in a territory where the company has not acquired licenses. Here, the debate gets murkier. The commission has agreed to eliminate “unjustified geo-blocking”. But defining when geo-blocking is justified, and when it is not, will only begin now. The commission is due to publish its digital single market strategy on 6 May. Then it will hold a public consultation on geo-blocking. Some, like Pirate MEP Julia Reda, oppose “all kinds of artificial barriers on the web and all kinds of website blocking”. The German deputy argues for an introduction of the so-called 'country of origin principle' for online videos. “That would mean that companies have to obtain a copyright licence only in the country from which they operate, which has long been the case for TV broadcasting,” Reda told this website in an e-mailed statement.
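In practice, territorial geo-blocking of the kind described above usually amounts to mapping a visitor's IP address to a country and checking that country against a list of licensed territories. A minimal Python sketch follows; the prefix table, the licensed-country list, and the addresses are entirely hypothetical (real deployments use a GeoIP database rather than a hand-written mapping):

```python
# Hypothetical IP-prefix-to-country table; production systems query
# a GeoIP database (e.g. MaxMind) instead of a literal dictionary.
PREFIX_TO_COUNTRY = {
    "81.2.69": "GB",
    "92.103.4": "FR",
}

# Territories where our hypothetical media company holds licences.
LICENSED_COUNTRIES = {"GB"}

def country_of(ip: str) -> str:
    """Look up the country for an IPv4 address by its /24 prefix."""
    prefix = ".".join(ip.split(".")[:3])
    return PREFIX_TO_COUNTRY.get(prefix, "UNKNOWN")

def allow_stream(ip: str) -> bool:
    """Allow playback only from countries where a licence exists."""
    return country_of(ip) in LICENSED_COUNTRIES
```

A viewer at 81.2.69.142 would be served, while one at 92.103.4.10 would see a "not available in your country" notice. The 'country of origin principle' Reda argues for would, in effect, make a single licence obtained where the company operates sufficient, removing the need for a per-territory check like this one.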
It is not the first time the two commissioners differ in tone on the same topic. “We need strong net neutrality rules”, Ansip said Tuesday (24 March), referring to the principle that all data is treated equally by internet providers and intermediaries. “We need an open internet for consumers. No blocking or throttling”. A few weeks earlier, Oettinger called net neutrality a “Taliban-like issue”. “This is not the first time Oettinger has contradicted the Commission and undermined its stated consensus in his home country's media”, noted Reda, and referred to it as the "latest of his PR missteps".
© The EUobserver
1/4/2015- A Facebook page that attacked aboriginal people in Winnipeg and re-ignited the racism debate in the city has been pulled down. The page, called "Aboriginals Need to get a job and stop using our tax dollars," claimed support for Kelvin High School teacher Brad Badiuk who was suspended in January after making racist comments on his own Facebook page. The page was created in December — the same month Badiuk's posting was made. Before disappearing on Wednesday, the page had close to 5,000 members and was filled with negative comments about aboriginal people.
Robert Sinclair, an aboriginal man, who came across the page on Tuesday, called it a hate crime and hopes the people behind it are held accountable. "Knowing the fact that people [were] looking at and supporting it, it doesn't say a great deal of positive outlook for the way that Winnipeg is directing themselves," he said. Just before it was pulled down, the page started getting a lot of posts critical of it, with at least one person calling the administrators "racist a—holes." A new Facebook page called Protest against "Aboriginals Need to get a job and stop using our tax dollars" started in response and was applauding the removal of the racist page.
'Inspiring, important moment'
One aboriginal leader says he's not angered by the page, but rather inspired by the opportunities it presents. Niigaan Sinclair, who teaches indigenous literature, culture, history and politics at the University of Manitoba, said it used to be that no one talked about racism, that it was swept under the rug. Now, people talk about racism and relationships every day, and that is the only way to make things better. "I actually think this is a really inspiring important moment," he told CBC News on Wednesday, adding he wants people to talk about what it means to be a meaningful citizen in this city.
Police asked to investigate
Some are calling for police to hold those responsible for the Facebook page accountable. Tasha Spillett, an aboriginal activist and educator in Winnipeg, said the page refers to death camps for aboriginal people. She says it's hate speech and must be investigated. "Completely horrendous. Like to say something like that is just atrocious. But that's the beast, that's racism. Racism is hurtful. It's dangerous," she said. "[It's] another assault on us. Yesterday Facebook was not a safe place." But it also offers a chance to dig into the roots of racism.
Spillett says she shared screen grabs of the page with her friends on social media, and it hit a nerve, similar to what happened earlier this year when Macleans magazine called Winnipeg the most racist city in Canada. "You could really see the community response in Winnipeg, saying, 'Oh my goodness. This is not acceptable in our community,'" she said. "For Winnipeg to stand up and say, 'Hey, Facebook may have these community standards but these are not our community standards.'"
© CBC News
30/3/2015- OSCE Representative on Freedom of the Media Dunja Mijatović said today that the unilateral decisions by the Interior Ministry in France, without judicial oversight, to block five websites for allegedly causing or promoting terrorism represent a serious threat to free expression and free media. “Blocking websites without judicial oversight may endanger free expression and free media and creates a clear risk of censorship of online content by political bodies,” Mijatović said. The Representative urged the French authorities to reconsider the parts of the anti-terrorist law enabling website blocking, which was passed in November last year. “Legislation to fight terrorism should not curb free speech by introducing notions that are too vague or lead to the repression of free expression,” Mijatović said.
The Representative also noted with concern legislative debates in several OSCE participating States over provisions with a similar potential impact on the freedom of expression. These include new criminal provisions approved in Spain regarding access to or dissemination of extremist content, and certain anti-terrorist provisions in proposed Bill C-51 in Canada. Mijatović said her Office is monitoring developments regarding anti-terrorist proposals and their effect on free expression throughout the OSCE region. “I call on all OSCE participating States to exercise care and restraint when introducing anti-terrorist laws that could endanger freedom of expression and free media,” Mijatović said.
© OSCE Office of the Representative on Freedom of the Media
28/3/2015- Is nudity ever allowed on Facebook? It's a question that has got plenty of photo uploaders in trouble. In response, Monika Bickert, head of global policy management, and Chris Sonderby, deputy general counsel, wrote on the site's official blog, attempting to provide examples and "more detail on what is and is not allowed" and unveiled updated community standards. The guidelines cover everything from bullying and threatening behaviour to trading illegal goods, but the latest changes mainly concern nudity, hate speech and terrorist activity. Nudity has been a contentious area on social media recently.
The ongoing Free the Nipple campaign was launched in response to the fact that only male nipples are allowed on Instagram and in the new rules Facebook admits "our policies can sometimes be more blunt than we would like". As such, genitals and "fully exposed buttocks" aren't allowed, and campaigners won't be pleased to find that Facebook will still "restrict some images of female breasts if they include the nipple". However, they will "always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring". Plus pictures of paintings, sculptures and other arty nudes are allowed, so that trip to the Louvre doesn't have to go entirely unFacebooked.
Regarding hate speech - attacking people based on race, ethnicity, religion, sexuality, gender, or disability - Facebook says this is a particularly tricky area to police and that they regularly consult with governments, academics and experts on the topic. The standards make it clear that targeting individuals is never allowed, yet it's okay to share a post that contains hate speech, but only if your purpose is "raising awareness or educating others about that hate speech" and you make that intent clear. Recently, Twitter's founders were threatened by supporters of terrorist group Isis for blocking accounts that promote terrorist activities; now Facebook has beefed up its rules on 'Dangerous Organisations' which includes terrorist activities or organised crime.
"It's a challenge to maintain one set of standards that meets the needs of a diverse global community," Bickert and Sonderby admit - and there's no doubt this won't be the last community standards set they'll have to write. It's easy to imagine the hate and crime guidelines getting ever more stringent, while nudity rules could become more lax.
© The Belfast Telegraph
27/3/2015- Imams from around Europe have joined together to fight growing extremism in the name of Islam, calling on Muslims to fight it in the “digital space” to counter those attempting to tarnish their faith. Imams from Britain and Europe gathered on Thursday to condemn the atrocities being committed in the name of Islam. More than 120 imams participated in the event in London, held by ImamsOnline.com, to raise awareness in the Muslim community of the lies being spread by terrorist organisations such as ISIS and Boko Haram. The organisation aims to provide a medium for imams around the world to convey the true message of Islam and fight off the growing confusion and radicalisation of Islam at the hands of extremists.
“The magazine ‘Haqiqah’, which translates as ‘The Reality’, is aimed at young people and will counter the extremist narrative used by groups such as ISIS,” said the chief editor of ImamsOnline.com, Shaukat Warraich. “We’re turning the tide — though we still have a way to go, we know that by taking efforts to support and mobilize the huge online Muslim population we will eventually drown out the violent voices.” The summit was launched in response to the growing threat to young Muslims who either feel threatened or are being deceived by radicals into joining militant groups. The magazine ‘Haqiqah’ is being launched by the imams to unveil the true face of the radical groups and clear up confusion among Muslims in the community.
Muslims around Europe have been living in fear of growing anti-Islamic sentiment following the attacks on the satirical magazine Charlie Hebdo and on a synagogue in Copenhagen. Anti-Islamic rallies have taken place in Germany and other parts of Europe, with governments condemning them as neo-Nazi movements. “We are reclaiming the online space, but we need absolutely everyone to get involved in this effort. So this is a call to log on, get informed, and share the magazine with all your friends and family online,” said the senior editor of the ImamsOnline.com magazine. Still, there is hope in Europe, with inter-faith rallies and rings of peace being formed around places of worship. Thousands of participants from multiple faiths took part in a peace march in Brussels, including people from the Jewish, Christian and Muslim communities. The march called on members of all faiths to practise interfaith unity and promote peace in their communities and the wider world.
© Australian Muslim Times
Hugely popular black American celebrities are at the receiving end of criticism by black Twitter users using the hashtag #BlackCelebsBeLike.
Blog by Mike Wendling
23/3/2015- "Racism was an issue for hundreds of years. Then I got money. I think it's all good now." That's just one sarcastic tweet out of more than 13,000 in the past 24 hours shared under the tag #BlackCelebsBeLike. "We all the same ... There's no racism with the Internet," one tweet read. Another asked: "How can I complain about racism in Hollywood when we have a Black president?" Most of the tweets were meant to be heavily ironic and came from African-Americans, attacking the supposed stances of celebrities on everything from interracial relationships to income inequality. The prevailing sentiment was that many famous people turn their back on black causes - in other words, that they get rich and sell out. "Racism is over," one user tweeted. "All you need is money and a new tax bracket to be accepted."
One popular target was actor and rapper Common, who won an Oscar for his song "Glory" in the film Selma - a movie in which he also played civil rights leader James Bevel. Appearing on the Daily Show on US television earlier this month, he urged black Americans to "forget about the past" when it comes to race relations. BBC Trending tweeted to ask him for comment, but he didn't respond. Others referenced recent remarks by entertainer Raven-Symoné, who seemed to defend a remark comparing Michelle Obama to an ape on another American TV show, and actor Terrence Howard, who said he was relaxed about white people using the n-word. One of the first and most influential users of the tag was Zellie Imani, an activist and blogger behind the website Black Culture. "The goal really was to challenge the pedestal we sometimes put celebrities on and not to allow media to use them as spokespersons for [all] black people," he tells BBC Trending via email.
Imani mentioned the remarks by Common and Howard. "I was surprised that the hashtag took off so fast but I wasn't surprised at how many people felt about the issue," he says. "Celebrities experience racism and discrimination even when they are in denial of its existence. Denying racism doesn't make it exist any less."
© BBC Trending
Surging numbers of US far-right extremists are inspired to carry out "lone wolf" terror attacks after being radicalised online, despite the number of extremist groups declining, an expert said.
20/3/2015- Mark Potok of the Southern Poverty Law Center, which monitors extremist groups in the United States, said that though there were fewer far-right organisations now operating in the country, after a brief surge following Barack Obama's election as US president, the number of attacks by the far right had risen to levels not seen since the 1990s. "We think that very large numbers of people are essentially abandoning organised groups and moving into both the safety and anonymity but also the broadcasting power of the internet," he said. Of the racist attacks of previous decades, such as those plotted by white supremacist groups in the 1960s, Potok said: "An enormous amount of that violence was planned in smoky rooms filled with men. You would have entire groups, like the Mississippi White Knights, ordering murders or firebombings. It just doesn't happen that way anymore."
Recently, the US Department of Homeland Security warned that domestic extremists pose a greater terror threat than the jihadist group Islamic State (Isis), with 24 terror attacks by US extremists recorded since 2010. Earlier in March, for example, a convict with links to neo-Nazi groups allegedly killed one man and wounded five others in a shooting spree in Arizona.
Individuals carry out attacks or work in pairs
According to research carried out by the center, an overwhelming majority of these attacks are carried out by so-called "lone wolves": 74% of the incidents examined (totalling just over 60) involved an individual acting alone, and in 90% of all incidents only one or two people were involved, the second usually a family member or spouse. A recent study by the organisation identifies 63 actual or foiled attacks by domestic "lone wolves". Among them are attacks by a couple with radical, anti-government views who shot dead two police officers and a bystander in Las Vegas in June 2014, and a 2012 mass shooting at a Sikh temple in Wisconsin by a neo-Nazi. With law enforcement increasingly effective at breaking up violent conspiracies by far-right groups, and with the costs of being exposed as a member of an extremist group high, Potok said far-right radicals were increasingly taking to the internet to vent their resentment and propagate their ideology.
"When you are outed as a member of these groups, which happens quite frequently now, you may lose your wife, your kids, we have seen it happen to a lot of people," he said. "Instead you see extremists both in white supremacist forums, like Stormfront, but you also see people posting racist and anti-Semitic comments on blogs, and in the comments forums of newspaper sites, and that's spreading."
Far-right subculture produced by the internet
Potok argues the internet has produced a far-right subculture that is international in its scope and ambitions, with lone wolf extremists such as Anders Breivik regularly posting on US extremist forums, and US radicals in close contact with their European counterparts. "It shows clearly how international this movement has become," he said. "When you go on to Stormfront and read the posts from some of our own radical right, they are very much apprised of what is going on in Europe." He said far-right lone wolf attacks were "very, very" difficult to foil, because groups did not usually directly implore followers to carry out attacks on online forums. Instead, said Potok, extremists had their prejudices amplified and reinforced online, often inspiring them to take deadly action.
"It's a kind of justification. They don't feel like they're some lone nutcase with a mental problem; they are part of a very large 'white nationalist community', and there are people involved in Europe as well, so it looks like an international movement to save the white race," he said. In the wake of the recent murder of three US-born Muslim university students near the University of North Carolina, Potok warned that Muslims would increasingly become the target of far-right extremists. Though the incident has not yet been classed as a hate crime, Potok said: "What we are almost certainly coming into is a period in which anti-Muslim hatred and violence is going up." The Oklahoma City bombing in 1995, in which 168 people were killed, was the deadliest terror attack in the US before 9/11, and in response the US Department of Justice formed a committee to combat the threat of domestic extremists.
Though the committee was disbanded just before 9/11, US Attorney General Eric Holder recently announced that it would be revived to combat the terror threat from domestic radicals. Daryl Johnson, a security analyst, told the center: "We're long overdue for a much greater attack from the far right."
© The International Business Times - UK
16/3/2015- Facebook is clarifying some parts of its terms of service for using the social network, including what type of nudity and content is allowed. None of the rules have changed, but Facebook has added examples and is committed to offering more support for victims of hate crime. It also claims to be pushing back harder on government requests for takedowns or information. Nudity is still mostly blocked on Facebook, meaning anything from an overly exposed body to revenge porn will be swiftly removed. The social network is trying to maintain a family-friendly stance for its 1.3 billion users, most of whom do not want to see any form of nudity on the network.
Hate crime will be dealt with as Facebook has always dealt with it: through bans, removals and potentially police reports. Facebook has expressed a desire for everyone to use their real name, even if it is not their birth name. Facebook came under fire last year when the LGBT community was unhappy that it was forcing people to use their real names, pointing out that many people go by aliases and stage names. A few weeks later Facebook changed its policy on real names to include names given to people, even if they are not birth names. Facebook says it will fight government requests more vigorously, though takedowns may still be applied country by country: content deemed offensive in one country can be removed there while remaining visible in another country where the same content is not considered offensive.
Facebook has done a better job than Twitter and Reddit when it comes to abuse, but it is a much more closely connected network, mostly involving friends rather than random strangers on the internet.
© IT Pro Portal
Facebook has announced a new set of community guidelines that reaffirms its ban on homophobic and transphobic hate speech on the social network.
16/3/2015- The changes were announced this week, as the company unveiled new rules to clarify what is and isn’t welcome on the website. Users can now flag hate speech directly through the website’s reporting panel – which previously only had options for “harassment”. The website has come under fire in the past for uneven enforcement of regulations, on occasion removing images of same-sex couples while allowing listed hate groups to promote themselves on Facebook. The new guidelines state: “Facebook removes hate speech, which includes content that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, or gender identity, or serious disabilities or diseases. “Organizations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook. As with all of our standards, we rely on our community to report this content to us.”
Addressing pages such as PinkNews, which regularly draws attention to listed hate groups, it clarified: “People can use Facebook to challenge ideas, institutions, and practices. Such discussion can promote debate and greater understanding. “Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others about that hate speech. “When this is the case, we expect people to clearly indicate their purpose, which helps us better understand why they shared that content.” Monika Bickert, the company’s Head of Global Policy Management, said: “It’s a challenge to maintain one set of standards that meets the needs of a diverse global community.
“For one thing, people from different backgrounds may have different ideas about what’s appropriate to share — a video posted as a joke by one person might be upsetting to someone else, but it may not violate our standards. “This is particularly challenging for issues such as hate speech. “Hate speech has always been banned on Facebook, and in our new Community Standards, we explain our efforts to keep our community free from this kind of abusive language. “We understand that many countries have concerns about hate speech in their communities, so we regularly talk to governments, community members, academics and other experts from around the globe to ensure that we are in the best position possible to recognize and remove such speech from our community. “We know that our policies won’t perfectly address every piece of content, especially where we have limited context, but we evaluate reported content seriously and do our best to get it right.”
However, parts of the new regulations have attracted criticism from trans activists and the drag community, for reinforcing the use of ‘real names’ on the website. The guidelines state: “People connect on Facebook using their authentic identities. When people stand behind their opinions and actions with their authentic name and reputation, our community is more accountable. “If we discover that you have multiple personal profiles, we may ask you to close the additional profiles. We also remove any profiles that impersonate other people.”
© Pink News
The tweet followed Welbeck scoring against his former side Manchester United in the FA Cup.
16/3/2015- A 15-year-old boy has been arrested by police investigating an alleged racist tweet aimed at Arsenal striker Danny Welbeck. The teenager was held and questioned on suspicion of racial abuse following the posting of a message on Twitter after Welbeck scored the winner to knock his former club Manchester United out of the FA Cup on March 9. A Wiltshire Police spokesman said: "A 15-year-old male from the Salisbury area was arrested on the evening of March 12 on suspicion of racial abuse. "He has been released on police bail until April 13. The investigation is ongoing." Welbeck, who has been capped by England 32 times and scored 12 goals, joined Arsenal for £16 million last year following the arrival of new Manchester United boss Louis van Gaal.
© The Independent
France has blocked five websites accused of condoning terrorism, in the first use of new government powers introduced in February, the interior ministry said on Monday.
16/3/2015- One of the sites -- al-Hayat Media Center -- is accused of links to the Islamic State group, the ministry said. The site "islamic-news.info" has also been blocked since the end of last week. The banning order was given to Internet service providers, who had 24 hours to take "all necessary measures to block the listing of these addresses" under the new rules. They were introduced as part of a package of counter-terrorism measures approved by parliament in November. Critics argued they could breach citizens' rights by bypassing the need for a judge to make the banning orders. Other powers include the right to stop people travelling out of the country if they are suspected of trying to join jihadist groups. France's law allows authorities to block websites that call for or glamorize terrorism. This law against condoning crime and inciting terrorism was introduced in France after January's terror attacks left 17 dead. Interior Minister Bernard Cazeneuve visited the US last month, meeting with heads of major online companies like Facebook and Google in an attempt to prevent "the great area of freedom and growth" from becoming a "space of fanatic indoctrination."
© The Local - France
Police are monitoring a “disturbing” “neo-Nazi” website called RedWatch after images of anti-Pegida protestors were posted alongside a request for information.
13/3/2015- Newcastle MP Chi Onwurah is among the people pictured after she spoke at a rally against an anti-Islam demonstration in the city. The site - run by a far-right group not directly connected to Pegida - brands protestors “degenerates”, claims they were involved in violence and calls on people to provide personal data. It is believed RedWatch has links to the paramilitary group ‘Combat 18’ and many featured on the site fear their names and addresses could be shared with dangerous individuals. Chi confirmed she was reporting the matter to police, adding: “The reference to illegal activities appears defamatory as well as an incitement, and to call me degenerate and say I was making death threats – which is absolutely untrue – would also appear to incite people to take aggressive action.” She added: “I think it is disturbing and I have asked the police to keep me informed.”
Pegida marchers, who claim they are trying to defend countries from the spread of extremism at the hands of Muslim immigrants, were outnumbered in Newcastle by counter-demonstrators at a rate of more than five to one. People are calling on Northumbria Police to take action on RedWatch. Newcastle University student Gary Spedding, another anti-Pegida marcher whose picture features on the site, said: “I was shocked to discover the website known as RedWatch. “The police informed me that my image, and those of a number of others that I know personally, had recently been uploaded to this neo-Nazi site following Newcastle Unites’ highly praised and successful rally against Pegida in Newcastle last month. “The modus operandi of RedWatch, uploading images of anti-fascist individuals and groups in order to identify them and gather our personal details including telephone numbers and home addresses, is something I find to be sinister, creepy and potentially criminal.
“Publishing the image, personal details and contact number of individuals with implied intent to incite violence against them is possibly a breach of the Electronic Communications Act (2000). “RedWatch is a far-right platform with strong ties to a paramilitary group known as Combat 18 - the publishing of personal details on the website has previously resulted in actual violence towards people at their home addresses and even death threats to elected representatives, including members of parliament and their families. “I would urge those who may have had their images and personal details uploaded to the website to be vigilant and report the website - along with any out of the ordinary occurrences such as no caller ID phone calls - to the police as soon as possible.” A spokeswoman for Northumbria Police said: “We have been made aware of this website and are currently making inquiries into this matter.” We attempted to contact RedWatch for a comment but no-one was available.
The people who run the site use this introduction: “This is a site designed and intended for people who are involved in the struggle against the spread of Marxist lies in the UK.”
© The Chronicle Live
Robert Campbell was the ghost who haunted Roland Stieda.
12/3/2015- Only slightly aware of his existence -- "to be honest, I barely remember the man," Stieda told reporters Thursday -- Campbell's presence at the periphery of Stieda's consciousness took on a terrifying new dimension when Stieda learned it was Campbell who'd waged a 10-year campaign of online harassment against the aspiring city councillor. "That's been one of the most difficult things," he said. "The first thing people ask is 'well what did you do?' And I can't think of any run-in I had with Mr. Campbell. There's nothing there." Campbell has already pleaded guilty to defamation and harassment raps. Stieda was just one of about 40 victims of Campbell's poison pen campaigns, which Campbell launched while cowering in the anonymity of the Internet. Campbell would set up fake online profiles of his victims, making it appear as though they were working at strip joints or neo-Nazi organizations. And Stieda's colleagues received e-mails claiming the husband and father was a pervert and a lush -- wildly untrue -- and in a victim impact statement he gave to the court he said it made him paranoid of his colleagues. "To what lengths would this individual go?" he said. Another victim, who can't be named because of a publication ban, recalled getting a vitriolic letter when she was just a teenager. And when she changed her Facebook profile picture she almost immediately got a note from Campbell commenting on that fact. "This confirmed one of my fears," she told the court. "I was being watched." Sentencing arguments begin Friday.
© The Ottawa Sun
11/3/2015- Internet providers no longer have to keep their clients' phone, internet and email details because privacy is more important, a Dutch court ruled on Wednesday. Lawyers, journalists and three small telecoms firms went to court in a bid to get the legislation set aside. They argued that internet firms should not be keeping information about the communications of everyone in the country, whether or not they are suspected of a crime. Companies had been required to keep the information for a year since 2009. The EU found in 2014 that the mass storage of information is a serious breach of privacy and put its data retention legislation on hold. This put Dutch telecoms firms in a difficult position: they were required to keep the information under Dutch law even though it was not allowed under European law. ‘Dutch law conflicted with European law and that has now been put right,’ a lawyer for the complainants told broadcaster Nos. ‘The law infringes on the right to a private life and the right to the protection of personal details,’ the court said in a statement. ‘The court finds this infringement goes beyond what is strictly necessary.’ The justice ministry said it would study the ruling closely before making any comment.
© The Dutch News
11/3/2015- Eight out of 10 people in the Netherlands now have a mobile phone with internet access, compared with just one in 10 in 2005, the national statistics office CBS said on Wednesday. Nine out of 10 people use the internet daily and 77% of people now buy goods and services online, the new figures show. Some 63% watch television shows and listen to the radio online and 59% read newspapers using a digital device. Older people are also catching up. Some 75% of 65 to 75-year-olds use the internet on a daily basis. Ten years ago only four in ten people who had reached retirement age were active internet users. Older people are most likely to email, Skype with children and grandchildren and use Facebook, the CBS said. Laptops, tablets and smartphones are also gaining popularity over computers, the CBS said. Some 96% of Dutch homes have a fast internet connection, a figure which has been stable for the past four years.
© The Dutch News
9/3/2015- Three people who made racist comments about a selfie featuring Dutch football international Leroy Fer have been told they can avoid going to court by paying a €360 fine. The three, who come from The Hague, Rotterdam and Breda, made the comments about the selfie, which includes eight other black internationals and was placed by Fer on Instagram. The picture was later added to a Facebook football page where it attracted comments about apes, slaves and Zwarte Piet. Dutch captain Robin van Persie said he is pleased that those responsible will not get away with their actions. ‘We all represent the Dutch team and colour has nothing to do with it,’ he told news agency ANP. ‘It is not acceptable to the team or in society in general.’ Fer himself told ANP: ‘It is a clear signal to everyone that this sort of hurtful comment is not acceptable, neither on the pitch or off it.’
© The Dutch News
By Keegan Hankes, SPLC Research Analyst
10/3/2015- One section of the Web forum is dedicated to watching black men die, while another is called "CoonTown" and features users wondering if there are any states left that are "nigger free." One conversation focuses on the state of being "Negro Free," while another is about how best to bring attention to the assertion that black people are more prone to commit sexual assaults than whites. But these discussions aren't happening on Stormfront, which since its founding in 1995 by a former Alabama Klan leader has been the largest hate forum on the Web. They're taking place on Reddit, a huge online bulletin board. Reddit was recently spun off into its own independent entity from Advance Publications, the parent company of mass media giant Condé Nast, which also owns Vanity Fair, The New Yorker and 20 other print and online publications that reach an estimated 95 million consumers. (Advance Publications is still a majority shareholder in Reddit.) Reddit has been hailed as the last bastion of free speech on the Internet, an unregulated and vibrant community of users who post whatever they want and rely on the community around them to police their content.
The world of online hate, long dominated by website forums like Stormfront and its smaller neo-Nazi rival Vanguard News Network (VNN), has found a new — and wildly popular — home on the Internet. Reddit boasts the 9th highest Alexa Internet traffic ranking in the United States and the 36th worldwide. Many of Reddit's racist subreddits are among its most popular. Reddit is a news site that hosts user-submitted links and discussion, organized into specific communities of interest called "subreddits," whose content is ranked by votes from users. If a reader believes content is a constructive contribution, he or she can "upvote" it, pushing the content further up the page. Conversely, if a user thinks that content is either off-topic or not constructive, it can be "downvoted," causing it to sink further down the page. Content on Reddit is "moderated based on quality, not opinion," according to the working document that dictates community guidelines, called "Reddiquette." This idea of user-policed communities that contain high-quality, diverse content is part of the ethos Reddit has worked hard to project. "We power awesome communities," reads the graphic atop its "about" page. But awesome communities for whom?
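The upvote/downvote mechanism described above can be sketched in a few lines of Python. This is an illustrative model only: the `Post` class, the `rank` function and the simple net-score formula are assumptions made for the example, not Reddit's actual ranking algorithm, which also weighs factors such as a post's age.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Net score: each upvote adds one, each downvote subtracts one
        return self.upvotes - self.downvotes

def rank(posts: list[Post]) -> list[Post]:
    # Higher-scoring posts float toward the top of the page;
    # heavily downvoted content sinks out of sight
    return sorted(posts, key=lambda p: p.score, reverse=True)

posts = [Post("A", 10, 2), Post("B", 5, 0), Post("C", 3, 8)]
print([p.title for p in rank(posts)])  # ['A', 'B', 'C']
```

The same voting data drives both outcomes the article describes: constructive content gains visibility, while off-topic content is buried rather than deleted, which is why moderation here is community-driven rather than editorial.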
Along with countless others with entirely different interests, Reddit increasingly is providing a home for anti-black racists — and some of the most virulent and violent propaganda around. In November 2013, a hyper-racist subreddit called "GreatApes" was formed. Users posted epithet-strewn links to "news" stories of dubious origin that riffed on long established stereotypes about the black community. GreatApes was wildly popular and grew quickly, expanding into a much larger Reddit network called "the Chimpire," which was organized by a user known only by his or her posting name of "Jewish_NeoCon2." "We feel it's time to expand our sphere of influence and lebensraum [the Nazi term for "living space"] on reddit. Thus we have decided to create 'the Chimpire,' a network of nigger related subreddits," Jewish_NeoCon2 wrote at the time. "Want to read people's experiences with niggers? There now is an affiliated subreddit for it. Want to watch chimp nature documentaries? We got it. Nigger hate facts? IT'S THERE. … Oh yes you bet we got videos of ghetto niggers fighting each other. Nigger drama on reddit? There's a sub. Sheboons? Gibsmedat."
Within a year, the Chimpire network had grown to include 46 active subreddits spanning an alarming range of racist topics, including "Teenapers," "ApeWrangling," "Detoilet," and "Chicongo," along with subreddits for both "TrayvonMartin" and "ferguson," each of them dealing with the controversial and highly publicized shooting deaths of unarmed black teenagers. Then, last November, Reddit's most racist community evolved once again, adding the subreddit called CoonTown in the aftermath of a dispute between several top moderators at GreatApes. In just four days, CoonTown had reached 1,000 subscribers. And its popularity continues to grow. According to Reddit Metrics, as of Jan. 6, there were 552,829 subreddits. CoonTown, with its 3,287 subscribers, ranked 6,279th, placing it in the top 2% of subreddits. It is the 680th fastest-growing subreddit on the site despite — or because of — violently racist material including a large number of threads dedicated to videos of black-on-black violence.
These gruesome videos show black men being hit in the head repeatedly with a hammer, burned alive, and killed in a variety of other ways. The subreddit's banner features a cartoon of a black man hanging, complete with a Klansman in the background. One fairly typical user, "Bustatruggalo," applauded the graphic violence as "[v]ery educational and entertaining." He or she continued on a separate thread: "I almost feel bad for letting an image like this fill me with an overwhelming amount of joy. Almost…." Others, like user "natchil," were looking for still more. "Where is watchjewsdie?" this user wondered.
'Remember the Human'
There are some limits. "No calls for violence," the CoonTown subreddit's description reads. "It's prohibited by Reddit's site-wide rules." Everything up to violence, however, is very much there, including the horrific content found on other Chimpire subreddits like "WatchNiggersDie" — content which is rarely, if ever, matched on forums like Stormfront and VNN, which worry about being shut down or driving off potential allies. That's despite the Reddiquette section's first rule, which implores Reddit users to "Remember the human." "When you communicate online, all you see is a computer screen," it says. "When talking to someone you might want to ask yourself 'Would I say it to the person's face?' or 'Would I get jumped if I said this to a buddy?'" If Reddit's rules seem relaxed, that's because they are meant to be. Still, although users are asked to "remember the human," there is little humanity in the way the subjects of subreddits like CoonTown are treated.
In June 2013, however, after an extended, public controversy, Reddit did ban the subreddit "Niggers" when large numbers of its denizens began overrunning another subreddit, "BlackGirls," with racist posts that were apparently not being policed by its moderators. "Brigading" — when large groups of people from one subreddit gang up to downvote comments on another subreddit that they don't normally visit — is prohibited by Reddit. Users of the Niggers subreddit also engaged in "vote manipulation," which falsely raises the popularity of a post by soliciting like-minded users to blindly upvote it. After repeated warnings and "shadow-banning," or making a user's posts invisible to everyone but the author, the subreddit was finally banned. According to Jewish_NeoCon2, more than a few former members of the Niggers subreddit have now taken up residence at CoonTown.
A Reluctance to Intervene
Condé Nast, one of the largest mass media companies in the United States, acquired Reddit in 2006, although the Internet company still operates independently. The stability of such a well-established and respected media firm, as well as the backing of many high-profile investors, including a $50 million funding round this October joined by the rapper Snoop Lion, appears to guarantee Reddit's future. A request for comment on the Chimpire was directed to Patricia Rockenwagner, a member of Condé Nast's public relations department, but she referred it to Victoria Taylor, Reddit's head of communications. Taylor did not respond to requests for comment. Last year, however, Yishan Wong, Reddit's former CEO, took to the subreddit "TheoryofReddit" to explain that Reddit was in the red. He revealed that in exchange for its relative operational independence from Condé Nast, the site was responsible for its own bills. The site's goal, according to Wong, is to pay its own way, and its primary engines for accomplishing that are ads, a premium subscription option, and the Reddit gift exchange.
Racist websites and organizations do sometimes benefit from racist subreddits like the Chimpire. That's because subreddit users often post links to other racist sites, and those links drive traffic to those other sites, which in turn typically sell merchandise in addition to pushing racist ideology and recruiting. It's hard to dispute that Reddit does offer a venue for remarkably lively and unbridled conversation, and that dissident commentary that might not be tolerated elsewhere finds a welcome home there. Richard Spencer, a racist ideologue who heads the National Policy Institute, held an "AMA" (Ask Me Anything) session on Reddit last November, and although his views are widely regarded as loathsome, he was calm and understated in his discussion of far-right European politics. Unlike in WatchNiggersDie, there were no links to videos of brutal killings or other visual images meant to degrade the humanity of minorities.
Reddit is often hailed as one of the last bastions of truly free speech, and its owners' hesitance to jeopardize that status is understandable given the loyal following it has inspired. Reddit has removed content that was illegally appropriated from commercial interests, such as the revelations that emerged from the November hack of Sony Pictures Entertainment. The Internet is awash in racist, anti-Semitic, misogynistic and other hateful content, but much of it is relatively tame. Subreddits such as the Chimpire offer a window onto just how awful some of the darkest corners of the Web really are.
Update: This post has been updated to correct and clarify Reddit's relationship to Condé Nast and Advance Publications.
Keegan Hankes is a Research Analyst at the Southern Poverty Law Center's Intelligence Project. The article originally appeared in the Spring 2015 issue of the Intelligence Report.
Ellen Pao said she is suing her former venture capital firm for eight figures because only an amount that large would ‘hit their radar’
10/3/2015- The woman at the center of a landmark $16m Silicon Valley sex discrimination case said she was suing her former venture capital employer for such a large amount because only an eight-figure settlement would “hit their radar” and force change in the west coast technology scene’s “boys’ club”. Ellen Pao, who was fired by prestigious venture capital firm Kleiner Perkins Caufield & Byers after she complained about sexual discrimination, told jurors in a San Francisco court that she had “gone through every possible internal process I thought I could go through” to try to raise her concerns with the firm’s management. In one email to Kleiner Perkins’ partners, she asked them to “imagine your wife or daughter in my position”. “I wanted an even playing field for women at the firm,” she said. “I wanted to have an environment where people who complained about problems related to discrimination or to other issues would be heard and that the firm would do something about it.”
She said that after an internal Kleiner Perkins investigation found that she had not suffered discrimination, she was ignored by superiors and excluded from important meetings, dinners and corporate jet flights. “It was extremely difficult and very uncomfortable,” Pao, who is now interim chief executive of social news site Reddit, said. Among Pao’s claims was the allegation that she, and other women, were excluded from an important Kleiner Perkins dinner with former US vice-president Al Gore because they would “kill the buzz”. “It was said that if there were women there, the conversation would be tempered and it was because women kill the buzz,” Pao said on the stand on Monday. The organiser of the dinner, Chi-Hua Chien, denied saying women would “kill the buzz”, but conceded that the dinner was a male-only affair. At the time of the dinner, Pao lived in the same building as Gore.
Pao said she bumped into the chief executive of Flipboard, the social network aggregator, outside the building and had to tell him that she wouldn’t be able to go to the dinner. “It was pretty humiliating,” she said. “I had to explain to them that I wouldn’t be attending, and it was because I wasn’t a man.” Kleiner Perkins disputes her claims, and argues that she lacked the interpersonal skills to succeed in the company and that she is adequately compensated at Reddit. The trial, being heard at the superior court in San Francisco, has all of Silicon Valley gripped. Kleiner Perkins is among the most prestigious venture capital companies in technology and counts Amazon, Google and Uber among its investments. In earlier testimony Alan Exelrod, Pao’s attorney, argued that Kleiner Perkins systematically discriminated against women. In opening statements he said the company had existed for about 40 years when Pao was let go and had only promoted one woman from junior partner to senior partner in that time.
Exelrod contrasted Pao’s evaluations with those of male colleagues. She was described as having “poor interpersonal skills” and “her own agenda” while male colleagues, who were promoted, were evaluated as “quite tough”, “arrogant” and “blunt and overbearing”. The case is not about sexual harassment. However, Pao has said she was given a book of erotic poetry and nude sketches by a senior partner at the firm. She also claims another male employee interfered with her work after she ended an affair with him. Pao’s testimony follows that of John Doerr, billionaire senior partner at Kleiner Perkins, who testified that he had tried to save Pao’s career at the firm after criticism from colleagues. Pao worked as Doerr’s chief of staff when she joined the company. In a performance review filed with the court, Doerr said Pao had been dismissive of peers who did not meet her expectations and needed to improve her interpersonal skills. He otherwise praised her performance in her first year as his chief of staff.
In court Doerr said he had provided Pao with two coaches to improve her presentation skills, but the training failed to pay off. “Ellen is very talented,” he told the court. “I felt that she ought to have another shot.” According to Doerr, 20% of partners at Kleiner Perkins are women, a far higher percentage than at its peer firms, according to a recent study released last year by Babson College in Massachusetts. That study found that the proportion of female partners at venture capital firms has declined significantly since 1999, dropping to 6% from 10%.
© The Guardian
by Chrisella Herzog
8/3/2015- “I’m looking you up, and when I find you, I’m going to rape you and remove your head. You are going to die and I am the one who is going to kill you. I promise you this.” This message was sent in a series of tweets to journalist Amanda Hess, but the truth is women online receive messages similar to this every day. In nearly all cases, women are sent these messages simply for stating an opinion online that someone disagreed with. It is not a new phenomenon, certainly; groups such as women who play online video games or women of color on Twitter have long complained of gendered harassment for daring to even show up, which often escalates to threats of violence when they stand their ground. According to Hess, “Feminine usernames incur an average of 100 sexually explicit or threatening messages a day. Masculine usernames received 3.7.” But late last summer, it burst into the public consciousness in a more violent—and more organized—way.
In August 2014, independent game developer Zoe Quinn had already undergone about 18 months of harassment for her game, Depression Quest. Shortly after the release of her game on Steam in August, a bitter and jilted ex-boyfriend published a screed online claiming Quinn had an affair with a games journalist in order to get good reviews for her game, and the harassment exploded. Despite the allegations being false, Quinn and her family were viciously attacked—she was doxxed, her personal accounts hacked, her stolen nude photos circulated among her harassers, and she was called awful names in all corners of the internet. Organized campaigns were set up with the sole purpose of trying to get her to kill herself. After right-wing actor Adam Baldwin joined in the conversation—criticizing the media for trying to “enforce arbitrary ‘social justice’ rules upon gamers & the culture”—the harassment coalesced under the hashtag he shared: #GamerGate.
Although the movement was ostensibly a “consumer revolt” advocating for “ethics in games journalism”, the movement’s progression showed that what motivated ‘GamerGaters’ was not ethical concerns, but rather a reactionary conservative backlash against the increasingly high-profile encroachment of women into the perceived male spaces of gaming and technology. Video game critic Anita Sarkeesian was targeted after a new episode of her YouTube series criticizing the portrayal of women in video games was released in the midst of GamerGate’s formation. Beyond the doxxing and rape/death threats, Sarkeesian also received an anonymous threat of a mass shooting when she was scheduled to speak at Utah State University—a threat signed with the moniker ‘Marc Lépine’, a reference to the 1989 Montreal Massacre shooter who killed 14 women in the name of “fighting feminism”.
Brianna Wu, another indie game developer who has spoken out on the challenges of women in the games industry, became a target after she shared a meme on Twitter that was critical of GamerGate; she has spoken out multiple times about how traumatizing and exhausting the harassment of her family has been, as well as how damaging it has been to the industry as a whole. (She told VentureBeat recently how a woman she hired asked not to have her name used publicly anywhere.) Her harassment has also included numerous instances of transphobic slurs, despite the fact that she is not transgender. However, other transgender feminists have been targets for GamerGate’s harassment and violence to an extreme degree—campaigns were organized around trolling support boards to trigger widespread suicide in the transgender community, and several of the most outspoken trans GamerGate critics were “swatted”.
Swatting has become the most striking example of how violence on the internet has spilled over into real life. The “prank” involves making an anonymous call to police claiming a crisis situation, such as a bomb plot or a hostage crisis, at the target’s home address, which was usually revealed through doxxing. The intention is to get the police force to mobilize its SWAT team and break down the front door of the target. Although the use of swatting began before GamerGate’s inception, the movement has taken great glee in its use, openly hoping that someone dies from their “joke”.
Over the course of six months, GamerGate became the melting pot for the worst of the internet. Violent sexism, homophobia, and transphobia have been the most common and outspoken sentiments of GamerGate, but not far beneath the surface are virulent strains of anti-Semitism, racism, and neo-Nazism. As the movement progressed, the rhetoric began parroting concepts from white supremacy groups, men’s rights advocates, and a popular culture perception of the military. When the movement became too extreme, Reddit put restrictions on the movement’s “operations” and email campaigns, and even the notorious 4chan removed any discussion of GamerGate from its boards. Angered by the perceived attack on their “free speech”, the movement found refuge on 8chan, a discussion board site described by The Washington Post as “the more lawless, more libertarian, more ‘free’ follow-up to 4chan.” The site’s founder, Frederick Brennan—a man who wrote a pro-eugenics article for the neo-Nazi website, The Daily Stormer—made 8chan to combat what he perceived as a loss of free speech on the internet.
Things quickly went downhill, as 8chan became the place where bits of the dark web bled over: private information on anyone deemed to be an enemy is listed with a wink to the hostile and angry audience; threads discussing how to destroy lives scatter across the different themed boards; credit card accounts and Social Security Numbers are listed for sale next to doxxes for hire (for the measly sum of $5 to $200). And worst of all on the site: child pornography was freely shared until the site was called out for it and its domain temporarily revoked. What happened in GamerGate that brought out the very worst of the internet, and turned an ostensibly grass-roots consumer revolt into a hate group?
U.S. law has established that free speech is protected until it becomes dangerous or injurious (libel, slander, true threats), but it is very common for law enforcement to brush off threats as not serious. Harassment and online stalking laws largely date from an era before the internet; until 2013, the Violence Against Women Act only criminalized abusive, threatening, or harassing speech over the telephone. When Congress proposed an amendment to the law, the Electronic Frontier Foundation opposed the amendment, saying, “A person is free to disregard something said on Twitter in a way far different than a person who is held in constant fear of the persistent ringing of a telephone intruding in their home.”
It’s the same refrain: harassment online somehow is less than ‘real’, and restricting free speech in order to combat something seemingly so ‘harmless’ is poor policy. But like swatting, the violent and racist rhetoric does not stay online anymore. White supremacist groups have moved away from large organizations into smaller cells, which use the internet to recruit from boards full of angry, disenfranchised young men. In 2012, one such man went to a Sikh temple in Wisconsin with a gun and a head full of violent music glorifying the white race; he killed six worshippers. Anders Behring Breivik was also involved in white supremacist online groups, as were many other mass murderers over the past few years. Similar online tactics have been used by ISIS, which has garnered much attention for its strategic social media use and its ability to convince Western youth to run away and join their cause. Jeffrey Simon wrote about the use of internet communities to radicalize small cells or individuals and incite them to violence in his book, Lone Wolf Terrorism: Understanding the Growing Threat.
In GamerGate, the dehumanization of Quinn, Sarkeesian, and Wu (labeled as “Literally Who” 1, 2, and 3) and the violent threats against them have driven some to take action offline. Beyond the shooting threat against Sarkeesian, Quinn is still unable to return to her home, as strangers are still being reported in her neighborhood and yard; on January 31st, Wu shared on Twitter a video of a man who claimed he had crashed his car on his way to kill her, and that ‘The Commander’ (as he named himself) had been threatening her for weeks with no action from law enforcement. (The threat was later revealed to be a hoax when comedian Jan Rankowski admitted he had made up the character, proving the increasing relevance of Poe's Law on today's internet.)
Other women online continue to receive daily harassment as well. And the harassers know that the repercussions of their actions are likely to be minimal—the worst punishment most ever see is a suspended Twitter account, and even then they can simply create another one. GamerGate has at least brought attention to the amount of harassment and hate that flies around casually online. To the detriment of the industry, hate from reactionary gamers has re-emerged as a popular culture stereotype—it was even turned into a Law & Order SVU episode. However, there has been increasing media coverage of the lack of legal protections and police action for harassment victims, and the attention has reached federal levels, including Congress, which is reportedly considering stronger protections for cyber harassment victims. Currently, online spaces are involuntary battlefields. Women who raise their profile at all in virtual spaces are deemed to be encroachers, and therefore ‘fair game’ for any kind of treatment. Will the harassment end only when women’s voices are silenced?
Chrisella Herzog is the Editor-in-Chief of WhiteHat Magazine, and was previously Managing Editor of Diplomatic Courier.
© The Diplomatic Courier
These actions are ‘deplorable’ and cannot be tolerated.
4/3/2015- That is the stark message from the vice-chairman of Portsmouth City Council’s licensing committee in response to a stream of racist posts on the Facebook page in the name of taxi driver Viv Young. Mr Young, who represents hackney drivers and has a share in City Wide Taxis, came under fire after offensive messages published under his name were exposed, and the posts have been criticised by the city’s Muslim leaders. Tory councillor Ken Ellcome said: ‘It is deplorable that people are writing that sort of stuff on Facebook, whoever it is. ‘It is unacceptable and I await to see the outcome of the council’s investigation and see whether it will come before the council’s licensing committee.’
Cllr Donna Jones, Tory leader of Portsmouth City Council, said: ‘Any kind of racist behaviour that is made in conjunction with any interaction with us as a local body, in particular this case, is of huge concern to me. ‘It is unacceptable behaviour. ‘I welcome the investigation and I will be reviewing the conclusion of the investigation as a matter of urgency. ‘I take these allegations seriously – they will not be tolerated here in Portsmouth.’ But supporters have come to the defence of Mr Young and say he is not racist. John Coates, 59, a taxi driver for City Wide Taxis, said: ‘I’ve never known him to be racist.’ ‘I’ve known him for years and he has done a lot for the trade.’ A female taxi driver, who did not wish to be named, said: ‘I can’t see him saying that, he doesn’t come across as that sort of guy. ‘I wouldn’t have thought he’d say something like that, he always seemed pretty nice. I’d be very surprised if he has said it.’
© The Portsmouth News
3/3/2015- A Nazi-affiliated hate website has appealed for the personal information of members of the public, after publishing photos of people who took part in Saturday's ‘Newcastle Unites’ march. Redwatch carries the slogan “Remember places, traitors’ faces, they’ll all pay for their crimes” – a quote from Ian Stuart Donaldson, the frontman of white power rock band Skrewdriver before his death in 1993. Now the faces of dozens of people from Saturday’s counter demonstration against the anti-Islamist group Pegida UK have been posted online under the ‘North East Reds’ section of the site. Anyone can access the website as long as they agree to do so in the knowledge that it contains 'potentially controversial' material intended for reference purposes and not unlawful activity.
In the North East Reds section, the site says that any information on 'the freaks' photographed at the Newcastle march would be gratefully received, along with a statement detailing a desire to increase activity in the region. Redwatch gained nationwide notoriety in 2006, when Alec McFadden, a long-term union activist from Merseyside, was repeatedly stabbed in the face in his doorway – his picture and home address had been published on the site. The website, which displays affiliations with neo-Nazi organisations Combat 18 and Aryan Unity on its homepage, claims that it is simply reacting to left wing organisations who have published the personal details of white nationalists online, and that it does not encourage violence against political opponents.
However, Newcastle Councillor Dipu Ahad, who helped organise the Newcastle Unites march, disagrees. He received numerous threats over social media before the march, including one threatening him with beheading. He said: “It’s all about intimidation, whether it’s through threats of beheading on Twitter or being named on this site. “They’re trying to keep mouths shut and the police need to deal with this. “Anybody who spots themselves or anyone they know on that site should report it to the police immediately.”
© The Northern Echo
Internet trolls need to be punished more harshly and anti-social behaviour orders (ASBOs) aren't enough of a deterrent to prevent their behaviour, according to a criminal justice spokesman.
1/3/2015- Earlier this month, MPs backed a move for social media users who persistently spread racial hatred online to be given ‘internet ASBOs’, blocking them from sites such as Twitter and Facebook under proposals to tackle rising levels of anti-Semitism. David Stobie, from the criminal legal aid strategy division of the Ministry of Justice, spoke out on the issue of trolling and the damaging impact it can have on people’s lives. “Personally I don’t think [ASBOs] are enough of a deterrent,” he told MM. “Sometimes, I think the best thing would be for the police to turn up at the troll’s front door in front of their parents. “Other times it’s just about them being humiliated - outed as a troll. “ASBOs sometimes could be sufficient, but generally I'd say not - especially if some are wearing it as a badge of honour.” Stobie works in close contact with True Vision, a hate-crime reporting website based in the UK linked to local police forces such as Greater Manchester Police, where victims are also offered support.
He said that third-party systems helping to deal with trolls are expanding all the time and that the organisation had helped save the lives of many victims who had become suicidal over the past few years. “It certainly helps if victims know there is something they can do about it [trolling], so they feel a bit safer,” he said. “Part of the problem is about getting the message to them. “We could do all the work in the world to make society safer, but if the people that are under threat don’t know about it, then it’s pointless.” David believed that a proposal requiring personal identification to create social media accounts would be a good move, but that some trolls would still find a way to access other people’s profiles. He believed that handing out tough penalties was a good way of sending out a message, citing the well-documented case of former Bolton midfielder Fabrice Muamba.
Muamba was targeted with racial abuse on Twitter by Swansea University student Liam Stacey after collapsing on the pitch during an FA Cup match at Tottenham Hotspur three years ago; Stacey was sentenced to 56 days in jail. “I think when a high-profile troll gets caught out, it sends out a warning that this can happen to you,” David said. “When that happens, people realise you can be fined, have your name put in the papers, or even jailed. “These people need to know the difference between right and wrong and the damage that they do.” Although trolling has become a growing epidemic on the internet, David disagreed that social media did more harm than good and highlighted its positive aspects. “Social media is great," he said. “We couldn't do our jobs without being able to refer to Google, and Facebook is good for keeping in touch and makes a lot of people's lives better. “It’s just about getting rid of those that abuse it and there are far too many. "But there are so many people using it, how do you catch them all? It’s very difficult.”
© Mancunian Matters
27/2/2015- The Federal Communications Commission voted on Thursday to regulate broadband Internet service as a public utility, a milestone in the regulation of high-speed Internet service into American homes. Tom Wheeler, the commission chairman, said the FCC was using “all the tools in our toolbox to protect innovators and consumers” and preserve the Internet’s role as a “core of free expression and democratic principles.” The new rules, approved 3 to 2 along party lines, are intended to ensure that no content is blocked and that the Internet is not divided into pay-to-play fast lanes for Internet and media companies that can afford it and slow lanes for everyone else. Those prohibitions are hallmarks of the net neutrality concept.
Explaining the reason for the regulation, Wheeler, a Democrat, said Internet access was “too important to let broadband providers be the ones making the rules”. Mobile data service for smartphones and tablets, in addition to wired lines, is being placed under the new rules. The order also includes provisions to protect consumer privacy and to ensure that Internet service is available to people with disabilities and in remote areas. Before the vote, each of the five commissioners spoke and the Republicans delivered a scathing critique of the order as overly broad, vague and unnecessary. Ajit Pai, a Republican commissioner, said the rules were government meddling in a vibrant, competitive market and were likely to deter investment, undermine innovation and ultimately harm consumers. “The Internet is not broken,” Pai said. “There is no problem to solve.”
The impact of the new rules will hinge partly on details that are not yet known. The rules will not be published for at least a couple of days, and probably will not take effect for at least a couple of months. Lawsuits to challenge the commission’s order are widely expected. The FCC is taking this big regulatory step by reclassifying high-speed Internet service as a telecommunications service, instead of an information service, under Title II of the Telecommunications Act. The Title II classification comes from the phone company era, treating service as a public utility. But the new rules are an à-la-carte version of Title II, adopting some provisions and shunning others. The FCC will not get involved in pricing decisions or the engineering decisions companies make in managing their networks. Wheeler, who gave a forceful defence of the rules just ahead of the vote, said the tailored approach was anything but old-style utility regulation. “These are a 21st-century set of rules for a 21st-century industry,” he said.
© The Financial Express
The FCC's net neutrality rules, voted in yesterday, will probably be challenged in court by a broadband provider or trade group. But FCC Chairman Tom Wheeler is confident that the net neutrality rules will stand up to legal scrutiny.
27/2/2015- When the FCC voted Thursday to enforce strong "net neutrality" rules by reclassifying broadband providers as “common carriers,” the Commissioners knew that putting the rules in place is only half the battle. Chairman Tom Wheeler’s net neutrality plan will almost certainly be challenged in court by one of the major broadband providers, and the new rules must stand up to legal scrutiny. But Chairman Wheeler says that he’s confident net neutrality will survive a court challenge. This scenario has played out once before. In 2010, the FCC published its Open Internet Order, which imposed many of the same restrictions on broadband providers as the new net neutrality rules do: no blocking legal content, no throttling Internet traffic based on the source of the traffic, and no “fast lanes” for content providers who pay extra. Verizon sued to block those rules, and in January 2014 the D.C. Circuit Court struck down most of the order, saying that the FCC was trying to regulate broadband providers as though they were common carriers, even though it had never moved to classify them as such.
That ruling set the stage for yesterday’s vote, which moved broadband providers into the Title II “common carrier” category of the Telecommunications Act of 1996. Now the FCC has greater authority to regulate the commercial activities of those providers, though it has explicitly said it won’t meddle with Internet subscription prices or require providers to lease their networks to competitors. It’s all but certain that Verizon, AT&T, Time Warner Cable, or another broadband provider will sue to nullify the FCC’s new rules. Verizon and AT&T have already publicly expressed dissatisfaction with the net neutrality vote – Verizon even put out a statement in Morse Code to protest what it says is the application of an antiquated legal framework to modern data networks.
Comcast has said it will not sue the FCC, but executive vice president David L. Cohen said in a statement that “we all face inevitable litigation and years of regulatory uncertainty challenging [the] Order.” He added that Comcast has no quibble with the underlying principles of no blocking, no throttling, and no fast lanes, but that the company is uncomfortable with the FCC’s expanded authority. Michael Powell, a former FCC chairman and president of the National Cable & Telecommunications Association, a trade association that lobbies on behalf of broadband providers, said in a statement, “The FCC has ... pried open the door to heavy-handed government regulation in a space celebrated for its free enterprise.” He added that consumers “will likely wait longer for faster and more innovative networks since investment will slow in the face of bureaucratic oversight.”
The FCC is on pretty solid legal ground with its new rules – in fact, the Order specifically addresses the D.C. Circuit Court’s criticisms of the 2010 Open Internet rules. Still, a legal challenge is almost certain, and only a court ruling on that challenge will tell us whether the “common carrier” classification of broadband providers will stick.
© The Christian Science Monitor
27/2/2015- Advocates for open access to the Internet were popping champagne corks on Thursday after the Federal Communications Commission voted in favor of reclassifying broadband Internet as a public utility. In addition to regulating fixed broadband lines that go into your home, the FCC vote also extended public utility rules to mobile broadband for the first time. The FCC vote means that Internet service providers (ISPs) will be required by law to respect the principles of net neutrality. But what exactly does that mean, and why are so many people celebrating the FCC’s ruling while others are cursing it?
Here’s a quick explainer.
What does “broadband is now a public utility” mean? On Thursday, the FCC voted 3-2 down party lines to reclassify fixed broadband lines under Title II of the Telecommunications Act. This turns ISPs and mobile broadband providers into public utilities. As a result the companies will be more highly regulated than they were in the past. In terms of net neutrality, ISPs will be prevented from offering “paid prioritization,” or fast and slow lanes where customers and/or web services must pay the ISPs for better speeds for certain content. It also prevents ISPs from actively blocking access to online content or from punitively throttling (slowing down) certain kinds of traffic, such as torrents.
Were these regulations even necessary? Are ISPs doing this kind of thing?
There has been concern for some time that ISPs would experiment with slow and fast lanes, but the companies never actually tried it out. The ISPs and mobile broadband providers have, however, deliberately throttled traffic. Comcast began throttling users who were causing large amounts of traffic on the company’s network back in 2009. The throttling efforts were targeted against so-called bandwidth hogs, such as people downloading large amounts of torrents. Comcast was also hit with a class-action lawsuit (settled in 2010) over previous throttling efforts that began in 2007. Comcast was required to pay up to $16 million as part of the settlement. More recently we’ve seen reports of BitTorrent throttling rising in the U.S. Even mobile broadband providers have been getting into throttling. Last summer, Verizon publicly announced a throttling policy for heavy LTE data users even if you’re on an unlimited data plan. That policy was put on the back burner in October a few months after the FCC got involved.
Will ISP innovation die under Title II?
That’s what some ISPs and other opponents of the FCC’s ruling are saying. They argue that more regulation will mean less investment and innovation in broadband. From a consumer’s point of view, however, it’s hard to argue that American ISPs were innovating on Internet delivery and working hard to improve service under the previous “light touch approach.” For starters, a lot of major ISPs are despised by their customers due to poor customer relations. You can also find instances where major broadband providers refuse to increase broadband speeds or expand capacity. America has some of the slowest, yet priciest broadband in the world, according to an October report by the Open Technology Institute. Last May, Tier 1 global network provider Level 3 said five major U.S. ISPs were deliberately failing to increase their interconnect capacity with Level 3, resulting in near constant web traffic congestion. Level 3 did not name the companies, but did say that all five had dominant or exclusive market share where the congestion was happening.
Level 3 has also accused Verizon of failing to improve interconnections at data centers where Level 3 and Verizon networks meet—an improvement that Level 3 said would be very inexpensive. The result? Deliberately constrained capacity that causes poor Internet speeds at home, and degraded Netflix performance more specifically in this case. Arguably the most innovative ISP in the United States right now is Google Fiber, which delivers blazing fast 1Gbps speeds over the “last mile” to homes in select markets. Google Fiber has had the effect of increasing competitor speeds and service in areas where it operates—competition that arguably would not have happened otherwise.
Does Title II mean more competition in broadband?
Probably not. As part of Thursday’s vote, the commission explicitly ruled out forcing ISPs to share their networks with competitors. That means the dominance of a single ISP in a local market will not be broken up as a result of the FCC ruling. Some argue that, because of this, the ruling does little to change the problems with U.S. broadband service, such as the aforementioned high prices and slow speeds.
Now that broadband is considered a public utility is the fight over?
Heck, no! When Verizon is so incensed about an FCC ruling that it protests with a blog post written in Morse code, you know this issue is headed to the courts. Congress is also getting involved. Prior to the FCC vote on Thursday, there was already a movement lobbying Republican lawmakers in Congress to undo a potential FCC ruling in favor of reclassification. In the words of Han Solo, “it ain’t over yet.”
© PC World
France’s government is looking to adopt a tough new stance on online racism, anti-Semitism and other hate speech that would allow authorities to shut down offending websites amid a recent rise in hate crimes in the country.
25/2/2015- Justice Minister Christiane Taubira has said she will push for legal reforms that would help French authorities crack down on racism and anti-Semitism online in much the same way they do with paedophilia. The proposals include empowering French authorities to shut down websites hosting content that is deemed illicit without prior court approval. “Crimes recognised in public spaces must also be recognised as such on the Internet,” Taubira told a French Jewish student group on Sunday, echoing other recent statements on combating terrorism. “Our challenge is to find the most appropriate responses, but we are determined to wage an unmerciful battle against racism and anti-Semitism on the Internet.”
The declaration of war against online hate speech has raised questions about possible violations of civil liberties and the curtailing of due process as France struggles to find a way forward after a wave of deadly violence and anti-Semitic hate crimes in the country. An Islamist gunman in January targeted a kosher supermarket – killing four people and taking hostages – as part of a string of attacks that terrorised the French capital for three days and started with a bloodbath at the office of satirical weekly Charlie Hebdo that left 12 dead. France saw a sharp escalation in anti-Muslim acts immediately after the Paris killing spree, which was carried out by assailants claiming allegiance to al Qaeda in Yemen and the Islamic State jihadist group.
A French group that monitors Islamophobia said it recorded 199 anti-Muslim acts in January alone, more than those reported in all of 2014. Last week more than 250 tombs were vandalised by a group of teens at a Jewish cemetery in eastern France, sparking what appeared to be copycat acts in other non-Jewish cemeteries in Normandy and the Pyrenees in the following days. Amid the compounding tensions, and real fears over the radicalisation of young people via the Internet, Taubira and other authorities want the legal means to counter racism, anti-Semitism and Islamist extremism on the web. But blocking ubiquitous online hate speech could be a thorny task for officials.
‘Protecting’ civil liberties
Some people are applauding France’s aggressive approach. The Simon Wiesenthal Center, an international rights group researching the Holocaust and hate crimes, says it has observed a steady rise in racist and anti-Semitic speech online since it began studying the phenomenon 20 years ago. The increase has been exponential since the advent of social media platforms like Facebook and Twitter. “France’s efforts must be congratulated,” Shimon Samuels, who heads the center’s Europe office, told FRANCE 24. “If child pornography and paedophilia have no place on the Internet, if advertising for things like alcohol and tobacco are controlled because they are considered noxious to children, then what about hate?”
Samuels downplayed the dangers of curtailing free speech or privacy as a result of Taubira's proposed reforms. He pointed out that nowhere are free speech laws an unlimited privilege, and that we constantly forfeit our right to privacy to online advertisers without batting an eye. “I see this as a way of ultimately protecting civil liberties,” Samuels said. “Of course the measures need to work within the framework of the law, of course there has to be oversight so that they are not abused. A healthy debate is arising about freedoms, but that is part of democracy.”
But other experts are not as convinced about the wisdom of France’s more aggressive approach, nor about whether it will ultimately pay off. “Other countries have already adopted very restrictive measures, some really go to the limits of what is acceptable in terms of freedom of expression,” noted Bridget O’Loughlin, the coordinator of the Strasbourg-based No Hate Speech Movement, a campaign funded by the Council of Europe. O’Loughlin said what her campaign and others are finding is that, while pushing governments toward uncharted legal terrain, repressive measures are extremely difficult to implement because of the anonymity of web users and the borderless nature of cyberspace. “There are real limits on what legislation can do,” she said.
French officials are aware of their own limits. While championing tougher online hate speech legislation at home, they have also embarked on a campaign abroad to bring other governments into the fight. Harlem Désir, France’s state secretary for European affairs, urged world leaders gathered at the UN in late January to support the international regulation of social networks in order to crack down on racist and anti-Semitic propaganda. French Interior Minister Bernard Cazeneuve last week took a rare trip outside the country to Silicon Valley, where he reportedly urged the heads of Facebook, Apple, Twitter and Google to help his government identify and block online content defending acts of terrorism and hate speech.
It is unclear whether France will get what it wants from other countries and the Internet giants, with whom it has clashed in the past. In the meantime, it has launched an Internet site where citizens can report worrying content to police, and launched a multimedia campaign to expose the recruiting methods and myths used by jihadists. Samuels and O’Loughlin agree that more also needs to be done on the education front. Parents in both Jewish and Muslim communities need to be better informed about the kind of content children are encountering on the Internet, and be encouraged to have frank – even uncomfortable – discussions with them about what they see, said Samuels. O’Loughlin said people who have become blasé about the vitriol they encounter regularly on the web need to be woken from that stupor and given the tools to identify and report online hate speech. “Our methods of education and research focus on young people, between the ages of 13 and 30,” she said. “But what we keep hearing is that we need to be talking to kids who are even younger than that.”
© France 24.
France’s Union of Jewish Students (UEJF) held a conference Sunday on how to tackle online hate speech, while a poll published the same day revealed that an overwhelming majority of French people back measures to curb racist and anti-Semitic material easily found on social media networks and websites.
22/2/2015- “Nowadays, when you type ‘Shoah’ – Holocaust – into the YouTube search bar, the first videos you come across are Holocaust denial videos,” said Nicolas Woloszko, the treasurer for the Union of Jewish Students, which organised Sunday’s conference in Paris. This can pose particular problems, he continued, when young students turn to online resources to find out about the Holocaust and what they find is contrary to what they’re learning in school. “The most fundamental values of the French republic are challenged by alternative ideas, such as Holocaust denial, anti-Semitism or racism, and this is a very big problem for us,” he said. France has strong laws against hate speech, anti-Semitism and statements that glorify terrorism. However, Woloszko says these laws, which in his opinion do not infringe on freedom of expression, are easily forgotten when they are not applied to the online world.
In fact, most French people are in favour of blocking users and cracking down on the Internet’s inherent culture of anonymity, according to a survey commissioned by Opinionway for the conference and published Sunday in the Journal du Dimanche. The poll, which was conducted in February with a sample of over 1,000 people aged 18 and up, revealed that 92% of French people agree with blocking or removing links to websites that host material advocating terrorism. Additionally, 89% were in favour of Internet firms such as Google, Facebook and Twitter exercising broader control over content. Likewise, a large majority would like to see a system of fines for those who spread hate messages (83%), and a similar proportion believe that the ability to post content anonymously online promotes hateful comments.
“We are students. We use Twitter and Facebook every day of our lives, so we are very keen on preserving freedom of expression,” Woloszko said. “But freedom of expression is not the right to say everything. That’s not how it works.” Meanwhile, half of French people have faced racism online (51%), including anti-Muslim (49%), homophobic and xenophobic (45%) and anti-Semitic (43%) remarks. The UEJF also believes that cracking down on what can be said online would have a strong effect offline. “I think that our best way to fight against racism and anti-Semitism is to constantly repeat that it is forbidden,” Woloszko said. “Anti-Semitism and racism are not opinions, they are misdemeanours, and they must also be forbidden from the Internet.”
The Danish government introduced a new anti-terror package following the Copenhagen shootings, but internet privacy advocate Henrik Chulu argues that privacy restrictions will limit the free speech that politicians say they want to defend.
23/2/2015- In everyday language, free speech means that you have the ability to say whatever you want with the understanding that you can face legal punishment if what you say violates the rights of others: for example through libel, copyright infringement, etc. Different countries have different limits on the freedom of expression. I know of no place where it is absolute. Perjury, for example, is universally frowned upon. Free speech exists so that the state cannot silence its critics, who are free to scrutinize the powers that be and criticize them with the goal of keeping those who exercise power on the straight and narrow, thus assuring the rights of the rest of us. Another right that is equally important to the freedom of expression is the right to privacy. Like freedom of expression, the right to privacy exists to ensure a balance of power between the state and the individual. If there is a reasonable suspicion that you are engaging in something criminal, a court can give the police the authority to monitor your behaviour and communications.
In the old days, that meant that they could ransack your home, open your mail and tap your telephone. But today it means that they can set up nearly invisible microphones and cameras, listen in on all the phones you are likely to call, install spyware on your computer or phone, read your emails and monitor your internet traffic. Today, the police don’t even have to go to much effort to get access to your private life, as you have most likely voluntarily subjected yourself to constant surveillance from the likes of Google, Facebook and others. You live surrounded by sensors, and all of your electronic communication is automatically tapped and collected in huge databases that police can access with a court order. But for some politicians, who would hate to see a useful crisis go to waste, the already vast abilities of the police to do their jobs are not enough. The chorus is singing for more. After all, no one wants to appear “soft on terror”.
In the rush to defend free speech, politicians are further dismantling the right to privacy. The problem, however, is that free speech and privacy are not “values” that should be defended, but rather tools (or weapons, if you will) to ensure a balance of power between the individual and the state. And they are related. Without the strong protection of privacy, freedom of expression is nearly useless. Or to put it another way: a restriction on privacy is in itself a restriction on free speech. Privacy is the foundation, free speech is the structure. Statements don’t arise out of nowhere within an individual’s thoughts and then fly directly out into the public sphere to fascinate, infuriate or speak truth to power. They are absorbed and shaped in the peace of library halls and the corners of the internet; they are studied in private conversations, tested through back channels and often (although not always in politics) destroyed due to a lack of logic, evidence or relevance.
The freedom to speak and think in peace is what allows free expression to be used effectively as it is intended: to confront state power verbally so that it doesn’t trample upon those rights that we as citizens have fought so hard for over the past century and a half – e.g. women’s suffrage, the right to unemployment benefits, etc. The insistence of politicians on sacrificing citizens’ privacy upon the flames of free speech is a far bigger threat than any militant Islamist who finds a cartoon offensive. It is a stab in the back of free speech. I do not doubt that their basic intentions are good: they want to protect the public. But the road to hell is, as we all know, paved by naive fools with good intentions.
Henrik Chulu is the co-founder of Bitbureauet, an independent internet think-tank. This column was originally published in Danish at frikultur.dk and has been translated and republished under the Creative Commons Attribution license.
© The Local - Denmark