When Bolton Wanderers footballer Fabrice Muamba collapsed on the pitch at White Hart Lane on 17 March, a 21-year-old student mocked his plight on Twitter. Two weeks later he received a jail sentence of 56 days.
Elsewhere on the Internet, life continued as normal.
One week after the incident, a YouTube troll named DoktorLekterReturns took it upon himself to log in to the website, search for a video of Muamba collapsing, and post this comment below it:
"Why spend a cool hundred thousand of precious money on someone just arrived on the proverbial "banana boat"? Much better to lob him in the Thames and bring over another one."
Despite Google's trust in its users to self-moderate the site, the comment is still viewable. And DoktorLekterReturns was free to log in to the site on 5 April and, beneath a video of the 9/11 attacks, bemoan the fact that the "soldiers of Allah" did not film mobile phone footage of the "hapless people in the buildings". Beneath another video he described the 2004 tsunami as "ethnic cleansing". His account is still active.
On some videos of the Muamba collapse several comments had been removed. But on others, offensive remarks remained. In one, a user named babafreya2009 joked about the sound the life support machine would make if and when Muamba died. Enjoying the attention, they went on to make the same joke on several other clips. On another, a commenter used the video to claim "the white man and his white woman are superior".
And on yet another clip, a user named mattyp09ful joked after he was accused of racism: "SEND ME TO PRISON,,PROOVE IVE WROTE ANYTHIN ,C.P.S WOULD LAUGH AT IT."
But while YouTube remains unscathed, the police and the Crown Prosecution Service appear to have recently run out of patience. Several major cases of reported abuse online have been dealt with swiftly by the courts, including that of Liam Stacey, on the principle expressed by the Department for Culture, Media and Sport that what happens online should be equivalent to what happens offline.
Stacey was sent, sobbing, to jail after being found guilty under the Crime and Disorder Act. In another recent case, Newcastle University student Joshua Cryer, also 21, was convicted under section 127 of the Communications Act 2003 for abusing footballer Stan Collymore. He was sentenced to pay costs, work 240 hours without pay and serve a two-year community order.
Also in March, a teenager, Azhar Ahmed, appeared in court charged under the Communications Act after posting messages about six soldiers killed in Afghanistan which prosecutors claimed were "grossly offensive". He will stand trial in July.
Some have feared that the recent spate of cases, as well as others which emerged out of the English riots in 2011, often with far harsher sentences, represents an offensive campaign to turn back the tide on racist abuse - perhaps with worrying implications for future race relations. The CPS itself has spoken of the sentences as "warnings", and the ease with which the cases have led to convictions suggests more could be on the way.
'Can You Really Police The Entire Internet?'
When it comes to online abuse, however, the CPS and others have one major factor working against them: the law of big numbers.
According to YouTube more than 60 hours of footage is uploaded every 60 seconds, and more than half of those videos receive comments. Twitter receives more than 340m tweets a day. Experts suggest that the sheer scale of racist comments posted online makes anything other than isolated prosecutions impossible.
So what, in this campaign against online abuse, represents victory for the CPS, the police, the government - and for users of social media who just wish for a slightly less abusive atmosphere online?
David Banks, a media law consultant, said that the Stacey prosecution was "more intended to be a warning than a serious push to tackle the volume of material that's on the internet".
"The idea that you can police the whole internet effectively is a bit optimistic really," he told the Huffington Post UK. "I think that what they will probably do is go for things that come to their notice - as they do with any crime."
"The Liam Stacey case very rapidly came to people's attention in this country, he was tweeting from within the boundaries of the UK, and under his own name. He was easily tracked, arrested and put through the courts.
"If he'd said what he did in a pub and policeman had heard him he might well have found himself in trouble, but he ended up 56 days in prison because of the potential damage of saying that thing in a medium that can be spread further can do. The courts take these new media crimes very seriously."
For their part, most sites that host comments - social networks including YouTube, Facebook and Twitter, as well as newspapers, blogs and niche discussion sites like Reddit - leave comment moderation (mostly) to their users, in part because it reduces a publisher's liability if a user posts something that could be accused of being libellous. It also negates the tricky issue of reconciling various international jurisdictions and legal codes.
"If you don't pre-moderate you have a safety net of saying, 'we're a platform and we're okay as long as we take stuff down when it's complained about'," Banks says.
"So, I can't seriously see the CPS taking on the mountain of material that's on YouTube, especially when it's placed there by people outside the UK. It would fill their day completely, and they wouldn't have time for any other prosecutions."
'YouTube Involves A Certain Level Of Trust'
A CPS spokesperson did not comment on the potential volume of future cases, but said it takes all reports of racism "very seriously".
"No matter how it is communicated, if there is sufficient evidence to provide a realistic prospect of conviction an offence of racism, harassment or threats, we will mount a robust prosecution," the CPS said.
"In fact, far from being anonymous, when an offence is committed online the evidence is clear for all to see, including prosecutors and police."
Google, Twitter and Facebook, as well as smaller social media companies, all have published guidelines for discourse on their sites, with varying language.
"We're not asking for the kind of respect reserved for nuns, the elderly, and brain surgeons," reads YouTube's version. "We mean don't abuse the site. Every cool new community feature on YouTube involves a certain level of trust. We trust you to be responsible, and millions of users respect that trust, so please be one of them."
But given the abundance of questionable comments on its site, that trust doesn't appear to be working.
Some YouTube users appear to agree, and have written to Google asking for help:
"I understand first amendment rights," said one YouTube user on Google's own product page. "However, if you can regulate the video content of Youtube for violence, sex, and copyrighted material, how come you cannot remove vile and inappropriate comments from the posting section?"
'The Social Networks Have A Responsibility To Their Customers'
For many campaigners against racism and bullying, relying on users to police themselves isn't good enough.
Sherry Adhami, director of communications at Beat Bullying, a charity that works to stop harassment of young people, said that convictions for online abuse were encouraging - but that more had to be done by social networking companies to deal with the problem.
She said: "We as a society have a collective responsibility to make sure that we make clear things like this are unacceptable.
"It's not just about the users doing the work. The social network sites have a responsibility to their customers as well. If you walked into a shop and you were consistently harassed and abused you would expect the shopkeeper to help you. It's much bigger than that - the sites need to take a stance, as do internet service providers, and government."
Adhami added that legislation specifically dealing with cyberbullying and racism would be required.
"It is quite staggering [that the legislation doesn't already exist] when you think about how social networking and social media play a crucial part in our lives," she said.
"No one is anonymous online and you can be tracked. And if you're consistently abusing people in an anti-social way then that's why the internet service provider can track you down."
A Google spokesperson said that as a matter of policy the company would act against racism and other online abuse - but that in the main it relied on its users to report it:
"YouTube has policies against harassment and hate speech as outlined in our Community Guidelines," said a spokesperson.
"We have an easy-to-use Help & Safety tool that lets users contact us about comments. A staff of specialists monitors the reports from the tool 24/7. More information is available on our safety centre."
"We don't allow the promotion of hatred toward groups of people based on their race or ethnic origin, religion, disability, gender, age, veteran status, or sexual orientation/gender identity.
"If a user believes that someone is violating this policy, they can click 'Report this profile' within that person's profile. We'll review your report and take action if appropriate."
Other social communities take freedom of speech even more seriously. Until recently Reddit, a popular link-sharing and discussion site, even tolerated forums dedicated to users posting suggestive, if not technically illegal, images of children, on the grounds of freedom of speech - a move many of its most dedicated fans approved of. In February the site finally took action, under duress, to police its images more carefully. The site currently hosts a forum dedicated to highlighting racist posts which receive positive votes from its users.
Ultimately, as David Banks points out, the nature of the internet, and the trolls who patrol its darker corners, means that the only truly effective censorship is self-censorship. If you don't appreciate racism online, then report it when you see it - and if you don't want to be hauled before the courts, don't post racist abuse.
"Where people are trackable the CPS and the police will go after them," Banks said.
And calls for new legislation also perhaps miss the point, he says.
"Most of these things are covered under existing legislation. If you're making statements that are likely to incite racial hatred it doesn't matter if you shout them in the street or shout them on Twitter or your blog. If it is likely or aimed at inciting racial hatred you can expect a visit from the police."Suggest a correction