Just four days after the tragic events in Paris, it's been good to see so many countries gathered at the #WePROTECT summit in Abu Dhabi with the joint mission of making the internet a more hostile place for sharing child sexual abuse material (CSAM).
At the summit, I've been able to share some good news about the phased implementation of the Internet Watch Foundation's (IWF) Hash List. Since June 2015, IWF have created just under 19,000 category 'A' CSAM hashes sourced from the UK Child Abuse Image Database (CAID).
But I'm happy to reveal in the Huffington Post that on the first complete day of IWF analysts hashing images from our own systems (IWF public and proactive reports), last Friday, November 13, 2015, we hashed 1,525 illegal images of children being abused.
Since the Hash List figures are totally new to us, we have nothing to compare them to. However, as a guide, the number of illegal CSAM URLs added daily to our URL List (which is a separate list) usually hovers around 300.
This could have huge implications for the work of the IWF, as the daily hash numbers will be added to the CAID-sourced figure (the 19,000 mentioned above).
The figures are impressive, but the child victims behind the figures are what is fundamental to our work. This development will be a real game-changer in the battle against child sexual abuse images online. In human terms, not only can we identify and remove victims' images more quickly, but we can also stop the 'hashed' images being uploaded to the internet in the first place.
This is a really important step forward, as it will help us prevent these images from being shared again and again. And we must never lose sight of the fact that the children in these hideous images are really being sexually abused. Their suffering can only be compounded by the knowledge that people are able to view images of their abuse, repeatedly. It is our mission to stop this.
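In very simplified terms, the upload-blocking idea works by comparing the hash of a newly uploaded file against a list of known hashes. The sketch below (in Python, using MD5 only; the names and structure are illustrative, not a description of IWF's or any company's actual systems) shows the principle:

```python
import hashlib

# Illustrative only: in practice this would be the IWF Hash List,
# distributed to industry members, and would also contain PhotoDNA
# perceptual hashes rather than MD5 digests alone.
known_hashes = set()

def should_block_upload(data: bytes) -> bool:
    """Return True if the uploaded file's MD5 digest matches a known hash."""
    return hashlib.md5(data).hexdigest() in known_hashes
```

A plain MD5 match only catches byte-identical copies of an image; PhotoDNA, the other hash type mentioned below, is designed to also match visually similar copies.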
1. IWF will automatically begin creating two types of hashes to meet the needs of the online industry: PhotoDNA hashes (using technology developed by Microsoft) and MD5 hashes.
2. The hashes created during the implementation stage were sourced from images forensically captured on the Home Office Child Abuse Image Database (CAID). In the future, hashes will also be created from images that our highly-trained analysts have assessed, sourced from IWF public reports, online industry reports and proactive searches for criminal content.