Every time you upload a photo to Facebook, the site uses machine learning to identify what’s shown in it.
It often attempts very specific identification of people, working out which of your friends appears in the photo, for example.
But it also makes an attempt to describe what’s happening in the photo, such as: this photo shows people standing, and there’s sky, a mountain and trees.
Now users can find out exactly what Facebook has learned about each photo using a plugin called “Show Facebook Computer Vision Tags”.
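Facebook surfaces these labels as automatically generated alternative text on each photo, in a form along the lines of “Image may contain: sky, mountain and tree”, and a plugin like Geitgey’s can read that text and pull out the individual keywords. As a rough illustration (the exact prefix and separators here are assumptions, not Facebook’s documented format), a hypothetical `extractTags` helper might look like this:

```typescript
// Sketch of how a browser extension might surface Facebook's
// machine-generated photo tags. Assumes the labels appear in each
// image's alt text in a form like "Image may contain: sky, mountain
// and tree" -- the prefix and separators are illustrative guesses.
function extractTags(altText: string): string[] {
  const prefix = "Image may contain:";
  if (!altText.startsWith(prefix)) return [];
  return altText
    .slice(prefix.length)
    .split(/,| and /) // labels assumed comma- or "and"-separated
    .map((tag) => tag.trim())
    .filter((tag) => tag.length > 0);
}
```

A real extension would run code like this as a content script over each image on the page and overlay the resulting keyword list on the photo.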
The tool was developed by software engineer Adam Geitgey, who has mixed feelings about Facebook’s technology.
“On one hand, this is really great,” Geitgey wrote on the plugin’s Chrome page. “It improves accessibility for blind users who depend on screen readers which are only capable of processing text.”
“But I think a lot of internet users don’t realize the amount of information that is now routinely extracted from photographs,” Geitgey added.
He hopes the tool will show users exactly how much information can be drawn from their images.
At the moment, Facebook’s computer vision is a bit hit-and-miss, but Geitgey expects it to grow steadily more accurate.
Because the tool is based on machine learning, it gets better at identifying objects the more images it sees.
“A year or two from now they could detect thousands of different things,” Geitgey told NYMag.com. “My testing with this shows they’re well beyond 100 keywords already.”
Facebook is far from alone in developing this sort of machine learning. Many of the tech giants, including Google and Apple, are also building computer vision systems.
But Geitgey said Google and Facebook are at a distinct advantage, thanks to the sheer number of images they can analyse.