75% of all photos are now taken with some kind of phone. This is up from 40% in 2010, according to the New York Times. And it's a hell of a lot of photos - the number taken in a year is projected to grow to 1.3 trillion worldwide by 2017.
Photos of interest
The photos in our phone galleries tell the stories of our lives. While we may upload only our most impressive snaps to our social media channels, our phones often act as catalogues of bad poses, blurry scenery and photobombs. Yet that's the beauty of it. Our smartphones hold so many memories that they create a bigger picture of what we experience every day, not just the bits we want to share with everyone.
More often than not, we take photos only of things, places or people that interest us. For a brand, access to this means getting to know people's preferences: a goldmine for anyone wanting to know who to target, and how they want to be targeted. Even what a user doesn't photograph says something about them. It's all data, and it's incredibly useful.
Context is everything
This information doesn't say a lot on its own. Combining machine learning with common sense means this huge amount of data can be analysed effectively. We use data from focus groups and surveys, plus information we've already gathered from consumer images, to 'train' our machine: machine learning with the added boost of a helping human hand. It's a bit like teaching a baby to recognise certain objects.
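The 'training' idea above can be sketched in a few lines of Python. Everything here is a toy for illustration: the feature numbers and category labels are invented, and a real system would extract features from images with a vision model rather than hand-typed numbers.

```python
# Toy sketch of supervised "training": labelled examples teach the model
# what each category of photo looks like.

from math import dist

# Hypothetical two-number "features" per photo, e.g. (outdoor-ness, face-count)
training_data = [
    ((0.9, 0.1), "scenery"),
    ((0.8, 0.2), "scenery"),
    ((0.1, 0.9), "selfie"),
    ((0.2, 0.8), "selfie"),
]

def train(examples):
    """Average the features of each label -- the 'lesson' the machine learns."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def classify(model, features):
    """Pick the label whose learned average is closest to the new photo."""
    return min(model, key=lambda label: dist(model[label], features))

model = train(training_data)
print(classify(model, (0.85, 0.15)))  # a new outdoor-looking photo
```

The human hand comes in through the labels: people decide what each example photo shows, and the machine only generalises from those decisions.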
Brands will look for certain personas. The demographic for a cool new fashion line is unlikely to be those aged 60 plus, so the profiles of elderly users won't be of use. However, a recently-launched dentures brand would be interested in targeting this audience.
But targeting can go far beyond demographics. A fashion brand might want to direct its efforts to people interested in styles similar to its own, and images are a simple way of gauging someone's sartorial taste.
Sometimes we already have the right data on file for our technology to know what to look for, and sometimes we need to 'teach' our machine. To take the dentures example, we could explore photos of older people with gaps in their teeth. Our technology won't know that itself, but we can programme it to. It takes brand targeting to a completely new level.
As time goes on and we analyse more and more images, the technology becomes increasingly accurate, matching the right images to the right characteristics and personas.
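Checking that the machine is matching images correctly boils down to comparing its guesses against photos a human has already labelled. A minimal sketch, with invented tags:

```python
# Compare the machine's tags against hand-labelled photos to measure
# how often they agree. All tags below are invented for illustration.

def accuracy(predictions, true_labels):
    """Fraction of photos the machine tagged the same way a human did."""
    correct = sum(p == t for p, t in zip(predictions, true_labels))
    return correct / len(true_labels)

machine_tags = ["selfie", "scenery", "selfie", "pet"]
human_tags = ["selfie", "scenery", "pet", "pet"]
print(accuracy(machine_tags, human_tags))  # 3 of 4 agree -> 0.75
```

Tracking this figure as the image pool grows is how you would see the accuracy actually improving over time.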
As humans, we make split-second judgements when looking at a face - 30 milliseconds is all we need. We categorise people, from gamers to gym bunnies to football hooligans, based on what they wear and how they stand. Our natural prejudices rarely give us real insight into a personality.
In my view, the mathematical answer is always the right answer - the machine is more accurate than a human can be.
What do you think your photo gallery says about you? Which brands do you think would take an interest in you? Do you think it would paint an accurate picture of your life?
After reading this, try scrolling through your gallery to guess what a machine might deduce about you. You could see yourself in a completely different light...