Instagram has fallen foul of its own algorithm after it used an image that was clearly inappropriate as an advert.
Guardian journalist Olivia Solon discovered the incident when her sister revealed that she had been shown the image as an advertisement encouraging her to sign up to Instagram.
Unfortunately, the image in question was a screenshot of an email Solon had received containing a violent threat.
The advert, shown on Facebook, appears to have relied on an algorithm to pick the most engaging content being posted by friends and relatives and then shared it with Solon’s sister as an incentive to join the service.
In this case “engaging” likely refers to the likes and comments the post received, but the system appears to have completely overlooked the content of the image itself.
In a statement to the Guardian an Instagram spokesperson said: “We are sorry this happened – it’s not the experience we want someone to have, this notification post was surfaced as part of an effort to encourage engagement on Instagram. Posts are generally received by a small percentage of a person’s Facebook friends.”
Facebook is certainly no stranger to algorithmic problems. Back in August of last year the company made a complete transition to using algorithms to pick its trending news topics.
Unfortunately, within its first week the algorithm promoted a fake story, which had to be quickly taken down.
The question that remains in this instance is whether Instagram’s algorithms are actually able to ‘read’ the images that we post, and in particular whether the system could interpret the words contained within the image.