01/08/2018 / By Isabelle Z.
We live in a society that is obsessed with oversharing. While it’s becoming increasingly difficult to tune out the trivial bits of people’s lives that we don’t care about, we can still choose not to sign up for social media accounts to preserve our own privacy. We can also take comfort from the idea that our private thoughts will always remain our own – at least for the time being. You might want to enjoy that last bit of privacy while you still can because a creepy new AI looks set to change that very soon.
Japanese researchers have now developed an AI system that can peer into your mind with an uncanny degree of accuracy. It analyzes activity in your brain to determine the images you are looking at, or even just thinking about, and then reconstructs those images with startling fidelity.
The project is being carried out at Kyoto University under the leadership of Professor Yukiyasu Kamitani. The researchers create the images using a neural network and data from fMRI scans, which detect changes in blood flow that serve as a proxy for underlying neural activity. This data enabled their machine to reconstruct images such as red mailboxes, stained glass windows, and owls after volunteers stared at pictures of those items. It was also able, with varying degrees of accuracy, to create pictures of objects the participants were merely imagining, including goldfish, bowling balls, leopards, crosses, squares, and swans.
The researchers boasted that the breakthrough provides a “unique window into our internal world,” but this is one window that many people might not be willing to open. The same technique could be used to record mental imagery such as memories and daydreams, and it could even help patients who are in vegetative states communicate with loved ones.
The project uses neural networks similar to those underpinning many of the latest AI developments, such as Snapchat’s live filters for altering images, the facial recognition software used by Facebook, and the language translation services offered by Google.
The team trained their deep neural network on 50 natural images paired with the fMRI recordings taken while volunteers stared at those same images.
A second AI method, known as a deep generative network, was then used to judge whether the generated images looked like real ones and to refine them so they more closely resembled actual objects.
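Stripped to its essentials, this kind of reconstruction is an optimization: adjust a candidate image until its features, as computed by the trained network, match the features decoded from the brain scan. Here is a minimal toy sketch in Python, assuming a stand-in linear "feature extractor" in place of a real deep network and synthetic "decoded" features in place of actual fMRI data:

```python
import numpy as np

# Toy stand-in for a trained feature extractor: a fixed random linear map
# from a 64-"pixel" image to a 32-dimensional feature vector.
rng = np.random.default_rng(0)
W = rng.normal(size=(32, 64))

def features(img):
    return W @ img

# Pretend these features were decoded from fMRI activity while a
# volunteer viewed (or imagined) target_img.
target_img = rng.normal(size=64)
decoded_features = features(target_img)

# Reconstruction loop: gradient descent on the candidate image so that
# its features match the decoded ones. The step size is set from the
# spectral norm of W to guarantee convergence for this linear toy.
img = np.zeros(64)
lr = 1.0 / np.linalg.norm(W, ord=2) ** 2
for _ in range(500):
    err = features(img) - decoded_features
    img -= lr * (W.T @ err)  # gradient of 0.5 * ||W img - f||^2

loss = np.linalg.norm(features(img) - decoded_features)
```

In the actual study the feature extractor is a deep convolutional network, the target features are decoded from measured brain activity, and the deep generative network steers the optimization toward natural-looking images; none of that is captured by this toy beyond the basic match-the-features loop.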
Geraint Rees, a neuroimaging expert at University College London, told the media that he considered the study a huge advance in this type of technology, saying he was impressed that it could learn to read letters of the alphabet it had never encountered before.
Not everyone is thrilled about the potential of this technology, however. It takes invasion of privacy to entirely new levels, and it’s frightening to think of all the ways it could be abused. While it could help those in a coma or some type of vegetative state, and it might be fun to make art just by imagining something, there are so many ways it could be used for harm that it’s hard to view this development without serious reservations.