Google Glass mind control isn't quite as futuristic as it sounds. Here's the science behind the headlines
Your brain is buzzing with electrical signals, so it seems reasonable, if somewhat creepy, that you should be able to control technology using your mind alone. It sounds like pure science fiction, but the technology is very real, if somewhat hit and miss. Earlier this week the internet was ablaze with sensationalist headlines claiming that a team of developers had hacked Google Glass to allow people to control it using their minds. Here’s the real story.
The technology in question is known as electroencephalography or EEG, a non-invasive way of recording electrical activity in the brain without needing to drill through your skull. One or more small sensors are fixed to your scalp and these can then be used to detect brain activity. These signals coming from the brain can then be decoded and used to control electronic devices.
Accurately detecting brain activity is hard enough; training people to use their brains to control devices is even tougher. Early experiments with the technology were marred by difficulties as people struggled to train their brains to behave in the correct way. To this day the technology is hampered by its limitations – it can’t understand specifically what the brain is up to and it is very susceptible to outside noise, which gets in the way of signals from the brain.
Technology designed for everyday use tends to be simple but gimmicky. In the past we’ve seen mind control software that blew up virtual reality barrels when a user concentrated, and a typing exercise that flashed letters of the alphabet to try to recognise which letter the user wanted to type. Both were intriguing but ultimately pointless.
Enter Google Glass. A new app for the high-tech specs boasts that people can control them using nothing more than the power of thought. The claims are remarkable but hugely misleading. What the technology is actually capable of detecting is increased brain activity when someone is concentrating. This allows for two controls – ‘on’ when the user is thinking and ‘off’ when they are not.
The technology, known as MindRDR, uses a biosensor developed by a company called Neurosky. The sensor is a small piece of plastic with a blob on the end that rests on the forehead. Once connected to Google Glass via Bluetooth, the data from the sensor is run through an app that lets the user control Glass with their brain.
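The control loop behind this kind of app can be sketched very simply. The code below is illustrative, not MindRDR’s actual source: it assumes the headset streams a 0–100 “attention” score per reading (as NeuroSky-style sensors do), and the threshold value and function names are invented for the example.

```python
# A minimal sketch of the app's control loop, assuming the sensor
# delivers a 0-100 attention score. The threshold is hypothetical.

ATTENTION_THRESHOLD = 60  # assumed cut-off between "relaxed" and "concentrating"

def to_command(attention: int) -> str:
    """Collapse a noisy attention score into the two states the app can use."""
    return "on" if attention >= ATTENTION_THRESHOLD else "off"

# A short stream of readings: relaxed at first, then concentrating.
readings = [20, 35, 41, 72, 88, 90]
commands = [to_command(r) for r in readings]
print(commands)  # ['off', 'off', 'off', 'on', 'on', 'on']
```

Everything the app does downstream hangs off these two states – which is precisely why the “mind control” claims overreach.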
“When we measure the collective action of lots of neurons, then we can get some good insight into the type of activity the brain is undertaking at any one time,” explains Russell Plunkett, a cognitive interface consultant working on the Google Glass mind control project. “The device is measuring the collective electrical activity of neurons close to the electrode – in this case the forehead.”
The sensor and software used here have huge limitations, making claims of mind control wide of the mark. Plunkett explains that the differing firing rates of neurons allow for simple binary controls.
“Neurons fire at different rates when performing different actions, firing at a quicker rate when performing cognitively demanding tasks and at a lower rate when less demanding tasks are undertaken. The rate of neuronal firing is a correlate of the cognitive demand placed on the brain. This means that we can determine whether the brain is highly active or less active by measuring the rate of neuronal firing.”
For ‘highly active’ and ‘less active’ read concentrating and relaxing. The technology used here is able to detect high and low levels of general brain activity and can then turn this into two distinct commands – do something and do nothing. Its use in early demonstrations with Glass is fairly simple: take a photo and post it to a pre-determined Twitter profile. A white line appears on the tiny screen in Google Glass; concentrate for long enough and that white line moves upwards and a photo is taken. Concentrate for a bit longer and the white line moves up again and the photo is uploaded to Twitter.
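The two-stage interaction just described can be modelled as a meter that climbs while the binary signal reads “concentrating” and falls back otherwise. The sketch below is an assumption-laden illustration – the thresholds, step sizes and action names are invented, not MindRDR’s actual values.

```python
# Hypothetical model of the "white line" interaction: sustained
# concentration drives a meter through two thresholds. All numbers
# and action names here are illustrative assumptions.

PHOTO_LEVEL = 5    # meter level at which the photo is taken (assumed)
TWEET_LEVEL = 10   # meter level at which it is posted (assumed)

def run_session(concentrating_stream):
    level = 0
    actions = []
    for concentrating in concentrating_stream:
        # The line climbs while concentrating, sinks back while relaxed.
        level = level + 1 if concentrating else max(0, level - 1)
        if level == PHOTO_LEVEL:
            actions.append("photo_taken")
        elif level == TWEET_LEVEL:
            actions.append("posted_to_twitter")
    return actions

# Ten consecutive "concentrating" readings drive the meter through both stages.
print(run_session([True] * 10))  # ['photo_taken', 'posted_to_twitter']
```

Note that a lapse in concentration part-way through simply lets the line sink back down, which is why the demo asks for a sustained effort rather than a single spike.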
IT ALL MAKES SENSOR
The sensor works by comparing the electrical activity at the site of the sensor with a separate reference site, in this case a clip attached to the earlobe. With the reference point established, the technology is then able to tune into electrical signals coming from the brain. When the user is concentrating the signal frequency is high and the amplitude low – these are known as beta waves. When relaxing, the frequency is low and the amplitude high – alpha waves. The clear difference between these two signals is what makes the system work.
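The alpha/beta distinction can be illustrated with a few lines of signal processing: estimate the power in the alpha (8–12 Hz) and beta (13–30 Hz) bands with an FFT and label whichever dominates. The band edges are textbook conventions; Neurosky’s actual on-chip processing is proprietary, so this is a rough sketch of the principle, not the product.

```python
# Rough illustration of alpha vs beta detection: compare FFT band
# power in the two frequency ranges. Band edges are standard EEG
# conventions; the sampling rate is an assumption for the example.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(signal, low, high):
    freqs = np.fft.rfftfreq(len(signal), d=1 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power[(freqs >= low) & (freqs <= high)].sum()

def classify(signal):
    alpha = band_power(signal, 8, 12)    # slow, high-amplitude: relaxed
    beta = band_power(signal, 13, 30)    # fast, low-amplitude: concentrating
    return "relaxing" if alpha > beta else "concentrating"

# Synthetic one-second signals: a strong 10 Hz wave vs a weaker 20 Hz wave.
t = np.arange(FS) / FS
relaxed = 2.0 * np.sin(2 * np.pi * 10 * t)   # alpha-like
focused = 0.5 * np.sin(2 * np.pi * 20 * t)   # beta-like
print(classify(relaxed), classify(focused))  # relaxing concentrating
```

Real scalp signals are of course far messier than these clean sine waves – which is exactly the noise problem the article describes.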
Behind all the slick-looking tech is, predictably, some less slick-looking code
The simplicity of the system means it is well suited for wider public use, something EEG has struggled with in the past. But its simplicity is also a limitation: unable to understand more complex goings-on in the brain, the technology is unlikely to offer especially sophisticated controls for Google Glass.
“If research-grade sensitivity is required, such as detecting individual responses to noises for example, then there has to be a lot more control of possible electrical interference: electrical shielding, reduced movements, hair abrading and electrode gels,” Plunkett explains.
The instantly recognisable EEG cap, with its multitude of electrodes, is both more sophisticated and more cumbersome. By focussing on different parts of the brain the EEG cap is able to identify the source location of the electrical signal. Such accuracy is essential for scientific research, but not for consumer mind control technology.
“For this device, that level of accuracy is not required as we are getting a general measure of the level of activation at any one time rather than specific responses to stimuli. This means that we can have a single electrode against the skin with no special preparation and still give us good enough signal to take a measure of mental activation,” says Plunkett.
EEG has been the subject of scientific research for decades and it is hoped that by coupling it with technology like Google Glass it could eventually find everyday use. One example is helping people with locked-in syndrome. The condition paralyses voluntary muscle movement while leaving the person otherwise cognitively intact. So while someone with locked-in syndrome can fully comprehend the world around them, they are unable to react to it.
“There are other devices that allow interaction between someone with locked-in syndrome and the external environment, however it is felt that this device has possibilities to be very effective in aiding this interaction,” Plunkett claims. “As the Google Glass is directly in line of sight and the interface is a simple binary of concentration and relaxation, we would like to think that this could be suitable for use as a tool to aid interaction.”
This is the actuality of mind control technology, not the science fiction version. It isn’t trying to break new frontiers; the technology is widely available and ready for the general public to use. The company behind the project has even put the code for its software online for anyone to use and adapt.
“There is no accurate way of reading someone’s individual thoughts, let alone with this technology,” Plunkett happily admits.