This computer is learning to read your mind | DIY Neuroscience, a TED series


Translator: Joseph Geni
Reviewer: Krystian Aparta

Greg Gage: Mind-reading. You’ve seen this in sci-fi movies: machines that can read our thoughts. However, there are devices today that can read the electrical activity from our brains. We call this the EEG. Is there information contained in these brainwaves? And if so, could we train a computer to read our thoughts? My buddy Nathan has been working to hack the EEG to build a mind-reading machine.

[DIY Neuroscience]
So this is how the EEG works. Inside your head is a brain, and that brain is made out of billions of neurons. Each of those neurons sends an electrical message to each other. These small messages can combine to make an electrical wave that we can detect on a monitor. Now traditionally, the EEG can tell us large-scale things, for example if you’re asleep or if you’re alert. But can it tell us anything else? Can it actually read our thoughts? We’re going to test this, and we’re not going to start with some complex thoughts. We’re going to do something very simple. Can we interpret what someone is seeing using only their brainwaves?

Nathan’s going to begin by placing electrodes on Christy’s head.
Nathan: My life is tangled.

(Laughter)

GG: And then he’s going to show her a bunch of pictures from four different categories.

Nathan: Face, house, scenery and weird pictures.
GG: As we show Christy hundreds of these images, we are also capturing the electrical waves onto Nathan’s computer. We want to see if we can detect any visual information about the photos contained in the brainwaves, so when we’re done, we’re going to see if the EEG can tell us what kind of picture Christy is looking at, and if it does, each category should trigger a different brain signal.

OK, so we collected all the raw EEG data, and this is what we got.
It all looks pretty messy, so let’s arrange them by picture. Now, still a bit too noisy to see any differences, but if we average the EEG across all image types by aligning them to when the image first appeared, we can remove this noise, and pretty soon, we can see some dominant patterns emerge for each category.
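The episode doesn’t show the arithmetic behind this averaging step, but it is the standard evoked-potential trick: activity that isn’t time-locked to the image onset tends to cancel out when many aligned trials are averaged. Below is a minimal NumPy sketch of that idea; the variable names, array shapes and fake data are assumptions for illustration, not Nathan’s actual code.

```python
import numpy as np

def average_erp(trials_by_category):
    """Average all trials of each category to suppress activity that is not
    time-locked to image onset (the classic evoked-potential trick)."""
    # trials has shape (n_trials, n_samples): one row per image presentation,
    # already aligned so that sample 0 is the moment the image appeared.
    return {category: trials.mean(axis=0)
            for category, trials in trials_by_category.items()}

# Fake data: 200 noisy trials per category, 300 samples each.
rng = np.random.default_rng(0)
fake_trials = {"face":    rng.normal(0.0, 10.0, (200, 300)),
               "scenery": rng.normal(0.0, 10.0, (200, 300))}
erps = average_erp(fake_trials)
print(erps["face"].shape)  # (300,) -- one averaged waveform per category
```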
Now the signals all still look pretty similar. Let’s take a closer look. About a hundred milliseconds after the image comes on, we see a positive bump in all four cases, and we call this the P100, and what we think that is, is what happens in your brain when you recognize an object. But damn, look at that signal for the face. It looks different than the others. There’s a negative dip about 170 milliseconds after the image comes on. What could be going on here? Research shows that our brain has a lot of neurons that are dedicated to recognizing human faces, so this N170 spike could be all those neurons firing at once in the same location, and we can detect that in the EEG.
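One way to put a number on that face-specific dip is to measure the mean amplitude of each averaged waveform inside a window around 170 ms. The sketch below does this with made-up data; the 500 Hz sampling rate, the 130-200 ms window and the injected dip are illustrative assumptions, not values from the episode.

```python
import numpy as np

def mean_amplitude(erp, fs_hz, t_start_s, t_end_s):
    """Mean amplitude of an averaged waveform inside a time window,
    e.g. roughly 130-200 ms after onset to capture an N170-like dip."""
    i0, i1 = int(t_start_s * fs_hz), int(t_end_s * fs_hz)
    return float(erp[i0:i1].mean())

# Illustrative only: 500 Hz sampling and made-up averaged waveforms.
fs = 500
rng = np.random.default_rng(1)
erps = {"face": rng.normal(0.0, 1.0, 300), "house": rng.normal(0.0, 1.0, 300)}
erps["face"][65:100] -= 3.0  # fake N170-style dip around 130-200 ms

for category, erp in erps.items():
    print(category, mean_amplitude(erp, fs, 0.13, 0.20))
# The clearly more negative value for "face" is the face-specific effect
# described above.
```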
So there are two takeaways here. One, our eyes can’t really detect the differences in patterns without averaging out the noise, and two, even after removing the noise, our eyes can only pick up the signals associated with faces. So this is where we turn to machine learning.

Now, our eyes are not very good at picking up patterns in noisy data, but machine learning algorithms are designed to do just that, so could we take a lot of pictures and a lot of data and feed it in and train a computer to be able to interpret what Christy is looking at in real time? We’re trying to decode the information that’s coming out of her EEG in real time and predict what it is that her eyes are looking at.
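The talk never shows Nathan’s decoder itself, so the following is only a hedged sketch of the general approach: treat every single-trial epoch as a feature vector and train an off-the-shelf classifier to predict which category was on screen. The data, labels and the choice of scikit-learn logistic regression are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fake single-trial data standing in for the recorded epochs:
# X has one row per image presentation (flattened channels x samples),
# y holds the category that was shown on that trial.
rng = np.random.default_rng(0)
n_trials, n_features = 400, 300
X = rng.normal(0.0, 1.0, (n_trials, n_features))
y = rng.integers(0, 4, n_trials)   # 0=face, 1=house, 2=scenery, 3=weird
X[y == 0, 65:100] -= 1.0           # inject a weak face-like signature

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print("decoding accuracy:", scores.mean())  # chance level here is ~0.25
```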
And if it works, what we should see is every time that she gets a picture of scenery, it should say scenery, scenery, scenery, scenery. A face — face, face, face, face, but it’s not quite working that way, is what we’re discovering.

(Laughter)

OK.

Director: So what’s going on here?

GG: We need a new career, I think.

(Laughter)

OK, so that was a massive failure.
But we’re still curious: How far could we push this technology? And we looked back at what we did. We noticed that the data was coming into our computer very quickly, without any timing of when the images came on, and that’s the equivalent of reading a very long sentence without spaces between the words. It would be hard to read, but once we add the spaces, individual words appear and it becomes a lot more understandable. But what if we cheat a little bit? By using a sensor, we can tell the computer when the image first appears. That way, the brainwave stops being a continuous stream of information, and instead becomes individual packets of meaning. Also, we’re going to cheat a little bit more, by limiting the categories to two. Let’s see if we can do some real-time mind-reading.
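As a rough illustration of the "add the spaces" idea, the sketch below uses the onset markers from the sensor to cut the continuous recording into fixed-length packets (epochs) and then classifies each packet as face or scenery. The sampling rate, epoch length, fake signal and simple classifier are all assumptions, not the setup used in the episode.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 500                        # assumed sampling rate (Hz)
EPOCH_SAMPLES = int(0.5 * FS)   # keep 500 ms of EEG after each image onset

def epoch(continuous_eeg, onset_samples):
    """Cut the continuous stream into one fixed-length packet per onset:
    the 'spaces between the words' trick from the talk."""
    return np.stack([continuous_eeg[s:s + EPOCH_SAMPLES]
                     for s in onset_samples])

# Fake one-minute recording with an image onset every second.
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, 60 * FS)
onsets = np.arange(FS, 59 * FS, FS)
labels = np.array(["face", "scenery"] * (len(onsets) // 2))
X = epoch(eeg, onsets)
X[labels == "face", 65:100] -= 2.0  # inject a fake face-like dip

# Train on the first half, then "read the mind" trial by trial on the rest.
half = len(X) // 2
clf = LogisticRegression(max_iter=1000).fit(X[:half], labels[:half])
for prediction in clf.predict(X[half:])[:6]:
    print(prediction)  # face / scenery, one guess per image
```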
In this new experiment, we’re going to constrict it a little bit more so that we know the onset of the image and we’re going to limit the categories to “face” or “scenery.”

Nathan: Face. Correct. Scenery. Correct.

GG: So right now, every time the image comes on, we’re taking a picture of the onset of the image and decoding the EEG. It’s getting it correct.

Nathan: Yes. Face. Correct.

GG: So there is information in the EEG signal, which is cool. We just had to align it to the onset of the image.

Nathan: Scenery. Correct. Face. Yeah.

GG: This means there is some information there, so if we know at what time the picture came on, we can tell what type of picture it was, possibly, at least on average, by looking at these evoked potentials.

Nathan: Exactly.
GG: If you had told me at the beginning of this project this was possible, I would have said no way. I literally did not think we could do this. Did our mind-reading experiment really work? Yes, but we had to do a lot of cheating. It turns out you can find some interesting things in the EEG, for example if you’re looking at someone’s face, but it does have a lot of limitations. Perhaps advances in machine learning will make huge strides, and one day we will be able to decode what’s going on in our thoughts. But for now, the next time a company says that they can harness your brainwaves to be able to control devices, it is your right, it is your duty to be skeptical.

100 thoughts on “This computer is learning to read your mind | DIY Neuroscience, a TED series”

  • How is this science? You limit the expected result to what you already knew without using AI – face reaction and "everything else" – which you call "scenery". That, my friends, is not science. Your model will say scenery to everything that isn't a face. You were already able to do that before even trying machine learning algorithms. What you might have tried instead is getting a much more detailed EEG scan and trying it on that (more sensors, higher sampling frequency).

  • To think that not so long ago we were only hunters and gatherers. The future is increasingly terrifying and dehumanized, you materialistic, bloody consumer.

  • Scary enough 😳 imagine a century later, this tech would get to the full level (the machine becomes a mind reader) and someone uses it to read ours 🧠

  • Got to go and join the Amish before they start scanning everyone's thoughts and making sure everyone is thinking politically correct thoughts.

  • With quantum computers, there might be a possibility to take an EEG or some more sophisticated version of it (with a lot more data points) and calculate back the currents that created them. It will probably also need AI/machine learning.
    I just made this up, but it seems to me like the very thing quantum computers are good at, similar to many-particle simulations, which I know they are very good at (or will be). There is also the holographic principle (yes, I watch SpaceTime), so it's not unseen in nature for the information of a volume to be related to the surface area containing the volume (brain).

    Also, I'm stoned, have a nice day.

  • I thought our phones were already reading our minds. I always get ads on my apps or recommendations on subjects I had only thought of, not spoken of, and of course I always get the ones on things I had said.

  • This is a classic neural network problem – a result and data connected to the result. Just set up the network and train it. Stop trying to preprocess the data. Let the network figure it out. This is trivial compared to identifying cancers from x-ray data.

  • Really cool work but as you said I can tell if it's a face or something else just by that big dip… It's not really doing anything beyond what we could do before.

  • Why is the "picture ready" block in the bottom left corner flashing differently for each category? I mean that black & white flashing used in the last test to "cheat"; to me it seems like you told the category to the computer with the flashing sequence… Instead of a "start monitor" signal it should be

  • Does the computer have a "time" activity control? I am pretty sure that the human brain knows what is going to be on the screen before it appears…

  • You know they say that people in comas think and have conscious thoughts but just can't say them or move. Imagine how helpful this would be to them if the computer could fully detect the people's thoughts. An idea for the future.

  • wtf, I was thinking exactly the thing Mr. Gage says at the end!!!!!! Be careful. (We also need to see that we cannot stop the technology transition.)

  • But we could see with our eyes the difference between the data from the face picture and the data from the scenery. It was a big difference.

  • For the machine learning, it might be interesting to try LSTMs or GRUs and maybe go into a deep memory network. I am wondering how advanced their machine learning is.

  • Interesting, I have been working for a while now on something similar but a bit deeper and more complex. I also use an EEG (OpenEEG Cython 8 channel) to monitor brain waves but plan to combine it with other biometric and sensory data (EKG, skin resistance, head/eye-tracking, environment audio and video) to improve my cognitive performance and, in the long run, to automate certain cognitive processes.

  • EEG says something. But the resolution is VERY LOW. If you want native resolution, you need a connector to each connecting synapse. We have around 80 billion or more neurons. Each neuron has about 7000 synapse connections, throughout the brain in 3D. And no, EEG will not give you such information about thoughts. But it can give us super low resolution information on which part of our brain is active. Using neural networks (technology borrowed from how our brain works) to reverse pattern match, it is possible. But only simple things can be done, like in this video. Converting EEG to a realtime "neural image" is…. far from possible.

  • You tried to decode in an afternoon what her brain coded for itself through trial and error for her entire life. Of course it didn't work. Also, there should be constant noise activity in one's brain even while idle. Showing the same face twice may also cause different reactions and brain patterns, which may mean the only signal you were able to pick up was from a lower-function facial recognition response.

  • With 6 months of R&D I see this being extremely useful to police. It should be able to eventually tell between a face you know and one you don't. Imagine how useful it would be if police could just say "If you've never seen this person before, take this test." Or even when trying to find a criminal, don't ask the person which picture it was. Just read their brain so it can't be wrong.

  • The difference between faces and the others is already so obvious that you don't need ML to do this; try two other types of photos other than faces.

  • Time break/closure + noise stacking has been done for decades in seismic data defining oil field limits. Looks like a lot of the same techniques are needed to gather data here.

  • So the most important part of this project was machine learning…

    You should hire a good machine learning engineer; then you can classify even 100 categories.

    (btw I am available next month) 😉😁😄😂

  • And… who thinks this won't be abused? Seriously here. Never thought I would see someone seriously demonstrating their research into making yet another doomsday device…

  • You were explaining before that there is a visible dip after the P100 for faces. So machine learning is probably not needed for spotting that.
    I think it is a mistake to average out all sensors. A lot of localisation information is lost. But I see great potential in applying machine learning here.
    There is amazing progress in deep neural networks learning to remove noise on images for example.
    Looking forward.

  • Even better, look up Yukiyasu Kamitani, a scientist who seems to have come up with AI code that does a surprisingly good job of relaying the image the subject is seeing. You can also search for one of the articles using the search: brain scan can read images. Good for humanity? That's another question altogether.

  • This HAS been done. The computer could even reproduce the image the watcher was looking at.
    It was blurry and it required the watcher to look at the picture for more time, but they said that with faster computers and more brain sensors it WAS achievable… And it was a TED talk.

  • "one day" … not with the traditional EEG, with each sensor you cover so huge areas what you are never able to read each single neuron. it will always be this kind of guesswork.

  • Fascinating… But for now they are categorising images into certain similar patterns like face, scenery and so on…… But what accounts for the specificity of these images, like types of faces or colour of faces or weather in scenery?

  • 1. add more test subjects
    2. add more difficult images, e.g. men's faces vs women's faces (no house, no scenery)
    3. focus on the occipital lobe; the subject can only use one of his/her dominant eyes
    4. add a splitting image (e.g. solid orange colour) for 1-2 secs after each face picture

  • Scientists will need far more detailed brain measurement/diagnostic devices and more efficient machine-learning algorithms for thoughts to be interpreted properly by a computer. If a neuroimaging device could image and measure every single neuron, and every synapse, and measure the electrical signal at each of those in real-time, and all of that information was digitized and capable of being stored in RAM, then a sophisticated machine-learning program should be able to find way more useful data about our thoughts. There is another TED talk from Summer 2018 in which a woman, who is a neuroscientist and engineer, talks about her team's new brain-imaging device. This device uses red-wavelength laser light combined with holography to allow non-intrusive imaging and measurement of individual neurons and synapses in real-time, it has extreme spatial resolution and incredible speed of measurement. Very soon we'll be living in a world where people's entire central nervous systems can be imaged, measured, and interpreted in great detail, if her team's devices are paired with machine-learning algorithms and deep-learning neural networks. This is simultaneously incredibly promising for the advancement of neuroscience, psychiatry, psychology, psychopharmacology, medicine, and deeply frightening if one contemplates the less ethical applications of such technology.

  • Oh, the Asian guy is a Korean chiropractor. I learned Chinese medicine history from him. And to study neuroscience you should speak with scientists. Don't make your way with outsiders if you want to find truth. Every study related to chiropractors failed in Korea. A lot of Korean physicists who studied with chiropractors gave up before 1 year. Especially if you want to study neuroscience within another department, contact me. I will give you my lecture note that black holes and brains have the same network processing. And I can give you my lecture source too. This is the topic of neuroscience today.

  • MIND CONTROL: there are 20 years between the scientists' discovery and the date they present it to the public.
    The research on MIND CONTROL is more important than AI or robotics for them!
    They work for the devil and they try to control humanity.
    They can read the mind, so they can send the kind of mind they want into a brain.

  • Invasive technology that will inevitably be abused. These researchers have an infuriating sense of entitlement; they believe that they have privileged access to trespass into the private world of the human brain. We're in danger!

  • I have a suggestion: you're feeding her the images too fast. Give them 1 to 2 mins for each one if you don't want to cheat next time. I like this experiment; try doing it again.

  • What would happen if they changed the subject? Probably the waves would be different for every different human, wouldn't they?

  • It is not mind reading if you can only retrieve stored thoughts that are tied to a signal. Mind reading means that you capture these thoughts yourself, and this is not what you deal with in your experiment.

    You are already having a computer recall a stored signal and not recognizing it yourself.

  • This might be a stupid question, but do the waves change from person to person in the way they see an image, or do they remain similar?

  • 0:14 I hear "machines future that can read our thoughts", but in the translation it is "machines that can read our thoughts", even on TED.com too. Can anybody help me explain this issue? Thanks.

  • This, if it advances, is a serious threat to the last vestiges of privacy. Do not try to say there are no private businesses, corporations, governments, or those with an ulterior motive who would use this in the worst possible way. There is a reason we have the term thought police, and it isn't because privacy is respected.

  • Now China can hopefully manufacture this into thought surveillance to ensure no one thinks the wrong thing. Thanks, science!!!
