Kenneth Shock met his wife on eHarmony. They texted for 19 hours straight the first day. When she finally heard his voice on the phone, she says that was it — love at first sound.

ALS started taking that voice away. By early 2026, Kenneth could barely mouth words. Most conversations happened without him.

In January 2026, Kenneth received a Neuralink brain implant. In March, he said four words that nobody in the room expected to hear so clearly, so soon: "I'm talking to you with my mind." He wasn't being poetic. He meant it literally. His brain signals were being decoded in real time, converted to phonemes, synthesized in his original voice — the voice his wife fell in love with — and played out loud. No mouth movement required [1].
What's Actually Happening Here
Let's be precise about what Neuralink's Voice Study is and what it isn't. This isn't a text-to-speech hack where you type with your eyes, and it's not a button-press system. The N1 implant — a chip about the size of a large coin, inserted flush with the skull by a surgical robot — records electrical activity from motor neurons in the brain's speech cortex. When Kenneth thinks about speaking, those neurons fire in patterns. The system translates those patterns into predicted phonemes, assembles the phonemes into words, and plays them aloud in audio synthesized from recordings of his own voice made years earlier [1].

Right now, Kenneth still tends to quietly mouth words as he thinks them — it helps the model. But that's temporary. According to Neuralink's president DJ Seo, the team is actively working to decode pure imagined speech, with no lip movement at all. The goal is thought-to-speech that feels instant [1].
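To make the "patterns → phonemes → words" step concrete, here is a minimal sketch of how a per-frame phoneme decoder's output might be collapsed into a phoneme sequence. Neuralink has not published its model or decoding method; everything below — the phoneme set, the fake probabilities, and the use of CTC-style greedy decoding — is an illustrative assumption, not their implementation.

```python
import numpy as np

# Hypothetical phoneme inventory; "_" is the CTC blank token.
# (Illustrative only — not Neuralink's actual label set.)
PHONEMES = ["_", "HH", "AY", "M"]

def greedy_ctc_decode(frame_probs: np.ndarray) -> list[str]:
    """Pick the most likely phoneme per frame, then collapse
    consecutive repeats and drop blanks (standard CTC greedy decoding)."""
    best = frame_probs.argmax(axis=1)
    out: list[str] = []
    prev = None
    for idx in best:
        if idx != prev and idx != 0:  # skip repeats and blank frames
            out.append(PHONEMES[idx])
        prev = idx
    return out

# Toy per-frame probabilities (6 time frames x 4 classes), standing in
# for what a neural decoder might emit for a short utterance.
probs = np.array([
    [0.10, 0.80, 0.05, 0.05],  # HH
    [0.10, 0.70, 0.10, 0.10],  # HH (repeat, collapsed away)
    [0.80, 0.10, 0.05, 0.05],  # blank
    [0.05, 0.05, 0.85, 0.05],  # AY
    [0.05, 0.05, 0.10, 0.80],  # M
    [0.90, 0.03, 0.03, 0.04],  # blank
])
print(greedy_ctc_decode(probs))  # -> ['HH', 'AY', 'M']
```

The collapsed phoneme string would then feed a language model and a voice-cloning synthesizer trained on old recordings — the stages the article describes downstream of decoding.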





