Can privacy coexist with technology that reads and changes brain activity?

Gertrude the pig rooted around a straw-filled pen, oblivious to the cameras and onlookers — and the 1,024 electrodes eavesdropping on her brain signals. Each time the pig’s snout found a treat in a researcher’s hand, a musical jingle sounded, indicating activity in her snout-controlling nerve cells.

Those beeps were part of the big reveal on August 28 by Elon Musk’s company Neuralink. “In a lot of ways, it’s kind of like a Fitbit in your skull with tiny wires,” said Musk, founder of Tesla and SpaceX, of the new technology.

Neuroscientists have been recording nerve cell activity from animals for decades. But the ambitions of Musk and others to link humans with computers are shocking in their reach. Future-minded entrepreneurs and researchers aim to listen in on our brains and perhaps even reshape thinking. Imagine being able to beckon our Teslas with our minds, Jedi-style.

Some scientists called Gertrude’s introduction a slick publicity stunt, full of unachievable promises. But Musk has surprised people before. “You can’t argue with a guy who built his own electric car and sent it to orbit around Mars,” says Christof Koch, a neuroscientist at the Allen Institute for Brain Science in Seattle.

Whenever Gertrude’s snout touched something, nerve cells in her brain fired electrical signals detected by an implanted device (signals shown as wavy lines on black). Similar technology may one day help people with paralysis or brain disorders. Credit: Neuralink

Whether Neuralink will eventually merge brains and Teslas is beside the point. Musk isn’t the only dreamer chasing neurotechnology. Advances are coming quickly and span a variety of approaches, including external headsets that may be able to distinguish between hunger and boredom; implanted electrodes that translate intentions to speak into real words; and bracelets that use nerve impulses for typing without a keyboard.

Today, paralyzed people are already testing brain-computer interfaces, a technology that connects brains to the digital world (SN: 11/16/13, p. 22). With brain signals alone, users have been able to shop online, communicate and even use a prosthetic arm to sip from a cup (SN: 6/16/12, p. 5). The ability to hear neural chatter, understand it and perhaps even modify it could change and improve people’s lives in ways that go well beyond medical treatments. But these abilities also raise questions about who gets access to our brains and for what purposes.

Because of neurotechnology’s potential for both good and bad, we all have a stake in shaping how it’s created and, ultimately, how it is used. But most people don’t have the chance to weigh in, and only find out about these advances after they’re a fait accompli. So we asked Science News readers their views about recent neurotechnology advances. We described three main ethical issues — fairness, autonomy and privacy. Far and away, readers were most concerned about privacy.

The idea of allowing companies, or governments, or even health care workers access to the brain’s inner workings spooked many respondents. Such an intrusion would be the most important breach in a world where privacy is already rare. “My brain is the only place I know is truly my own,” one reader wrote.

Technology that can change your brain — nudge it to think or behave in certain ways — is especially worrisome to many of our readers. A nightmare scenario raised by several respondents: We turn into zombies controlled by others.

When these types of brain manipulations get discussed, several sci-fi scenarios come to mind, such as memories being wiped clean in the poignant 2004 film Eternal Sunshine of the Spotless Mind; ideas implanted into a person’s mind, as in the 2010 movie Inception; or people being tricked into thinking a virtual world is the real thing, as in the mind-bending 1999 thriller The Matrix.

Today’s tech capabilities are nowhere near any of those fantasies. Still, “the here and now is just as interesting … and just as morally problematic,” says neuroethicist Timothy Brown of the University of Washington in Seattle. “We don’t need The Matrix to get our dystopia.”

The ability to nudge brain activity in certain directions raises ethical questions. Credit: Julia Yellow

Today, codes of ethics and laws govern research, medical treatments and certain aspects of our privacy. But we have no comprehensive way to handle the privacy violations that might arise with future advances in brain science. “We are all flying by the seat of our pants here,” says Rafael Yuste, a neurobiologist at Columbia University.

For now, ethics questions are being taken up in a piecemeal way. Academic researchers, bioethicists and scientists at private companies, such as IBM and Facebook, are discussing these questions among themselves. Large brain-research consortiums, such as the U.S. BRAIN Initiative (SN: 2/22/14, p. 16), include funding for projects that address privacy concerns. Some governments, including Chile’s national legislature, are starting to address concerns raised by neurotechnology.

With such disjointed efforts, it’s no surprise that no consensus has surfaced. The few answers that exist are as varied as the people doing the asking.

Reading thoughts

The ability to pull information directly from the brain — without relying on speaking, writing or typing — has long been a goal for researchers and doctors intent on helping people whose bodies can no longer move or speak. Already, implanted electrodes can record signals from the movement areas of the brain, allowing people to control robotic prostheses.
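
The core of many such systems is simple in outline: neural activity is binned into firing rates, and a trained model maps those rates onto intended movement. The sketch below is purely illustrative, using synthetic data and a plain ridge-regression decoder rather than any lab’s actual pipeline, but it captures the central idea of reading movement intent out of population activity:

```python
import numpy as np

# Illustrative only: binned spike counts from 96 simulated electrodes
# are mapped to 2-D movement velocity with a linear decoder. The data
# and weights are synthetic; real interfaces add filtering, nonlinear
# models and frequent recalibration on top of this linear core.

rng = np.random.default_rng(0)
n_channels, n_bins = 96, 5000            # electrodes, 20-ms time bins

true_weights = rng.normal(size=(n_channels, 2))
spikes = rng.poisson(lam=5.0, size=(n_bins, n_channels)).astype(float)
velocity = spikes @ true_weights + rng.normal(scale=2.0, size=(n_bins, 2))

# Fit ridge regression: W = (X^T X + lambda*I)^(-1) X^T Y
lam = 1.0
W = np.linalg.solve(spikes.T @ spikes + lam * np.eye(n_channels),
                    spikes.T @ velocity)

predicted = spikes @ W
r = np.corrcoef(predicted[:, 0], velocity[:, 0])[0, 1]
print(f"decoded vs. actual velocity correlation: {r:.2f}")
```

However elaborate the deployed systems get, the premise is the same one this toy model rests on: intended movement is legible in the combined activity of many recorded neurons.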

In January 2019, researchers at Johns Hopkins University implanted electrodes in the brain of Robert “Buz” Chmielewski, who was left quadriplegic after a surfing accident. With signals from both sides of his brain, Chmielewski controlled two prosthetic arms to use a fork and a knife simultaneously to feed himself, researchers announced in a press release on December 10.

Robert “Buz” Chmielewski, who has had quadriplegia since his teens, uses brain signals to feed himself some cake. Via electrodes implanted in both sides of his brain, he controls two robotic arms: One manipulates the knife and the other holds the fork.

Other research has decoded speech from the brain signals of a paralyzed man who is unable to speak. When the man saw the question, “Would you like some water?” on a computer screen, he responded with the text message, “No, I am not thirsty,” using only signals in his brain. This feat, described November 19 at a symposium hosted by Columbia University, is another example of the tremendous progress under way in linking brains to computers.
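
In outline, such speech decoders are supervised classifiers: features extracted from neural recordings during attempted speech are matched against patterns learned for each word in a vocabulary. The toy sketch below uses synthetic features, an invented vocabulary and a nearest-centroid classifier; it shows the shape of the problem, not the actual system described at the symposium:

```python
import numpy as np

# Hypothetical word decoder: each attempted word yields a feature
# vector (think high-gamma power per electrode); classification picks
# the vocabulary word whose learned "centroid" is closest. Real speech
# BCIs layer recurrent networks and language models on this idea.

rng = np.random.default_rng(1)
vocab = ["no", "yes", "water", "thirsty", "I", "am", "not"]
n_features = 128

# Each word has an underlying (synthetic) neural pattern.
templates = {w: rng.normal(size=n_features) for w in vocab}

def trial(word):
    # One simulated attempt at a word: its pattern plus noise.
    return templates[word] + rng.normal(scale=0.8, size=n_features)

# "Training": average 40 noisy trials per word into a centroid.
centroids = {w: np.mean([trial(w) for _ in range(40)], axis=0)
             for w in vocab}

def decode(features):
    # Nearest-centroid classification over the vocabulary.
    return min(vocab, key=lambda w: np.linalg.norm(features - centroids[w]))

decoded = [decode(trial(w)) for w in ["no", "I", "am", "not", "thirsty"]]
print(" ".join(decoded))  # expected: "no I am not thirsty"
```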

“Never before have we been able to get that kind of information without interacting with the periphery of your body, that you had to voluntarily activate,” says Karen Rommelfanger, a neuroethicist at Emory University in Atlanta. Speaking, sign language and writing, for instance, “all require several steps of your decision making,” she says.

Today, efforts to extract information from the brain generally require bulky equipment, intense computing power and, most importantly, a willing participant, Rommelfanger says. For now, an attempt to break into your mind could easily be thwarted by closing your eyes, wiggling your fingers or even getting drowsy.

What’s more, “I don’t believe that any neuroscientist knows what a mind is or what a thought is,” she says. “I am not concerned about mind reading, from the existing terrain of technologies.”

But that terrain may change quickly. “We are getting very, very close” to having the ability to pull private information from people’s brains, Yuste says, pointing to studies that have decoded what a person is looking at and what words they hear. Scientists from Kernel, a neurotech company near Los Angeles, have invented a helmet, just now hitting the market, that is essentially a portable brain scanner that can pick up activity in certain brain areas.

For now, companies have only our behavior — our likes, our clicks, our purchase histories — to build eerily accurate profiles of us and estimate what we’ll do next. And we let them. Predictive algorithms make good guesses, but guesses all the same. “With this neural data gleaned from neurotechnology, it may not be a guess anymore,” Yuste says. Companies will have the real thing, straight from the source.

Even subconscious thoughts might be revealed with further technological improvements, Yuste says. “That is the ultimate privacy fear, because what else is left?”

Rewrite, revise

Technology that can change the brain’s activity already exists today, as medical treatments. These tools can detect and stave off a seizure in a person with epilepsy, for instance, or stop a tremor before it takes hold.
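
These closed-loop devices follow a simple cycle: record activity, detect an abnormal signature, stimulate to interrupt it. The sketch below illustrates that cycle with a rolling power threshold on synthetic data; real implanted devices rely on validated, device-specific detection algorithms rather than anything this crude:

```python
import numpy as np

# Bare-bones closed-loop sketch: monitor rolling signal power and
# "stimulate" when it crosses a threshold. Everything here (sampling
# rate, threshold, the synthetic burst) is invented for illustration.

rng = np.random.default_rng(2)
fs = 250                                  # samples per second
signal = rng.normal(scale=1.0, size=10 * fs)
signal[5 * fs : 6 * fs] *= 6.0            # synthetic abnormal burst

window = fs // 2                          # 0.5-second analysis window
threshold = 9.0                           # mean-power trigger level

for start in range(0, len(signal) - window + 1, window):
    power = np.mean(signal[start : start + window] ** 2)
    if power > threshold:
        print(f"t={start / fs:.1f}s: abnormal activity -> stimulate")
```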

Researchers are testing systems for obsessive-compulsive disorder, addiction and depression (SN: 2/16/19, p. 22). But the power to precisely change a functioning brain directly — and as a result, a person’s behavior — raises worrisome questions.

The desire to persuade, to change a person’s mind, is not new, says Marcello Ienca, a bioethicist at ETH Zurich. Winning hearts and minds is at the core of advertising and politics. Technology capable of changing your brain’s activity with just a subtle nudge, however, “brings current manipulation risks to the next level,” Ienca says.

What happens if such influence finds a place outside the medical arena? A doctor might use precise brain-modifying technology to ease anorexia’s grip on a young person, but the same technology might be used for money-making purposes: “Imagine walking into McDonald’s and suddenly you have an irresistible urge for a cheeseburger (or 10),” one of our readers wrote.

Is the craving caused by real hunger? Or is it the result of a tiny neural nudge just as you drove near the golden arches? That neural intrusion could spark uncertainty over where that urge came from, or perhaps even escape notice altogether. “This is super dangerous,” Yuste says. “The minute you start stimulating the brain, you are going to be changing people’s minds, and they will never know about it, because they will interpret it as ‘that’s me.’ ”

Precise brain control of people is not possible with existing technology. But in a hint of what may be possible, scientists have already created visions inside mouse brains (SN: 8/17/19, p. 10). Using a technique called optogenetics to stimulate small groups of nerve cells, researchers made mice “see” lines that weren’t there. Those mice behaved exactly as if their eyes had actually seen the lines, says Yuste, whose research group performed some of these experiments. “Puppets,” he calls them.

Once researchers or companies can change our brain activity, will neural privacy require special protections? Credit: Julia Yellow

What to do?

As neurotechnology marches ahead, scientists, ethicists, companies and governments are looking for answers on how, or even whether, to regulate brain technology. For now, those answers depend entirely on who is asked. And they come against a backdrop of increasingly invasive technology that we’ve become surprisingly comfortable with.

We allow our smartphones to monitor where we go, what time we fall asleep and even whether we’ve washed our hands for a full 20 seconds. Couple that with the digital breadcrumbs we actively share about the diets we try, the shows we binge and the tweets we love, and our lives are an open book.

Those details are more powerful than brain data, says Anna Wexler, an ethicist at the University of Pennsylvania. “My e-mail address, my notes app and my search engine history are more reflective of who I am as a person — my identity — than our neural data may ever be,” she says.

“How would we know that what we thought or felt came from our own brains, or whether it was put there by someone else?”

It’s too early to worry about privacy invasions from neurotechnology, Wexler argues, a position that makes her an outlier. “Most of my colleagues would tell me I’m crazy.”

At the other end of the spectrum, some researchers, including Yuste, have proposed strict privacy regulations that would treat a person’s neural data like their organs. Much as a liver can’t be taken out of a body without approval for medical purposes, neural data shouldn’t be extracted either. That viewpoint has found purchase in Chile, where lawmakers are considering whether to give neural data protections that would put it off-limits to companies.

Other experts fall somewhere in the middle. Ienca, for example, doesn’t want to see restrictions on personal freedom. People ought to have the choice to sell or give away their brain data for a product they like, or even for straight-up cash. “The human brain is becoming a new asset,” Ienca says, something that can generate profit for companies eager to mine the data. He calls it “neurocapitalism.”

And Ienca is fine with that. If a person is adequately informed — granted, a questionable if — then they are within their rights to sell their data, or exchange it for a service or product, he says. People ought to have the freedom to do what they like with their information.

General rules, checklists and regulations are not likely to be a good path forward, Rommelfanger says. “Right now, there are over 20 frameworks, guidelines, principles that have been developed since 2014 on how to handle neuroscience,” she says. Those often cover “mental privacy” and “cognitive liberty,” the freedom to control your own mental life.

Those guidelines are thoughtful, she says, but the technologies differ in what they’re capable of, and in their possible ethical repercussions. One-size-fits-all solutions don’t exist, Rommelfanger says.

Instead, each company or research group may need to work through ethical issues throughout the development process. She and colleagues have recently proposed five questions that researchers can ask themselves to begin thinking about these ethical issues, including privacy and autonomy. The questions ask people to consider how new technology might be used outside of a lab, for instance.

Moving forward on the technology to help people with mental illness and paralysis is an ethical imperative, Rommelfanger says. “More than my fear of a privacy violation, my fear is about diminished public trust that could undermine all of the good this technology could do.”

A lack of ethical clarity is unlikely to slow the pace of the coming neurotech rush. But thoughtful consideration of the ethics could help shape the trajectory of what’s to come, and help protect what makes us most human.
