by: Roger Dooley
WIRED ran an interesting piece that suggests increasingly invasive brain technologies will become a legal battleground. The more obvious areas have already been discussed here and elsewhere: using brain scans as lie detectors or to see if an individual recognizes someone or something (as part of a legal investigation, perhaps). That could be just the beginning, though.
But this isn’t just about reading minds; it’s also about bombarding them with messages or tweaking their chemistry. Transcranial magnetic stimulation, now used to treat severe depression, has been shown to artificially generate states of empathy and euphoria. And you’ve probably heard of propranolol, a drug that can help erase traumatic memories.
Let’s say you’ve been assaulted and you want to take propranolol to delete the memory. The state needs that memory to prosecute the assailant. Can it prevent you from taking the drug? “To a certain extent, memories are societal properties,” says Adam Kolber, a visiting professor at Princeton. “Society has always made claims on your memory, such as subpoenaing you.” Or what if you use transcranial stimulation to increase your empathy? Would you be required to disclose that? Could a judge throw you off a jury? Could the Army turn you away? [From WIRED: Clive Thompson on Why the Next Civil Rights Battle Will Be Over the Mind.]
There’s little doubt that a host of ethical and legal issues will arise as our brain-related technologies continue their rapid development. Thompson doesn’t even discuss cognitive enhancement drugs, a topic that I think is perhaps the most immediate brain science dilemma we’ll face as a society. Despite all the fuss about performance-enhancing drugs in sports, so far cognitive enhancers are getting a free pass. And, if they are mostly safe, perhaps they should. Wouldn’t we be better off as a society if we all were a bit sharper mentally? But what are the risks? Scientists, ethicists, and politicians will have a field day once a college student somewhere overdoses or takes the wrong drug in a misguided attempt to prepare for finals.
Fortunately, I don’t think neuromarketing is going to be a big part of this discussion. Despite the occasional burst of alarmist rhetoric, there doesn’t seem to be that much for neuroethics debaters to worry about. The neuromarketing practices used today and envisioned for the near future pose no danger to the subjects and are completely voluntary. Nobody is being forced to watch commercials in an fMRI machine.
The big concern of the neuro-alarmists has been that somehow neuromarketing insights would allow marketers to manipulate consumers to a far greater degree than in the past. For a variety of reasons, I think this is unlikely. People’s brains are too different, and advertising has been practiced for too long, for a new breed of super-ads to pop out of a few fMRI scans.
Still, neuromarketing could see some collateral damage if less benign applications of brain technology (like lie detection in legal disputes or interrogation of criminal suspects) come under heavy fire. Marketers should stay aware of the evolving ethical debates and, when necessary, actually participate. The promise of neuromarketing is positive: less waste in advertising, and products that are more satisfying. That message mustn’t get lost if other applications of brain science ignite controversy.