
In early 2015, The Center for Bioethics & Human Dignity sponsored a briefing on Capitol Hill featuring Rosalind Picard, ScD, founder and director of the Affective Computing Research Group at the Massachusetts Institute of Technology (MIT) Media Lab and co-director of the Things That Think Consortium. We asked Professor Picard to update Congressional staff in Washington, D.C., on recent advances in her research and the associated ethical implications.

Professor Picard’s pioneering work (which she also described at CBHD’s 2015 annual conference) has led to the development of several technologies that enable computers to decipher correlates of human emotion. Motivated by fundamental beliefs about what it means to be human, Professor Picard has long insisted that computers will most effectively assist human activities if they can decode and respond to human emotion. In collaboration with computer scientist Rana el Kaliouby, Professor Picard developed a complex algorithm that allows computers to “read” human facial expressions recorded on video. This technology has been used to help individuals on the autism spectrum better decode subtle human social-emotional communication. Picard and el Kaliouby founded the private company Affectiva to commercialize this technology for a range of applications.

Professor Picard’s work in affective computing also led to an unexpected discovery about the correlation between the autonomic stress response (the fight-or-flight response) and an oncoming grand mal seizure, the type of seizure that leads to violent muscle contractions and loss of consciousness. At Picard’s company, Empatica, a team of engineers and product designers has harnessed this information to develop a device that enables caregivers to monitor seizures in epilepsy patients. The Embrace watch can also help monitor stress or sleep disruptions.[1] With a firm belief in the power of technology to better the human condition, Professor Picard works with a seemingly tireless enthusiasm, stewarding her significant gifts and talents for the common good.

From the beginning, Professor Picard has given thoughtful consideration to what it means to be human and to the role emotion and affect play in nuancing human communication and social interaction. The importance of social-emotional communication is not new to psychologists and social scientists, who have begun to document the effects of substituting digital connection for human interaction in our ever-more-wired culture. But in computer science and robotics, Professor Picard was one of the first to call for including a social-emotional dimension in the development of “smart” technologies.

Picard has also consistently attended to the often complicated ethical questions raised by giving computers the ability to read and imitate human emotion. She devoted a chapter of her groundbreaking book, Affective Computing, to considering the short- and long-term implications of her research, and she has collaborated with philosophers, scientists, engineers, and others seeking to address many of the ethical questions raised by her work.

So what are some of the most pressing of these ethical considerations?

Privacy is one of the most immediate concerns raised by technologies that can “see” and quantify what we are feeling. Not surprisingly, the advertising sector quickly perceived the value of being able to assess emotion in real time. In an age when marketers are increasingly competing for our attention, which is now divided among various devices (your television, computer, smartphone, tablet, electronic game platform, etc.) and myriad programs or apps on those devices, companies highly value the ability to accurately target an advertising message to the appropriate individual at the appropriate time. Samsung has already been criticized for voice-activation technology on its smart TVs, which can listen to and record conversation, and for pushing ads into apps that are streaming content from your personal video library.[2]

But many of us share an intense desire to keep our emotions private. What control will consumers have over what marketers can see? Should this information be in the hands of employers for the purpose of detecting disgruntled employees? While this knowledge could potentially prevent workplace violence, it could also be used to assess productivity and job satisfaction. Similarly, if a car could sense when its driver becomes agitated or enraged and respond, whether by reminding the driver to calm down, alerting other drivers, or engaging internal controls, perhaps accidents could be prevented. But who should have access to such information? Law enforcement? Automotive insurance companies? People at risk for suicide could be monitored by health professionals or trusted friends. But how can we ensure that their dignity will be honored against unwanted or unhelpful invasions of privacy? Should such monitoring be connected to our electronic health records? What about law enforcement’s desire to assess criminal intent in a potential suspect or improve lie-detection capabilities? As with other technologies, efforts to preserve individual privacy must be balanced against concerns for the common good, and it is not always clear where to draw the line.

Another important ethical issue raised by giving computers the ability to “see” our emotions is the potential for emotional control and manipulation. Advertisers are already on morally questionable ground here. Is it good to sell children on the “value” of a sugar-laden cereal? As a society we have decided that marketing tobacco to minors is wrong, but tobacco is bad for the health of everyone, all the time. What if advertisers could selectively place an advertisement for a delicious yet unhealthy treat (one that would not be harmful for most individuals if consumed in moderation) in front of an already-obese individual who is depressed after losing a job or a loved one and struggles with using food to assuage emotional pain? Integrating our emotional state with information gleaned from “big data” about our purchasing habits and the websites we visit gives advertisers a great deal of subtle power. And it does not take much imagination to conjure more sinister versions of such emotional manipulation, used, for example, to consolidate political power or to groom suicide bombers or terrorists. Neuroscientists, psychiatrists, and psychologists have confirmed that emotions sear memories into our brains. The potential for emotional manipulation is thus a serious concern that should be monitored, and consent mechanisms must be put in place allowing consumers to opt in to, or at the very least opt out of, emotion-sensing technologies, which will likely be integrated into the “internet of things” as it develops.

Specifically regarding applications that assist people with disabilities, as a society we must wrestle with the question of what is normative when it comes to human emotion. Clearly it is beneficial to help people with autism interpret human affect so that they can better navigate their social environment, but it is potentially harmful if they feel that they must artificially adopt such affects themselves in order to be accepted or considered “normal.” The question of what counts as “normal” human social-emotional interaction is not unlike other questions that emerge when evaluating assistive technologies and their capacity for human enhancement.

So what can and should be done to develop policies to ensure that these technologies serve rather than subvert human dignity and the common good?

Many of these questions are not easily addressed through legislation. The issues are complex and context-dependent, and the technology is rapidly changing. But as Congress debates the larger issues of privacy and consent in the context of the massive amounts of data available to both government and the private sector, specific consideration of the nature of emotional data should be part of that larger debate. For instance, should research participants, rather than researchers, own their emotional data? Ownership would empower participants to take action that promotes their own emotional, psychological, and physical health, and it would help preserve the autonomy and dignity of research subjects even as their data are used to further the field.

Congress should also continue to encourage (and fund) thoughtful, ethical reflection by scientists like Professor Picard who are trying to shape the direction and use of the technologies they are developing.

More concretely, there may well be a need for legislative protection from discrimination on the basis of emotional data, akin to the Genetic Information Nondiscrimination Act, to prevent employers, for example, from discriminating against someone for being depressed or for reacting negatively to instructions given by their boss. In addition, basic standards governing human-computer interactions should be developed, a process well underway in the UK and in the EU more broadly.

Of course, this only scratches the surface. Technological advancement often outpaces the ability of Congress and various social institutions to establish the kinds of standards that ensure that what is developed promotes human dignity and flourishing. At The Center for Bioethics & Human Dignity, we hope that by providing opportunities for policymakers to consider and reflect on technological advances and their ethical and legal implications, they will be better positioned to respond thoughtfully.

References

[1] “Embrace,” Empatica, https://www.empatica.com/product-embrace (accessed September 29, 2015).

[2] “Not in Front of the Telly: Warning over ‘Listening’ TV,” BBC News, February 9, 2015, http://www.bbc.com/news/technology-31296188 (accessed September 29, 2015); Claire Reilly, “Samsung Smart TVs Forcing Ads into Video Streaming Apps,” CNET, February 10, 2015, http://www.cnet.com/news/samsung-smart-tvs-forcing-ads-into-video-streaming-apps/ (accessed September 29, 2015).