New Scientist reports on technology aimed at improving the emotional intelligence of autistic people. Much like corrective lenses for the nearsighted or farsighted, these glasses signal to the wearer the emotions conveyed by facial expressions:
When Picard and el Kaliouby were calibrating their prototype, they were surprised to find that the average person only managed to interpret, correctly, 54 per cent of Baron-Cohen’s expressions on real, non-acted faces. This suggested to them that most people – not just those with autism – could use some help sensing the mood of people they are talking to. “People are just not that good at it,” says Picard. The software, by contrast, correctly identifies 64 per cent of the expressions.
Picard and el Kaliouby have since set up a company called Affectiva, based in Waltham, Massachusetts, which is selling their expression recognition software. Their customers include companies that, for example, want to measure how people feel about their adverts or movie. And along with colleague Mohammed Hoque, they have been tuning their algorithms to pick up ever more subtle differences between expressions, such as smiles of delight and frustration, which can look very similar without context. Their algorithm does a better job of detecting the faint differences between those two smiles than people do. “The machines had an advantage over humans in analysing internal details of smiles,” says Hoque.
… In addition to facial expressions, we radiate a panoply of involuntary “honest signals”, a term identified by MIT Media Lab researcher Alex Pentland in the early 2000s to describe the social signals that we use to augment our language. They include body language such as gesture mirroring, and cues such as variations in the tone and pitch of the voice. We do respond to these cues, but often not consciously. If we were more aware of them in others and ourselves, then we would have a fuller picture of the social reality around us, and be able to react more deliberately.
To capture these signals and depict them visually, Pentland worked with MIT doctoral students Daniel Olguín Olguín, Benjamin Waber and Taemie Kim to develop a small electronic badge that hangs around the neck. Its audio sensors record how aggressive the wearer is being, the pitch, volume and clip of their voice, and other factors. They called it the “jerk-o-meter”. The information it gathers can be sent wirelessly to a smartphone or any other device that can display it graphically.
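To give a flavor of the kind of vocal features such a badge might extract, here is a minimal, hypothetical sketch: loudness estimated as root-mean-square amplitude and pitch estimated by autocorrelation. This is an illustration only, not the actual processing used in Pentland's sociometric badges.

```python
import numpy as np

def rms_volume(frame):
    """Root-mean-square amplitude: a rough proxy for loudness."""
    return float(np.sqrt(np.mean(frame ** 2)))

def estimate_pitch(frame, sample_rate, fmin=50.0, fmax=400.0):
    """Naive pitch estimate: find the autocorrelation peak within a
    plausible range of speaking fundamental frequencies."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)  # shortest plausible period, in samples
    hi = int(sample_rate / fmin)  # longest plausible period
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

# A synthetic 200 Hz tone stands in for a frame of recorded voice.
sr = 16000
t = np.arange(4000) / sr  # 0.25 s frame
frame = 0.5 * np.sin(2 * np.pi * 200 * t)

print(round(estimate_pitch(frame, sr)))  # → 200
print(round(rms_volume(frame), 3))       # → 0.354
```

Real systems would track how these features vary over a conversation (e.g. rising pitch or volume as a cue to agitation) rather than measuring a single frame.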
The first test of the badges in the real world revealed that there was money to be made from these revelations. Pentland and his team upgraded the sociometric badges to analyse the speech patterns of customer service representatives at Vertex Data Science in Liverpool, UK, which provides call centre services for a number of companies. This revealed that it is possible to identify units of speech that make a person sound persuasive, and hence to teach them how to sound more persuasive when talking to customers (International Journal of Organisational Design and Engineering, vol 1, p 69). The team claims that the technology could increase telephone sales performance by as much as 20 per cent.
One observation: Without the market potential for these technologies, there would be less research, and hence less of the spillover effect that benefits autistic people.
Like most technologies for good, these have the potential for abuse.
Would you like to keep the lid on this Pandora’s box?