
Moves Like Jagger-bot: “We Can Make A Robot Dance, But Can We Really Make It Dance?”

Social Robots with Professor Emily Cross: From Social Cognition To Social Robots.

Considering the apocalyptic madness of 2020, one could be forgiven for being surprised that the aliens did not drop by, and that the robots, led by their self-checkout overlords, did not begin their inevitable uprising. A fear of the unknown and apprehension about the march of technological innovation can elicit a sense of THEY’RE COMING, THEY’RE COMING! But what is the reality? Is the uprising truly inevitable? Is the fear of robots justified?

Following the brilliant March conference, Technology and Psychology, London OUPS hosted Professor Emily S. Cross. Emily is a cognitive neuroscientist jointly based at the University of Glasgow and Macquarie University in Australia, with an interest in how different experiences shape human brains and behaviours. Emily is the principal investigator on a European Research Council Starting Grant and director of the Social Brain in Action Laboratory at Macquarie University.

In a lively lecture delivered over the internet from Australia, Emily talked about a range of interesting topics, including social cognition, working with artificial agents, and the implications of how we perceive artificial agents. Finally, she shared a thought-provoking quote before a brief question and answer session.

Social cognition refers to how people understand the actions and behaviours of others in a social world. Emily referenced how significant social cognition has become owing to the ongoing pandemic and how it has changed societies, such as the increase in screen-mediated communication. The human brain has evolved to make sense of the gap between a person’s actions and the perception of those actions in a bidirectional loop. Emily’s research investigates how different types of experience shape perception; for example, as a dancer herself, Emily is particularly interested in motor learning and expertise. Other research areas include observational learning, the rapidly evolving field of neuroaesthetics, plus work with artificial agents, or robots.

It is remarkable how much working with artificial agents can reveal about human social cognition. A core element of social perception is the ‘like me’ theory, which refers to the importance placed on similarity, or, "me as a template to understand you" [Meltzoff & Prinz, 2003; Meltzoff, 2007]. Emily mentioned evidence supporting this assertion [Meltzoff & Prinz, 2003], as well as studies suggesting greater flexibility in social information processing [Cross et al., 2009; Ramsey & Hamilton, 2010]. This influenced Study 1 [Cross et al., 2012], which looked at perceiving human versus robot form and motion. The results were surprising because they contradicted the "like me" hypothesis: participants showed a stronger reaction to robotic dancing than to human dancing. Emily suggested two explanations for this unexpected finding: low-level action features, or greater engagement of compensatory top-down modulation. It inspired further research.

People’s perception of artificial agents influences their reaction and behaviour, which Emily demonstrated in the lecture using, for example, Fembots. A Fembot, for the uninitiated (or young!), is an ostensibly attractive blonde woman from the 1997 Austin Powers film. Looking at the film still, you either have the stimulus cue of what you see, or the knowledge cue of having seen the film and knowing that the negligee-wearing femmes were especially fatale because they are in fact robots with a pair of guns in their chests: the name is a portmanteau. To manipulate cues, participants in Study 2 [Cross et al., 2016] first watched a professional video explaining the history and industry use of motion capture and computer animation technology. They were then told that each video they subsequently watched was made using either motion capture or computer animation, thus manipulating their beliefs. When participants rated the videos for smoothness and how much they liked them, the knowledge cues had a greater impact than the stimulus cues. Emily identified the potential for these findings to influence the design of artificial agents and to manage expectations.

Research suggests that social mimicry, or automatic imitation, can help develop social bonds [Chartrand & van Baaren, 2009; Heyes, 2011]. Does this social mimicry extend to nonhuman agents, and what is the effect of stimulus and knowledge cues on automatic imitation? Study 3 [Klapper, Ramsey, Wigboldus & Cross, 2014] investigated interacting with artificial agents and the social mimicry phenomenon. Once again, participants watched the video from Study 2 to manipulate cues. Results indicated that when there are cues to human-ness, participants are more likely to show social mimicry. Emily summarised the robot study findings as demonstrating the value of a social cognitive neuroscience-based approach to investigating and improving the human elements of human–nonhuman agent interaction.

Emily introduced us to the ‘robotic empire’ being built and developed to tackle ever more complicated questions. The robots included Pepper, who is the size of a small human and has humanistic facial features, particularly big eyes. Cosmo, on the other hand, fits in your palm. Nao is a toddler-sized ‘workhorse in social robotics’ who similarly has a human appearance. Miro is a robot who looks like a dog and whom many of us encountered at the Technology and Psychology conference in Professor Tony Prescott’s lecture; Emily mentioned that she is currently working with Miro in Australia. Indeed, there is a lot of exciting work with robots being conducted, which Emily referenced: for example, further work on automatic imitation, collaboration, shared representations, empathy for pain after longer-term social interactions, and synchrony and social reward. It is an exciting and evolving field of study, and one worth following for the developments.

Emily spoke about several additional studies, including interesting research on empathy for pain and neural overlaps [Cross, Riddoch, Pratts, Titone, Chaudhury & Hortensius, 2019]. Does spending time interacting with social robots lead participants to develop more overlap in neural mechanisms when observing a human or a robot in pain or pleasure, and vice versa? The hypothesis was that there would be more repetition suppression [the reduced neural response observed when stimuli are presented more than once] for agents, not emotions, after the socialising intervention. The socialising intervention was a schedule of tasks participants carried out at home with a Cosmo robot. Cosmo can learn things and recognise faces, as well as being programmed to be receptive to cats and dogs. One participant shared a spectacular picture of Cosmo sat on the dinner table while their pet cat stealthily stared at the unfamiliar (relatively speaking, as the at-home element lasted five days) interloper in their home. Emily gave a brief summary of the complicated results from this study, which included a null result, suggesting that social cognition is not so easily changed by an arguably brief amount of time socialising with a robot. The result may have been disappointing, but it nevertheless contributes to the body of knowledge.

A further study Emily talked about concerned synchrony and social reward. Synchrony refers to simultaneous action, for example, dancing. Synchrony is important in social cognition because when people synchronise, it can lead to increased bonding and engages significant regions of the brain. Participants took part in an experiment with Pepper in which both traced shapes. Participants could also ask Pepper questions, the idea being that if participants were synchronising with the robot, they would ask more questions. Results indicated that synchronising with Pepper did not seem to change people’s social motivation towards it. An additional null result can seem similarly disappointing; however, Emily highlighted that when scientists conduct studies, the literature search reveals results upon which they [we!] draw, so despite this study’s null result, the next researcher investigating this topic will be able to draw upon both sets of findings: the scientific cycle of inquiry.

At the end of the lecture, Emily shared a quote from Erik Sofge: “we’re only barely scratching the surface of the brain’s social algorithms, which become even more complicated and unpredictable when we interface with technology”. This was an apt quote to end this fascinating lecture because it encapsulated two of its biggest themes, and two concerns of contemporary society. The complexity of the brain is such that there is still so much to be discovered, and things we thought could be predicted have not been borne out when tested. Alongside this, there is the advancement of technology and the interaction of these factors. Answering one of the last questions, Emily described making increasingly humanoid robots as a ‘fool’s errand’ and said that there are more important considerations than how a robot looks or moves. However, Emily pointed out that the idea of making our robot double is ‘too seductive’.

Returning to the self-checkout robot uprising: despite my personal knowledge of the evil within those infuriating hell devices, there is nothing to suggest the Machiavellian machines are mobilising. On the contrary, the social robots covered by Emily’s lecture, and by the Technology and Psychology conference before it, are lovers not fighters. Arguably, the null results from the Cosmo and Pepper studies show humans in a questionable light. Moreover, social robots are being used with vulnerable groups, such as the elderly [Laban et al., 2020, Paladyn; Riddoch & Cross, under review], and to investigate the development of theory of mind toward a variety of machines [Jastrzab et al., in prep; ongoing]. The fear and apprehension about robots may be a Hollywood-mediated effect. There is also the elephant in the room that behind every increasingly intelligent robot are the humans who created it. Ultimately, OUPS’ David posed a question to leave us all with something to think about: ‘we can make a robot dance, but can we really make it dance?’.

A massive thank you to Emily for her wonderful lecture, and to all at LOUPS for ending the year with another fantastic event. 2020 has been bewildering, challenging and, of course, unprecedented. Despite the drastic changes we have all experienced, LOUPS have still hosted fab events and brought us inspirational and motivational speakers. Roll on 2021!
