Emotion simulation trial

The abundance of information and universal access to the Internet have contributed to what can be described as an addiction to mass media. Effortless access to data has become the basis of a complex new system of social communication, influencing our intellect, emotions, and social behavior. This dependence on information could potentially be used to stimulate human-robot interaction. By definition, a social robot should be capable of generating behaviors (including ways of communicating information) that conform to his user's expectations while staying in accordance with social norms. He should therefore communicate information with regard to its emotional character, which can have paramount implications for the process of forming a relationship.
 
The control system of EMYS™ complies with the three-layer control architecture paradigm. Its lowest layer provides the necessary hardware abstraction and integrates low-level motion controllers, sensor systems, and algorithms implemented as external software. The middle layer is responsible for the functions of the robot and the implementation of his competencies; it defines the set of tasks the robot is able to perform. The two lower layers of the architecture are based on the open-source Urbi software created by Gostai, which enables access to the robot's hardware and competencies in a unified manner, using a tree structure (see the sketch below). The highest layer may incorporate a dedicated decision system, a finite-state machine, or a comprehensive system simulating human mind functionalities.
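A minimal Python sketch of this idea (the node and servo names are entirely hypothetical; the actual system exposes such a tree in urbiscript) shows how hardware and competencies become addressable as paths in a single object tree:

class Node:
    # A node of the device/competency tree; children become attributes.
    def __init__(self, **children):
        for name, child in children.items():
            setattr(self, name, child)

class Servo(Node):
    def __init__(self):
        super().__init__()
        self.position = 0.0

    def move(self, angle):
        # A real module would forward this to a low-level controller.
        self.position = angle
        print("servo ->", angle, "deg")

# Hardware and competencies live in one addressable tree.
robot = Node(head=Node(neck=Servo(), eyes=Node(left=Servo(), right=Servo())))

# The higher layers access everything in a uniform, path-like manner:
robot.head.neck.move(15.0)
robot.head.eyes.left.move(-5.0)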
 
The lowest layer of the control system consists of dynamically loaded modules called UObjects, which are used to bind hardware or software components: actuators and sensors on the one hand, and voice synthesis or face recognition algorithms on the other. Components with a UObject interface can be used from the urbiscript programming language, which is part of the Urbi software.
 
The video group of modules provides image processing capabilities on RGB and RGB-D data. RGB-D data from the Kinect sensor can be acquired either with the OpenNI library (the UKinectOpenNI2 module) or with the Kinect SDK (the UKinect module). The former allows measuring the distance to particular elements of the image, detecting a human silhouette, and providing information on the positions of particular parts of the human body; it also implements very simple gesture recognition algorithms. The module based on the Kinect SDK provides the same functions as UKinectOpenNI2 and adds some more, including 2D and 3D face tracking and microphone array support, which provides speech recognition and detection of voice direction. The auditory modules are based on the SDL library and the Microsoft Speech Platform. The URecog (or UKinect) module uses the Microsoft Speech Platform to recognize speech recorded with an external microphone, while USpeech utilizes it for real-time speech synthesis.
 
Connection with the Internet is provided by the UBrowser and UMail modules, based on the POCO library. The first implements the functions of a web browser and an RSS reader, providing a wide variety of functions for extracting particular information from the Internet, such as a weather forecast or news. UMail serves as an e-mail client able to check and read mail and to send messages with various types of attachments (e.g. an image from the robot's camera or a voice message recorded by the Kinect).
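As an illustration of the kind of extraction UBrowser performs (a hedged Python sketch, not the module's actual API; the BBC feed URL is used only as an example), headlines can be pulled from an RSS feed and handed over for affective appraisal:

import urllib.request
import xml.etree.ElementTree as ET

def fetch_headlines(feed_url, limit=5):
    # Return (title, description) pairs from an RSS 2.0 feed.
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        desc = item.findtext("description", default="")
        items.append((title, desc))
        if len(items) == limit:
            break
    return items

for title, _ in fetch_headlines("http://feeds.bbci.co.uk/news/rss.xml"):
    print(title)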
 
Information gathered by the robot (from websites, e-mails, or via the auditory modules) can be affectively assessed to extract its emotional meaning. All functions necessary to achieve this goal are implemented by the UAnew, USentiWordNet, and UWordNet modules. The first utilizes the ANEW (Affective Norms for English Words) project, a database containing emotional ratings for a large number of English words; it can be used to evaluate a word or a set of words in terms of the feelings they are associated with. USentiWordNet is based on a project similar to ANEW - SentiWordNet, a lexical resource for opinion mining that assigns ratings to groups of semantic synonyms (synsets). UWordNet plays a different role than the two previous modules: it is an interface to WordNet, a large lexical database of English in which nouns, verbs, adjectives, and adverbs are grouped into synsets, each expressing a distinct concept. When a word cannot be assessed by the previous modules, UWordNet is used as a synonym dictionary to find the basic form of the word. A learning competency has also been developed, relying on algorithms from the OpenCV library: the learning module UKNearest implements the k-nearest neighbours classifier and can serve for information acquisition and classification of the robot's environment (in this demo it maps the continuous mood value to a discrete emotion).
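The appraisal chain can be sketched as follows (a toy Python example: the valence ratings and base-form table below are tiny illustrative stubs, not the real ANEW, SentiWordNet, or WordNet data):

ANEW_VALENCE = {"win": 0.8, "pay": -0.1, "charge": -0.4}            # stub ratings
BASE_FORMS = {"winning": "win", "paid": "pay", "charged": "charge"} # stub lookup

def appraise_word(word):
    # Return a valence in [-1, 1], or None if the word is unknown.
    word = word.lower()
    if word in ANEW_VALENCE:
        return ANEW_VALENCE[word]
    base = BASE_FORMS.get(word)   # UWordNet-like fallback to the base form
    if base is not None:
        return ANEW_VALENCE.get(base)
    return None

def appraise_text(text):
    # Average the valence of all assessable words in a piece of news.
    scores = [s for s in map(appraise_word, text.split()) if s is not None]
    return sum(scores) / len(scores) if scores else 0.0

print(appraise_text("Barcelona charged after paid transfer"))  # negative value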
 
Robot behaviors programmed as finite-state machines can be enriched with an emotional component simulated in external software. Such a component cannot rival full affective mind architectures, but it provides a wide variety of reliable and less repetitive behaviors. EMYS™ is adapted to work with two emotional systems - WASABI and a dynamic PAD-based model of emotion. Both are based on dimensional theories of emotion, in which affective states are represented not only as discrete labels (like fear or anger), but as points or areas in a space equipped with a coordinate system. Emotions which can be directly represented in this space are called primary (basic); some theories also introduce secondary emotions, which are mixtures of two or more basic ones. The most popular theory in this group is PAD, proposed by Mehrabian and Russell, whose name is an abbreviation of its three orthogonal coordinate axes: Pleasure, Arousal, Dominance.
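How a continuous PAD state is turned into a discrete label (the role UKNearest plays in this demo) can be sketched as a nearest-neighbour lookup in PAD space; this Python sketch uses illustrative anchor coordinates, not the values used by the actual system:

import math

# (pleasure, arousal, dominance) anchors for a few primary emotions
EMOTION_ANCHORS = {
    "happy":   ( 0.8,  0.5,  0.4),
    "angry":   (-0.6,  0.6,  0.3),
    "sad":     (-0.6, -0.4, -0.3),
    "bored":   (-0.2, -0.7, -0.1),
    "content": ( 0.4, -0.1,  0.2),
}

def classify_pad(p, a, d):
    # Return the label of the nearest emotion anchor in PAD space.
    return min(EMOTION_ANCHORS,
               key=lambda name: math.dist((p, a, d), EMOTION_ANCHORS[name]))

print(classify_pad(0.05, 0.1, 0.1))   # close to neutral -> "content"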
 
In order to evaluate the cooperation of the modules tasked with acquiring data from the Internet and the emotional appraisal, a trial scenario has been devised in which the aforementioned components affect FLASH's dynamic emotional system. The connections between the various components are shown in the figure below. The scenario is based on possible everyday activities that a human may perform together with his/her personal robot.
 
[Figure: connections between the components of the system]
 
The dynamics of emotion in the presented example are described as a first-order inertial element (see the sketch after the list below). According to the experiment scenario, a set of attractors (emotional system inputs) has been created. These attractors, along with the corresponding emotions, are:
  • user's appearance (happiness),
  • user's departure (sadness),
  • accurate assessment of news' nature (happiness),
  • inaccurate assessment of news' nature (anger),
  • boredom, triggered every second (boredom),
  • ANEW/SentiWordNet appraisal (depending on the evaluation),
  • weather forecast appraisal (depending on weather conditions).
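A minimal sketch of these dynamics in Python (the time constant, targets, and timing are illustrative assumptions) integrates the first-order law dx/dt = (target - x) / T, with each active attractor supplying the target:

def step(x, target, T, dt):
    # One Euler step of dx/dt = (target - x) / T.
    return x + dt * (target - x) / T

# Example: the user appears (happiness attractor), later leaves (sadness
# attractor); between the events the state decays toward neutral.
x, T, dt = 0.0, 2.0, 0.1
for t in range(100):
    if t < 30:
        target = 0.8    # user's appearance -> happiness
    elif t < 60:
        target = 0.0    # no stimulus -> decay toward neutral
    else:
        target = -0.6   # user's departure -> sadness
    x = step(x, target, T, dt)
print(round(x, 3))      # after 10 s the state sits near the sadness attractor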
 
 
The scenario began with the human subject sitting at the table, face to face with the robot. After detecting the user, EMYS™ greeted him/her and began tracking his/her face. The human then asked the robot to check his/her e-mails: finding new messages positively stimulated the robot, while their lack made him sad. Next, the user asked for some news from the BBC website, which was then read to him/her in a way that reflected the emotional appraisal calculated for that particular piece of information. If the absolute value of the appraisal was greater than 0.1, the robot added verbal means of expression (e.g. "Oh no!", "Great!", "Excellent!", etc.); this rule is sketched below the figure. After each piece of news EMYS™ asked whether his evaluation had been correct; in the first two cases the person contradicted the robot's opinion, and in the next two agreed with him. Towards the end of the scenario the robot was asked to check the weather and, after he answered, the user left. The PAD values obtained during the experiment are shown in the figure below.
 
[Figure: PAD values recorded during the experiment]
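The verbal-expression rule from the trial can be captured in a few lines (a Python sketch; the phrases are taken from the transcript below, while the function name and exact phrase selection are our assumptions):

def verbal_prefix(p, threshold=0.1):
    # Choose an exclamation for appraisal p, or None if it is too weak.
    if abs(p) <= threshold:
        return None
    return "Great!" if p > 0 else "Oh no!"

for p in (0.01, 0.017, -0.12, 0.05):   # appraisal values from the transcript
    print(p, "->", verbal_prefix(p))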
 
The complete dialog list for the experiment is presented below:
 
  User: Hello!
Robot: (t1, happy, low) Hi!
  User: Check my e-mail please.
Robot: (t2, happy, mid) Great! You have 9 new emails.
  User: Check some news please.
Robot: I found some news on the BBC News channel.
Robot: (t3, content, low) Moscow increases its rhetoric against Ukraine's... (P=0.01)
Robot: It's good news, isn't it?
  User: No, it's not! 
Robot: (t4, angry, mid) Would you like to hear the next piece of news?
  User: Yes, please! 
Robot: (t5, content, high) Italy's Prime Minister Matteo Renzi sets out... (P=0.017)
Robot: It's good news, isn't it?
  User: No, it's not! 
Robot: (t6, angry, mid) Would you like to hear the next piece of news?
  User: Yes, please! 
Robot: (t7, sad, high) Oh no! 
Robot: Barcelona pay 11.2M pounds to Spain's authorities after being charged... (P=-0.12)
Robot: It's sad news, isn't it?
  User: Yes, it is. 
Robot: (t8, happy, mid) Would you like to hear the next piece of news?
  User: Yes, please.      
Robot: (t9, content, mid) A new department at the Vatican is created by... (P=0.05)
Robot: It's good news, isn't it?
  User: Yes, it is. 
Robot: (t10, happy, mid) Would you like to hear the next piece of news?
  User: No, thank you. 
  User: Check the weather please. 
Robot: (t11, sad, mid) The weather is fair. The temperature is 5 degrees Celsius.
  User: Bye. 
Robot: (t12, sad, high) [the user has left, the robot starts getting bored]
Robot: (t13, bored, high) [the robot goes to sleep]
 
 


The project has received funding from the National Science Center, grant no. 2012/05/N/ST7/01098.

 

 
