Social Networking
Vol.05 No.01 (2016), Article ID: 63154, 12 pages
10.4236/sn.2016.51004

Emotional Awareness: An Enhanced Computer Mediated Communication Using Facial Expressions

Moran David, Adi Katz

Department of Industrial Engineering and Management, Shamoon Academic College of Engineering, Ashdod, Israel

Copyright © 2016 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 8 December 2015; accepted 25 January 2016; published 28 January 2016

ABSTRACT

This paper describes the design of a prototype for an emotionally enhanced computer mediated communication (CMC) system aimed at compensating for the emotional distance between communicators. The prototype encourages the sender to take the emotional perspective of the message receiver by giving the sender a sense of how his message may affect the intended receiver, and by allowing the sender to reconsider sending the message or to revise it. The objective of an emotionally enhanced communication system is to raise awareness of the ease with which a written word might be misinterpreted, since emotions too often remain unseen due to the nature of lean communication media. In order to show the receiver’s assumed emotion, the written text is parsed before the message is sent. An algorithm demonstrating the prediction of the emotion a written message might evoke chooses a facial expression from a pool of 7 expressions, according to the words and phrases in the message. We conducted a user test to evaluate the acceptance of and attitudes toward such an emotionally enhanced communication system, and found a high level of acceptance and favorable attitudes towards using it.

Keywords:

CMC, ICT, Social Media, Emotional Communication, Affective Design, Interpersonal Communication

1. Introduction

Despite rapid advances in information and communication technology (ICT), textual messages are still a main communication mode, used through email, weblogs, instant messaging and many other communication systems. When sending a written message, there is an inevitable lack of non-verbal cues, which may reduce the ability to understand unspoken emotions [1]. According to media richness theory, when conversing face-to-face we may notice various non-verbal social cues such as facial expressions, tone of voice, body language and posture, gestures and so on. These enriching non-verbal cues, which may reduce misunderstandings, usually do not exist in written communication [2]. Rich media with high channel capacity are more appropriate for affective communication, primarily because of the complexity of feelings and the importance of non-verbal messages [3]. Since text-based communication is relatively lacking in cues, its emotional tone is often ambiguous, and emotions are likely to be inaccurately perceived in email communication [4]. In situations of irregular communication, and when the interpersonal context of the sender-receiver communication is less established, the uncertainty of the receiver’s reactions to the communication is high [5].

Perspective-taking, the act of taking into account the attitude of the other, which includes the ability to empathize with the situation of another person, is fundamental to interpersonal communication [6]. A lack of emotional perspective taking, or emotional awareness, is more noticeable in adolescents and children, whose emotional intelligence is less developed in comparison to adults [7]. In addition, the act of perspective taking is less likely to occur in computer mediated communication (hereafter CMC), because people tend to be parsimonious in their messages: transferring information by typing, a particularly heavy burden in CMC, requires cognitive effort [8]. Therefore, relying on relatively lean media might increase the affective distance between communicators, causing communicational situations ranging from small misunderstandings to alienation, and even to hurting the feelings of others.

Unfortunately, a worldwide phenomenon that has increased with the advances in communication technology is cyber bullying. Cyber bullies are those who use the web in order to bully and harass others. While the phenomenon itself is not new, the way it is carried out has changed radically with today’s technology [9]. Nowadays, we see more and more people involved in negative technology activities such as cyber bullying, since various communication channels allow people to send anonymous messages. Communicating with another person via a computer screen is a situation characterized by weak social context cues, causing behavior to be relatively self-centered and less regulated, so acts can be extreme and impulsive [10]. Under the guise of anonymity, people feel free to express offensive attitudes and opinions [11], and often cause the person whom they are addressing such great distress that he or she may feel abused.

There are many goals for communicating (such as influencing, managing relations, reaching a mutual outcome and requesting information), and many contexts (e.g. cooperative work, social relationships, etc.). While our approach is not limited to a certain goal or context, our main concern is to ensure that the intention of the sender is interpreted correctly on the receiver’s side, with a particular emphasis on reducing affective misunderstandings. We are especially interested in the idea of designing an emotionally oriented CMC that would be able to prevent negative emotions, and consequently would foster positive relationships between human beings. While previous works on emotion prediction in CMC concentrated on business communications [4] [12], our approach is more suited to less formal contexts, such as chatting and writing in forums in social network systems. In addition, since emotional intelligence is less developed in adolescents and children [7], an emotionally oriented CMC may be implemented in educational systems (such as school portals) as a means to teach sensitivity, empathy and social skills. There are previous works on emotion-related textual indicators, such as detecting deception in online dating profiles [13], sentiment detection in the context of informal communication [14], emotion detection in emails [15] and investigating the relationship between user-generated textual content shared on Facebook and emotional well-being [16]. There are also earlier works that used emoticons to express the emotion evoked by text in emails (such as [12] [17]), but our focus is on CMC aimed at raising the awareness of a message sender to the emotional consequences for the message receiver in informal synchronous communication.

We offer the idea of an emotionally enhanced CMC system, which, prior to sending a message, visually presents to the message sender an image of a facial expression depicting the assumed emotion a receiver would have upon reading it. Among various communication strategies [5], our approach combines three: affectivity, perspective taking and control. A facial expression of the receiver’s assumed emotion provides an affective component (affectivity) that takes the emotional view of the receiver into account (perspective taking), and allows the sender to revise the outgoing message (controlling by planning the communication ahead).

We are aware of the widespread use of emoticons (also known as “emoji”) in CMC as a means for incorporating non-verbal cues in textual communication exchanges [18], and therefore as an effective way to overcome potential misunderstanding. It should be clear that our approach is not designed to replace the use of emoticons. Instead, it serves as an additional means to prevent possible emotional misinterpretation, by making the sender aware of the assumed emotion evoked on the receiver’s side, and by allowing the sender to revise his message if necessary.

The rest of this paper is organized as follows. Section 2 describes the phase of designing a prototype of the emotionally enhanced CMC. Section 3 describes a user test conducted for demonstrating the emotionally enhanced CMC idea using the designed prototype, and shows the user test results. Section 4 presents our conclusions, discusses the research’s limitations and finally suggests future research directions and implications of the idea in several contexts.

2. Designing a Prototype System

The main objective of this work was to examine the acceptance and attitudes towards an emotionally enhanced CMC system that raises the message sender’s awareness of the receiver’s assumed evoked emotion. For demonstrating the idea, we created a preliminary prototype and conducted a user test. In the current prototype for an emotionally enhanced CMC, we added to outgoing messages a facial image expressing an emotion the receiver of the message may experience. The emotion displayed is based on keywords and phrases contained in the message. We used six universally recognized facial expressions [19] depicting emotions of sadness, happiness, surprise, anger, disgust and fear.

2.1. Emotion Database

In order to show the receiver’s assumed emotion, the written text is parsed before the message is sent. An algorithm demonstrating the prediction of the emotion evoked by the written message chooses a facial expression according to the words and expressions appearing in the message. This is achieved by parsing the message to find keywords and phrases that are stored in an “emotion database” along with their emotion weights. While previous work in interpersonal communication is concerned with the accuracy of the receiver’s emotional judgments, in terms of the extent to which the receiver interprets the valence of the message’s emotional content correctly [4], we are more concerned with the accuracy of the system’s judgment in interpreting the receiver’s emotional reaction, and with the attitudes of potential users toward such an idea of an emotionally enhanced CMC.

We are aware of existing approaches to textual affect sensing, which generally fall into the categories of keyword spotting and statistical modeling [17], but because this is a preliminary examination of the attitudes towards our idea of an emotionally enhanced CMC system, we used a simple algorithm (the “percent of total respondents” method, described below in the current section) and only a few keywords and phrases. The emotion database stored records of 149 keywords and phrases along with their evoked emotions. To assume the emotion evoked in a receiver by each keyword or phrase, a survey was conducted using an online questionnaire via Google Forms. Sixty-one participants were asked to choose which emotions each keyword or phrase evokes in them, from a list of 6 basic emotions chosen for being universally recognized [19]: sadness, happiness, anger, disgust, surprise and fear. In addition, there was a neutral choice indicating that the word did not evoke any of the emotions (for short, we will use the term “7 emotions” from here on).
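
For illustration, a minimal sketch of how such multi-choice survey responses could be tallied into per-keyword vote counts (the data, names and structure here are hypothetical, not the actual survey code):

```python
from collections import Counter

EMOTIONS = ["sadness", "happiness", "anger", "disgust", "surprise", "fear", "neutral"]

# Each response maps a keyword to the set of emotions it evoked in one
# respondent (respondents could select more than one emotion per keyword).
responses = [
    {"idiot": {"anger"}},
    {"idiot": {"anger", "sadness"}},
    {"idiot": {"disgust"}},
]

def tally_votes(responses):
    """Count, per keyword, how many respondents selected each emotion."""
    votes = {}  # keyword -> Counter mapping emotion -> number of votes
    for response in responses:
        for keyword, emotions in response.items():
            votes.setdefault(keyword, Counter()).update(emotions)
    return votes

print(tally_votes(responses)["idiot"])  # anger: 2, sadness: 1, disgust: 1
```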

In order to create a homogeneous range of weights, the collected survey responses were normalized between 0 and 1. We tried out three different methods for calculating the “emotion weights” that indicate the extent to which a keyword or phrase is charged with an emotion. The 3 methods are as follows:

Method 1. Percent of total votes―based on the number of votes a specific emotion received for a keyword, divided by the total number of votes that all emotions received for the same keyword. The weight for each emotion was calculated as follows:

$$W_{e,k} = \frac{V_{e,k}}{\sum_{i=1}^{7} V_{i,k}}$$

where $V_{e,k}$ is the number of votes emotion $e$ received for keyword $k$.

Table 1 shows an example of the emotion weights calculation for the keyword “Idiot” using method 1.

Method 2. Percent of total respondents―based on the number of respondents who chose the specific emotion for the keyword, divided by the total number of respondents. The weight for each emotion was calculated as follows:

$$W_{e,k} = \frac{R_{e,k}}{N}$$

where $R_{e,k}$ is the number of respondents who chose emotion $e$ for keyword $k$, and $N$ is the total number of respondents ($N = 61$).

Table 2 shows an example of the emotion weights calculation for the keyword “Idiot” using method 2.

Method 3. Relative to “top” emotion―for each keyword, the emotion with the highest number of votes gets a weight of 1. The weight of each other emotion for the same keyword is relative to the highest-ranked emotion, according to the number of votes. The weight for each emotion was calculated as follows:

$$W_{e,k} = \frac{V_{e,k}}{\max_{i} V_{i,k}}$$

Table 3 shows an example of the emotion weights calculation for the keyword “Idiot” using method 3.

Comparing the methods. In order to select the most suitable method for calculating the emotion weights, a graphical comparison of the three methods was conducted to visualize the differences between them. Figure 1 presents graphs of the three methods for two extremes: Figure 1(a) is an example of an “emotionally obvious” keyword (“love”) and Figure 1(b) is an example of an “emotionally ambiguous” keyword (“embarrassing”). Keywords with a high level of agreement among respondents on the emotion they evoke are emotionally obvious, and keywords with a low level of agreement are emotionally ambiguous, because they did not evoke a significantly conclusive emotion. From the graph patterns we noticed that all three methods produced similar emotion weights for “emotionally obvious” keywords, while for “emotionally ambiguous” keywords method 3 produced higher emotion weights.

Figure 1. Comparison of emotion weight calculation methods: (a) keyword with high agreement; (b) keyword with low agreement (in Figure 1(a), method 3 is hidden behind method 2).

Whereas the keyword “love” evoked the emotion of happiness in almost all participants (60 of the 61), the keyword “embarrassing” evoked a wider range of emotions, as presented in Table 4.

Table 4. Number of votes for each emotion for the keywords “Love” and “Embarrassing”.

Method 1 creates the same result as if the questionnaire were to allow only one emotion (the most pertinent) to be selected for each keyword. This defeats the purpose of allowing the respondents to choose more than one emotion per keyword (since most keywords evoke more than a single emotion). For this reason, method 1 was ruled out.

With method 3, the emotion weights for “emotionally ambiguous” keywords were significantly higher than the weights produced by methods 1 and 2 (Figure 1(b)). This was not the case for “emotionally obvious” keywords (Figure 1(a)). This could result in a distortion of the emotion calculation for messages containing both “emotionally obvious” and “emotionally ambiguous” keywords, such as the message “I love you even though I think you’re embarrassing”. The first part (“I love you”) receives similar weights for each emotion under any of the three methods, but the second part (“you’re embarrassing”) receives similar weights under methods 1 and 2 and a much higher weight under method 3. Under method 3, sadness becomes more significant in the message than under methods 1 or 2, which is a potential bias in favor of “emotionally ambiguous” keywords. Therefore, method 3 was ruled out as well.

We chose method 2, which takes into account the fact that respondents were able to select multiple emotions for each keyword, by calculating the weight for each emotion based on the total number of respondents (as opposed to the total number of votes for all emotions per keyword, as in method 1).
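
To make the comparison above concrete, here is a minimal sketch of the three weighting schemes applied to hypothetical vote counts for a single keyword (function and variable names are ours, not the system’s):

```python
EMOTIONS = ["sadness", "happiness", "anger", "disgust", "surprise", "fear", "neutral"]
N_RESPONDENTS = 61  # total number of survey respondents

# Hypothetical vote counts for one keyword; since respondents could select
# several emotions, the votes may add up to more than the respondent count.
votes = {"anger": 50, "disgust": 20, "sadness": 8}

def method1(votes):
    """Percent of total votes: an emotion's votes out of all votes for the keyword."""
    total = sum(votes.values())
    return {e: votes.get(e, 0) / total for e in EMOTIONS}

def method2(votes, n=N_RESPONDENTS):
    """Percent of total respondents: an emotion's votes out of all respondents."""
    return {e: votes.get(e, 0) / n for e in EMOTIONS}

def method3(votes):
    """Relative to the "top" emotion: an emotion's votes out of the maximum votes."""
    top = max(votes.values())
    return {e: votes.get(e, 0) / top for e in EMOTIONS}
```

Under method 2, a weight approaches 1 only when nearly all respondents selected that emotion, so broad agreement is rewarded without penalizing respondents who selected several emotions.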

2.2. Assuming a Message’s Emotion

In order to choose and then display a facial expression of the emotion assumed to be evoked by a certain message, the system parses the message text to find keywords and phrases stored in the emotion database. For each keyword or phrase found, the system adds weight increments for each of the 7 emotions, based on the emotion database, to form an overall emotion for the message. The facial image displayed on the screen expresses the emotion with the highest weight at the end of the calculation.
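
The following sketch illustrates this calculation, assuming a method-2 emotion database keyed by lowercase keywords and phrases (the database excerpt and all names are illustrative):

```python
EMOTIONS = ["sadness", "happiness", "anger", "disgust", "surprise", "fear", "neutral"]

# Hypothetical excerpt of the emotion database: keyword/phrase -> emotion weights.
EMOTION_DB = {
    "love": {"happiness": 0.98, "surprise": 0.05},
    "idiot": {"anger": 0.82, "disgust": 0.33, "sadness": 0.13},
}

def assume_emotion(message):
    """Sum the emotion weights of every database keyword or phrase found in
    the message, and return the emotion with the highest total weight."""
    text = message.lower()
    totals = {e: 0.0 for e in EMOTIONS}
    for phrase, weights in EMOTION_DB.items():
        if phrase in text:  # naive keyword spotting, as in the prototype
            for emotion, weight in weights.items():
                totals[emotion] += weight
    if not any(totals.values()):
        return "neutral"  # no known keyword found: fall back to the neutral face
    return max(totals, key=totals.get)

print(assume_emotion("You are an idiot, but I love you"))  # happiness (0.98 > 0.82)
```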

2.3. Designing the User Interface

The CMC system prototype (Figure 2) was designed in a way that gives the sense of contemporary instant messaging applications (such as WhatsApp and other synchronous messaging systems). Each message is displayed inside a speech bubble coming from the “speaker’s” side. The system is geared toward Hebrew speakers, so the text input and conversation display areas are on the right side. On the left side of the screen is an image with a facial expression depicting the assumed (calculated) emotion.

Figure 2. The interface.

Layout. The prototype system uses 7 images of facial expressions representing the aforementioned 7 emotions (as shown in Figure 3). We chose images of a young adult Caucasian male from an existing facial expression estimation technology [20]. In a real working system, the image should correspond to the user’s age group (child, teenager, young adult, adult or senior), gender and race, in order to create a higher level of empathy.

Figure 3. Facial expressions used in the system’s prototype (retrieved from [20]).

The interface includes a textual label (“tooltip”) for each image as an aid to ensure that the user understands the emotion displayed correctly.

User Textual Input. Since the current research is a preliminary attempt to evaluate the acceptance of the emotional CMC idea, we intentionally designed a simple and experimental demonstration, and therefore the first prototype lacks elements that are crucial for actual communication. We will address this later and suggest improvements for the following prototypes. Since we had a predefined pool of keywords and phrases in our database, and also for the purpose of avoiding complexities that arise from allowing participants to input free text, in the current version of the prototype we asked the participants to choose a message from a pre-composed list as we describe below.

Since our focus is on emotional feedback for the message sender, and the user’s role as a message receiver is currently irrelevant, we did not want to introduce unwanted noise in the form of incoming messages; thus the prototype only simulated a chat. In order to give the participants feedback after sending a message, and a feeling of a “conversation” with back-and-forth messages, each message sent was answered by an automated response (a reply of “…”).

3. Demonstration and Evaluation

This section includes a description of a user test conducted for demonstrating the idea of an emotionally enhanced CMC, followed by the results we received in our evaluation of the acceptance and attitudes toward the idea.

3.1. Demonstration

The system prototype was used to demonstrate an emotionally enhanced CMC system. In total there were 30 participants, 4 of whom did not follow all the stages of the test; their responses were therefore disregarded in the analysis. Among the remaining 26 respondents, there were 13 male participants and 13 female, with an average age of 32.4 (ranging from 15 to 61). We used a convenience sample of participants, comprised of friends and family members, all of whom are familiar with social network systems, and specifically with chatting systems.

Test Phases. The test system guided the participants through a number of phases: interacting with the system, a questionnaire regarding the perceived usability of the system, feedback regarding the perceived correctness of the system’s assumed emotions, and their overall opinion of such a system.

First Phase. At the beginning the participants were presented with a short written explanation of the prototype and of the system’s purpose.

Chat Phase. The first screen of the user interface was a login screen, asking the participants for their age and gender. After logging into the system, the participants were presented with the chat screen (Figure 2). They were required to send 10 messages (one at a time) from a pre-composed list of 25 textual messages. The 25 messages were composed using different keywords and phrases from the emotion database, while making sure that each of the 6 emotions (facial expressions) would result from at least one of the messages (in other words, that each emotion would be evoked by the calculation of the assumed emotion algorithm). Each time the participants were to send a message, they were shown a dropdown list with a set of 5 random messages from the pool of 25 pre-composed messages. After selecting a message from the list and before sending it, the participants were instructed to press a “check” button in order to run the algorithm that calculates the message’s assumed evoked emotion and displays the image of the corresponding facial expression. To ensure that each participant would send 10 messages, an “end conversation” button allowing participants to move forward to the next phase was hidden, and appeared only after the tenth message was sent.
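
A small sketch of the selection mechanics described above (illustrative placeholder messages):

```python
import random

pool = [f"pre-composed message {i}" for i in range(1, 26)]  # placeholder texts

for turn in range(10):                # each participant sends 10 messages
    options = random.sample(pool, 5)  # a dropdown of 5 random messages
    chosen = options[0]               # stand-in for the participant's choice
    # "check" button: run the assumed-emotion algorithm and show the face,
    # then send; the "end conversation" button appears only after turn 10.
```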

Accuracy of the System Phase. After finishing the “chat”, the participants were presented with a “teach me” screen, where they were asked to “teach” the system which emotion they thought would be evoked by each message sent. A list of the 10 messages sent was presented, alongside the emotion that was displayed by the system in the previous chat phase. For each message, participants were able to mark a checkbox if the emotion displayed by the system was, in their opinion, incorrect, and then select the correct emotion from a dropdown list of the 7 emotions. This phase had two purposes. The first was to evaluate the accuracy of the system in judging emotional reactions to messages. The second was to allow participants to experience the act of teaching the system the correct emotion (which is intended to be a feature of a future emotional CMC system with “learning” abilities).

System Perceived Usability Phase. In this phase, the participants were presented with an adjusted System Usability Scale (SUS), a standard 10-item 5-point Likert questionnaire, resulting in a SUS score which allows a comparison of different systems [21]. SUS is considered a highly robust and versatile tool for usability professionals, providing participants’ estimates of the overall usability of an interface, regardless of the type of interface [22].
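
For reference, the standard SUS scoring rule [21] works as follows: each odd-numbered item contributes its rating minus 1, each even-numbered item contributes 5 minus its rating, and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch:

```python
def sus_score(ratings):
    """Compute a SUS score from the ten 1-5 Likert ratings (item 1 first)."""
    assert len(ratings) == 10
    contributions = [
        r - 1 if i % 2 == 0 else 5 - r  # odd-numbered items sit at even indices
        for i, r in enumerate(ratings)
    ]
    return 2.5 * sum(contributions)

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0, the best possible score
```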

Open Questions Phase. Upon completing the use of the system and answering the SUS questionnaire, the participants were presented with 13 open questions regarding the system. The questions concerned the perceived usability of the system (ease of use), their acceptance of and willingness to use such a system, where they think such a system would be useful, and so on.

3.2. Evaluation

Results. We now present the results, divided into three parts: 1. System Usability Scale; 2. System Accuracy―examining the correctness of the system’s algorithm in assuming the emotion; 3. Open Questions―analyzing the responses to the open questions in order to learn about the attitudes of the participants regarding the idea of an emotionally enhanced CMC.

System Usability Scale. Results for the SUS questionnaire appear in Table 5. The average of all the individual SUS scores is the SUS score for the interface.

An independent-samples t-test was conducted to compare the SUS scores between male and female respondents. A significant difference was found between the scores of men (M = 83.846, SD = 8.14) and women (M = 90, SD = 6.038), t(24) = 2.1892, p = 0.0385, meaning that the female participants perceived the system as significantly more usable than the male respondents. There is a need to interpret the received numerical SUS score as a textual description, to determine whether or not the received score indicates that the system is usable. We followed a previous study conducted to determine the meaning of SUS scores, and concluded that the total SUS score for both genders can be translated as “Excellent”, meaning the system has a very high level of usability [23].
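
This comparison can be reproduced from the reported summary statistics alone, e.g. with SciPy (a sketch assuming 13 respondents per group, as in the sample; the same test applies to the accuracy comparison below):

```python
from scipy import stats

# Reported SUS scores: men M = 83.846, SD = 8.14; women M = 90, SD = 6.038.
t, p = stats.ttest_ind_from_stats(
    mean1=83.846, std1=8.14, nobs1=13,  # male respondents
    mean2=90.0, std2=6.038, nobs2=13,   # female respondents
)
print(f"t(24) = {abs(t):.4f}, p = {p:.4f}")  # t(24) = 2.1892, p = 0.0385
```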

System Accuracy. The measurement of the system’s accuracy was based on the results of the “teach me” screen. Table 6 presents the percent of cases in which the system’s calculated emotion was correct, according to the participants. The accuracy of the system is measured as the percent of displayed emotions that were considered correct by the users, out of all emotions displayed.

Table 5. SUS scores for all users and by gender.

Table 6. Accuracy in terms of the percent of correct emotions.

Overall we can conclude that the system’s accuracy in assuming the emotion is satisfactory. The female respondents agreed more with the emotions displayed than the male respondents did. An independent-samples t-test was conducted to compare the accuracy reported by the male and female respondents. No significant difference was found between men (M = 0.8538, SD = 0.133) and women (M = 0.9231, SD = 0.1092), t(24) = 1.4505, p = 0.1599. We found that about half of the emotion corrections made by male respondents (10 out of 19 corrections) were to the “neutral” option, whereas the female respondents’ corrections were almost always to an alternative emotion (only a single correction out of 10 was to the “neutral” option). Perhaps this can be explained by previous findings that men are more likely to have difficulty distinguishing one emotion from another [24], and that they are less accurate and less sensitive in labeling facial expressions [25], and therefore they tended to choose an unspecific option (“neutral”). We cannot think of any other explanation for this finding, and since our sample is small, further research is needed to examine whether there are significant differences in system accuracy perceptions between women and men in this type of system, or whether there are different emotional responses to keywords and phrases between genders.

Open Questions. The open questions were about the usability of the system, the acceptance of such a system and the willingness to actively teach such a system.

System Usability. 80% of respondents (21) indicated that the emotion displayed by the system was understandable, though most users did not notice the tooltip with the textual label of the emotion displayed. From this we can conclude that a facial expression alone is usually enough to convey the emotion that would be evoked on the side of the message receiver. A recurring comment was that there were not enough emotions included in the system, with suggestions for specific additions (such as “concern” and “disappointment”). This came up when the respondents were asked to say whether or not the emotion displayed by the system was correct. Another suggestion from the respondents for improving the system was to add an explanation of which part of the message causes the emotion displayed, such as a popup message or the use of colors to highlight the relevant part, without the user having to search for it or make assumptions.

System Acceptance. 73% of the respondents (19), when asked if they would use such a system for communicating, stated that they would. The respondents who stated that they would not make use of such a system indicated that they have a high level of emotional intelligence and do not feel the need for such a system. When asked to elaborate on the situations in which such a system would be used, the prevalent answer was: when there is uncertainty about whether the message being sent evokes the desired emotion in its receiver. Another reason indicated for using such a system was during conversations with unfamiliar people or with people in a professional capacity (co-workers, managers, customer relations).

Another question regarding the acceptance of such a system was whether or not the respondent would recommend such a system and if so, to whom. All but one of the respondents indicated that they would recommend such a system, to family members and to friends. A number of respondents indicated that such a system should be used among co-workers and in business related conversations. Another recommendation was that such a system would be effective for people with a low level of emotional intelligence (such as people diagnosed with Asperger syndrome) and for children and teenagers.

In addition to the willingness to use and to recommend such a system, the respondents were asked where they would like to see such a system incorporated. The majority of responses (80%) indicated instant messaging applications, chats and email, with specific references to the Facebook and WhatsApp applications.

Willingness to Teach the System. We asked about the users’ willingness to actively teach the system the correct emotional reaction to a message. Three levels of interaction with the system were asked about, each level including the previous interaction and an increased level of user involvement: 1) noting the correctness of the emotion displayed (correct/incorrect); 2) selecting the correct emotion; 3) marking which specific keyword or phrase evoked the emotion. Overall, the responses indicate a high level of willingness to teach such a system, with remarks such as “if it would help someone” and “if it would cause the system to be accurate”. The few reservations raised about teaching the system concerned the amount of effort a user needs to invest in order to teach the system. Some of the respondents considered the third level of interaction (marking a specific keyword or phrase) too complicated, and asked whether there was some sort of gain or incentive for such an interaction with the system.

4. Conclusions and Discussion of Limitations and Future Directions

The main objective of this work was to examine the acceptance of and attitudes towards an emotionally enhanced CMC system that raises the sender’s awareness of the emotion assumed to be evoked in the receiver. This was done by creating a prototype of such a system and conducting a user test. The responses from the participants indicate a good level of acceptance for a system aimed at preventing emotional misinterpretation and harm. The algorithm used in the prototype system, which calculates the assumed emotion of the receiver using weighted keywords and phrases, produces satisfactory results: according to the test participants, the system displayed a correct emotion most of the time. We found that female participants perceived the system as significantly more usable than the male respondents. Previous research findings show that females and males differ in their emotional reactions to facial expressions [26] [27], and therefore there is a need to further examine gender differences in attitudes toward an emotional CMC system with facial expressions. Encouraged by the results of our preliminary evaluation, we are certain that the idea for a system as proposed here should be studied more thoroughly and extensively, in order to determine the real feasibility of such a system and its possible implementations.

4.1. Limitations and Future Directions

For practical reasons, and since this is a preliminary demonstration of our idea of emotionally enhanced CMC in informal contexts, we used a simple algorithm for analyzing the message text. For the same reason, and due to the complexity of mining algorithms and the involvement of artificial intelligence (AI) issues, the demonstration was not designed as a real two-way conversation. We focused on the sender’s side, the message’s author, and how he would accept a system that offers him the chance to revise his message to prevent affective misinterpretations. This is why participants were unable to freely type whatever message they wanted, and instead used a pre-composed list of messages and a rather small emotion database of keywords and phrases. We intend to continue with the demonstration, but with a much more complex and reliable algorithm for text analysis. A future study should also allow the user to type any “free” text (not only select a pre-composed message), and the test should be conducted in a way that allows a conversation between individuals (not only a simulation with an automated response). Then the system would ask the receiver what emotion the message evoked in him, and we would be able to compare the receiver’s choice to the emotion calculated by the system.

Another shortcoming is that we conducted our evaluation with a small group of users, comprised mostly of friends and family members, meaning that the results and conclusions should be verified in future, more extensive studies.

To evoke emotional responses of empathy, we used photographic expressions of a “real” person, instead of using Chernoff-style faces or some other type of lifelike cartoon face. But our realistic approach may have disadvantages, such as difficulty empathizing with dissimilar persons (e.g. of the opposite gender, a different age, race, etc.). People experience more empathic emotions when interacting with people with whom they have a communal relationship, or proximity [28]. Future work that deals with designing systems that express emotions by faces should take into account a wider range of photographic images, corresponding to the user’s group, in order to ensure a higher level of empathy.

For creating more realistic conversations, future emotional CMC system prototypes should deal with AI issues. In real CMC, several distinct emotions may come up during a single conversation, and the reaction to a specific message may be affected by many contextual factors that were not addressed in the current system prototype. Such factors include previous utterances exchanged in the same conversation or in past conversations, the communicators’ current context (e.g. mood, goal, beliefs), the nature of the emotional relationships between communicators prior to the conversation (e.g. pleasant versus tense relationships), or the types of relationships between communicators (e.g. group types such as professional, familial, and friendship, with additional sub-group types such as siblings, parent-child, co-workers, employee-employer and so on). Also, if a comment in a message is about a third party, the emotion evoked in the receiver might differ from the emotion evoked if the comment were directed at him. A real CMC system implementing our idea should deal with the complexity of human communication.

Another aspect that makes written utterances hard for a computerized system to interpret is identical word forms: many words can be used as either an adjective or a noun in different contexts, and tools that can overcome this issue are needed. For example, PCFG (Probabilistic Context-Free Grammars) [29] is a method that statistically determines the role of a word in a sentence. Such tools are indeed relevant and should be incorporated in future emotional CMC interactive systems.
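
For instance, an off-the-shelf statistical part-of-speech tagger can resolve whether a word acts as an adjective or a noun in a given sentence (a sketch using NLTK; the PCFG approach of [29] is a more principled variant of the same idea):

```python
import nltk

# One-time setup: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
for sentence in ["She ordered a light meal.", "Please turn off the light."]:
    tags = nltk.pos_tag(nltk.word_tokenize(sentence))  # statistical role per word
    print([tag for word, tag in tags if word == "light"])
# Expected output: ['JJ'] (adjective) for the first sentence, and ['NN'] (noun)
# for the second.
```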

A real emotionally enhanced CMC system needs to understand the meaning of a keyword or phrase based on the conversational context. Of course, this requires AI components. Language is full of figurative speech: expressions that are used in certain situations and contexts, such as idioms and metaphors. Idioms such as “break a leg” and “piece of cake” each have two different meanings, one literal (taking the words in their most basic sense) and the other figurative (meaning “good luck” and “it’s very easy”, respectively, in the examples given above). Another example is the metaphorical phrase “out of this world”, which has two different meanings: while one may refer to something that is extraterrestrial, the other may describe something as remarkable or excellent. Intelligent systems should “know” whether an utterance is intended to be taken literally or figuratively.
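
One simple mitigation is to match known multi-word idioms before word-level keyword spotting, so the figurative sense takes priority (an illustrative sketch; the idiom table and its weights are hypothetical):

```python
# Hypothetical idiom table: figurative phrases mapped to emotion weights that
# reflect their figurative, not literal, sense.
IDIOMS = {
    "break a leg": {"happiness": 0.7},    # "good luck", not an injury
    "piece of cake": {"happiness": 0.6},  # "very easy", not food
}

def spot_phrases(text, emotion_db, idioms=IDIOMS):
    """Return matched (phrase, weights) pairs, matching idioms first and
    blanking them out so their words are not re-matched literally."""
    text = text.lower()
    matches = []
    for idiom, weights in idioms.items():
        if idiom in text:
            matches.append((idiom, weights))
            text = text.replace(idiom, " ")  # prevent literal re-matching
    for keyword, weights in emotion_db.items():
        if keyword in text:
            matches.append((keyword, weights))
    return matches
```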

In addition, an emotionally enhanced CMC system will need to learn new words and their meanings. Like relationships, language is dynamic. Keywords and phrases in emotional CMC databases will need to be constantly updated in order to keep up with language development and to take new slang into account. Therefore, we plan to address the issue of a learning system, which will dynamically update its emotion database with new keywords and phrases, and also update the emotion weights, on the basis of users’ willingness to teach the system about their emotional reactions to messages. We will rely on previous work in the area of persuasive technology and persuasive design [30], and on gamification techniques applied in recommender systems to encourage users to provide ratings and evaluations (for example [31]).
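
Under method 2, such a learning step could treat each user correction as one additional “respondent” vote and re-derive the weights from the updated counts; a sketch of this assumed update rule (names and data are ours):

```python
# Per-keyword state: votes per emotion and a respondent count (the original
# survey participants plus every user who has given feedback on this keyword).
db = {"idiot": {"votes": {"anger": 50, "sadness": 8}, "respondents": 61}}

def teach(db, keyword, corrected_emotion):
    """Fold one user correction into the counts and recompute method-2 weights."""
    entry = db.setdefault(keyword, {"votes": {}, "respondents": 0})
    entry["votes"][corrected_emotion] = entry["votes"].get(corrected_emotion, 0) + 1
    entry["respondents"] += 1
    return {e: v / entry["respondents"] for e, v in entry["votes"].items()}

print(teach(db, "idiot", "disgust"))  # weights now computed over 62 "respondents"
```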

An additional route to follow in emotional CMC designs is to expand the basic emotions to a richer variety, since the human emotional range extends beyond the 6 universally recognized emotions. CMC systems should also support the notion of “mixed” emotions, such as “happily surprised” and “sadly angry” [32].

4.2. Implications

As previously claimed, it is important to interpret keywords and phrases in the correct context. Unlike our rather artificial demonstration of the idea of emotional awareness in CMC, a real emotionally enhanced CMC would most definitely include complicated AI aspects, making it capable of “understanding” various textual phrases and of taking contextual information and the relationship between communicators into account. Systems that create and constantly update user profiles may be designed to capture various types of interpersonal information in order to contextualize the dynamic nature of relationships between communicators.

Emotionally enhanced CMC aimed at increasing the message sender's awareness of the receiver's emotional reaction can be applied to social network systems, education contexts, and also to work-related communication and computer-supported cooperative work (CSCW).

Schools have an important role in fostering not only children's cognitive development but also their social and emotional development [33]. In the twenty-first century, when elementary and high school children increasingly use ICT, it is more important than ever that educators be aware of the emotional distance that can be intensified by these communication media. This is why we consider educational contexts, especially school portals with online chatting and forums, to be most suitable, and we even see them as an opportunity to teach youngsters the values and skills of consideration for others, compassion and empathy.

Since the practice of medicine is fraught with emotion [34], healthcare systems that involve communication between medical staff and patients will surely benefit from an emotionally sensitive CMC. Health support technologies with emotionally enhanced CMC may help medical staff move from detached concern to empathic guidance.

Nowadays, elderly people (age 65 and older) are deepening their adoption of technology, and their usage of social media has tripled since 2010 [35]. Emotionally enhanced CMC systems have a high potential for bridging gaps that are due to language differences between generations. A great deal of work in the area of designing for the elderly is already invested in making “age-friendly” interfaces, with a focus on human-computer interaction [36], but additional emphasis needs to be put on interpersonal interactions via computers. We are convinced that CMC systems that raise emotional awareness would have a positive effect on intergenerational relationships, especially between grandparents and their grandchildren. Future CMC systems using facial expressions to convey the assumed emotion evoked in the receiver could also highlight the specific parts of the message that evoked the emotion, so the message sender would be encouraged to decide whether an additional explanation is needed to clarify an expression that is likely to be misunderstood.

Our current demonstration and evaluation show that the idea of an emotionally enhanced CMC system is acceptable and perceived as useful for interpersonal engagement in CMC systems. We are certain that any human communication can benefit from systems that reduce emotional misinterpretation, and that encourage users to control their communicative actions and be more thoughtful.

Cite this paper

David, M. and Katz, A. (2016) Emotional Awareness: An Enhanced Computer Mediated Communication Using Facial Expressions. Social Networking, 5, 27-38. doi: 10.4236/sn.2016.51004

References

1. Derks, D., Fischer, A.H. and Bos, A.E. (2008) The Role of Emotion in Computer-Mediated Communication: A Review. Computers in Human Behavior, 24, 766-785. http://dx.doi.org/10.1016/j.chb.2007.04.004

2. Dennis, A.R. and Kinney, S.T. (1998) Testing Media Richness Theory in the New Media: The Effects of Cues, Feedback, and Task Equivocality. Information Systems Research, 9, 256-274. http://dx.doi.org/10.1287/isre.9.3.256

3. Westmyer, S.A., DiCioccio, R.L. and Rubin, R.B. (1998) Appropriateness and Effectiveness of Communication Channels in Competent Interpersonal Communication. Journal of Communication, 48, 27-48. http://dx.doi.org/10.1111/j.1460-2466.1998.tb02758.x

4. Byron, K. (2008) Carrying Too Heavy a Load? The Communication and Miscommunication of Emotion by Email. Academy of Management Review, 33, 309. http://dx.doi.org/10.5465/AMR.2008.31193163

5. Te’eni, D. (2001) Review: A Cognitive-Affective Model of Organizational Communication for Designing IT. MIS Quarterly, 25, 251-312. http://dx.doi.org/10.2307/3250931

6. Krauss, R.M. and Fussell, S.R. (1996) Social Psychological Models of Interpersonal Communication. In: Higgins, E.T. and Kruglanski, A., Eds., Social Psychology: A Handbook of Basic Principles, Guilford, New York, 655-701.

7. Mayer, J.D., Caruso, D.R. and Salovey, P. (1999) Emotional Intelligence Meets Traditional Standards for an Intelligence. Intelligence, 27, 267-298. http://dx.doi.org/10.1016/S0160-2896(99)00016-1

8. Katz, A. and Te’eni, D. (2007) The Contingent Impact of Contextualization on Computer-Mediated Collaboration. Organization Science, 18, 261-279. http://dx.doi.org/10.1287/orsc.1060.0237

9. Li, Q. (2007) New Bottle but Old Wine: A Research of Cyberbullying in Schools. Computers in Human Behavior, 23, 1777-1791. http://dx.doi.org/10.1016/j.chb.2005.10.005

10. Sproull, L. and Kiesler, S. (1986) Reducing Social Context Cues: Electronic Mail in Organizational Communication. Management Science, 32, 1492-1512. http://dx.doi.org/10.1287/mnsc.32.11.1492

11. Hamburger, Y.A. (2013) The Good, the Bad and the Ugly—The Psychology of Life on the Internet. Matar, Tel Aviv.

12. Shmueli-Scheuer, M., Feigenblat, G., Borovoy, I., Daniel, T., Herzig, J. and Konopnicki, D. (2015) Extending Email with Emotion Analysis. IsraHCI, Hertzelia.

13. Toma, C.L. and Hancock, J.T. (2010) Reading between the Lines: Linguistic Cues to Deception in Online Dating Profiles. Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, 5-8. http://dx.doi.org/10.1145/1718918.1718921

14. Thelwall, M., Buckley, K., Paltoglou, G., Cai, D. and Kappas, A. (2010) Sentiment Strength Detection in Short Informal Text. Journal of the American Society for Information Science and Technology, 61, 2544-2558. http://dx.doi.org/10.1002/asi.21416

15. Gupta, N., Gilbert, M. and Di Fabbrizio, G. (2013) Emotion Detection in Email Customer Care. Computational Intelligence, 29, 489-505. http://dx.doi.org/10.1111/j.1467-8640.2012.00454.x

16. Settanni, M. and Marengo, D. (2015) Sharing Feelings Online: Studying Emotional Well-Being via Automated Text Analysis of Facebook Posts. Frontiers in Psychology, 6. http://dx.doi.org/10.3389/fpsyg.2015.01045

17. Liu, H., Lieberman, H. and Selker, T. (2002) Automatic Affective Feedback in an Email Browser. MIT Media Lab Software Agents Group, ACM.

18. Lo, S.-K. (2008) The Nonverbal Communication Functions of Emoticons in Computer-Mediated Communication. CyberPsychology & Behavior, 11, 595-597. http://dx.doi.org/10.1089/cpb.2007.0132

19. Ekman, P. and O’Sullivan, M. (1991) Facial Expression: Methods, Means, and Moues. Fundamentals of Nonverbal Behavior, 1, 163-199.

20. OMRON Global (n.d.) https://www.omron.com/

21. Brooke, J. (1996) SUS—A Quick and Dirty Usability Scale. Usability Evaluation in Industry, 189, 4-7.

22. Bangor, A., Kortum, P.T. and Miller, J.T. (2008) An Empirical Evaluation of the System Usability Scale. International Journal of Human-Computer Interaction, 24, 574-594. http://dx.doi.org/10.1080/10447310802205776

23. Bangor, A., Kortum, P. and Miller, J. (2009) Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. Journal of Usability Studies, 4, 114-123.

24. Thayer, J. and Johnsen, B. (2000) Sex Differences in Judgment of Facial Affect: A Multivariate Analysis of Recognition Errors. Scandinavian Journal of Psychology, 41, 243-246. http://dx.doi.org/10.1111/1467-9450.00193

25. Montagne, B., Kessels, R.P., Frigerio, E., de Haan, E.H. and Perrett, D.I. (2005) Sex Differences in the Perception of Affective Facial Expressions: Do Men Really Lack Emotional Sensitivity? Cognitive Processing, 6, 136-141. http://dx.doi.org/10.1007/s10339-005-0050-6

26. Hall, J.A. and Matsumoto, D. (2004) Gender Differences in Judgments of Multiple Emotions from Facial Expressions. Emotion, 4, 201-206. http://dx.doi.org/10.1037/1528-3542.4.2.201

27. Sawada, R., Sato, W., Kochiyama, T., Uono, S., Kubota, Y., Yoshimura, S., et al. (2014) Sex Differences in the Rapid Detection of Emotional Facial Expressions. PLoS ONE, 9, e94747. http://dx.doi.org/10.1371/journal.pone.0094747

28. Paiva, A., Dias, J., Sobral, D., Woods, S. and Hall, L. (2004) Building Empathic Lifelike Characters: The Proximity Factor. Workshop on Empathic Agents, AAMAS, 4.

29. Johnson, M. (1998) PCFG Models of Linguistic Tree Representations. Computational Linguistics, 24, 613-632.

30. Fogg, B.J. (2002) Persuasive Technology: Using Computers to Change What We Think and Do. Ubiquity, 2002, 2. http://dx.doi.org/10.1145/764008.763957

31. Ziesemer, A.d.C.A., Müller, L. and Silveira, M.S. (2014) Just Rate It! Gamification as Part of Recommendation. In: Kurosu, M., Ed., Human-Computer Interaction: Applications and Services, Springer, Berlin, 786-796.

32. Du, S., Tao, Y. and Martinez, A.M. (2014) Compound Facial Expressions of Emotion. Proceedings of the National Academy of Sciences of the United States of America, 111, E1454-E1462. http://dx.doi.org/10.1073/pnas.1322355111

33. Durlak, J.A., Weissberg, R.P., Dymnicki, A.B., Taylor, R.D. and Schellinger, K.B. (2011) The Impact of Enhancing Students’ Social and Emotional Learning: A Meta-Analysis of School-Based Universal Interventions. Child Development, 82, 405-432. http://dx.doi.org/10.1111/j.1467-8624.2010.01564.x

34. Angoff, N.R. (2013) Making a Place for Emotions in Medicine. Yale Journal of Health Policy, Law, and Ethics, 2, 8.

35. Perrin, A. (2015) Social Media Usage: 2005-2015. Pew Internet & American Life Project, Washington DC.

36. Williams, D., Ahamed, S., Chu, W.C.C., Wang, M.-T. and Chang, C.-H. (2014) Cloud-Based Synchronization for Interface Settings for Older Adults. International Journal of Hybrid Information Technology, 7, 29-42. http://dx.doi.org/10.14257/ijhit.2014.7.4.04