Journal of Computer and Communications
Vol. 06, No. 12 (2018), Article ID: 89164, 19 pages
DOI: 10.4236/jcc.2018.612002

Detecting Human Mood from Physiological Signal and Data Usage

Iftakhar Hossain, Tanzila Islam, Mohammad Raihan Ruhin

Department of Computer Science and Engineering, Jahangirnagar University, Dhaka, Bangladesh

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: November 8, 2018; Accepted: December 11, 2018; Published: December 14, 2018

ABSTRACT

As the days go by, new technologies are introduced every day, whether a tiny music player like the iPod nano or a robot like ASIMO that runs 6 kilometers per hour. These technologies entertain people, facilitate their work, and make their days easier. It is no longer arguable that people need such smart systems to lead their regular lives smoothly, and the smarter a system is, the more people like to use it. A major part of this smartness depends on how well the system can interact with its user. It is no longer a dream that a system will be able to interact with a human just the way one human interacts with another. To make that happen, the system must obviously be intelligent enough to understand a human being. For example, a robot that can hold a casual conversation with a human must recognize and understand the spoken words in order to reply, and the reply will depend on the current mood and behavior of the human. In this scenario, a human uses his senses to receive inputs from the speaker: voice through hearing, and body movements and facial expressions through sight. It is now quite possible for a system to take such inputs and store them as data, analyze the data with various algorithms, and teach the system through machine learning. We briefly discuss issues related to the relevance and possible impact of research in the field of Artificial Intelligence, with special attention to Computer Vision and Pattern Recognition, Natural Language Processing, Human-Computer Interaction, and Data Warehousing and Data Mining, which are used to identify and analyze data such as physiological signals, voice, conversation, geolocation, and local weather. In our research, we have used heart rate, a reliable physiological signal, to detect human mood, and used smartphone usage data to train the system and detect mood more accurately than other methods.

Keywords:

Mood Detection, Pattern Recognition, Euclidean Formula, Physiological Signals, Machine Learning, Data Mining, Natural Language

1. Introduction

Human mood plays a crucial role in a person’s daily life. It sets up how the rest of the day will go: it influences our behavior, drives social communication, and shifts our consumer orientations. Without any argument, we can state that it would be an easier and different world if devices could understand the user more precisely, and to make that happen, the device must obviously recognize the user’s mood. Supposing Siri could identify the user’s mood, it would try to cheer the user up when the user is depressed, and it could also hold an ordinary conversation with the user as we do. The recognition of human mood would bring a new era to the current digital social ecosystem. These days on social sites, people are often seen sharing their mood along with their thoughts, but there is no way to verify that the mood a user shares is correct. This sharing also differs from user to user owing to different personalities; for example, a reserved person would never share his or her mood while in a bad mood. A study tells us that 6.6% of people usually never like to share their mood [1]. Mood sensing can also enable users to communicate digitally closer to the way they would communicate in real life. For mood sharing, an automatic mood sensor would not only improve usability but also, more importantly, lower the social barrier for a user to share their mood: we do not directly tell others our mood very often, but we do not try to conceal our mood very often either. To enable these scenarios, we consider a system that recognizes a user’s mood by gathering physiological signals [1], following conversations [2], noticing the user’s voice tone [3], and observing smartphone usage patterns [4]. The strength of our proposed system lies in its ability to peer into usage data and the user’s physiological signals and extrapolate the mood of the user. Common observations inspire our approach. We can now assess physiological signals through the latest smartphones, such as the Samsung Galaxy S5, which can measure heart rate, while other devices, such as the Gear Fit wristband, can measure physiological signals and pass the information to the smartphone. Beyond this, our smartphones hold rich information about us: where we are, what we eat, what we like, what we do, and so on.

Furthermore, people use their smartphones differently when they are in different mood states. An application named MoodScope attempts to leverage these patterns by learning about its user and associating smartphone usage patterns with certain moods. Our system’s approach is not invasive: it does not require users to carry any extra hardware sensors, as smartphones can now acquire physiological signals themselves. Our proposed system passively runs in the background, monitors traces of the user’s smartphone usage, keeps track of voice calls in order to judge voice tone in different moods, and keeps track of the heart rate. The objectives of this research are:

・ To detect human mood.

・ To increase the efficiency of social robots.

・ To make interaction between humans and computers or other intelligent systems easier.

・ To establish a hypothesis for increasing the accuracy of the existing methods of mood detection.

The remainder of this paper is structured as follows: Section 2 presents the literature review, covering the background and relevant works. Section 3 gives an overview of the problem, the purpose of the study, and the challenges that inspired us to choose this topic. Section 4 discusses our proposed method along with our observations and the algorithm used in our calculations. Finally, Section 5 concludes the paper by discussing the limitations of our proposed method and opportunities for improvement.

2. Literature Review

2.1. Background

In this section, we provide background on how mood is measured in psychology research and physiological research. In recent years, family therapists have sought to establish the credibility of their therapeutic approach by building the evidence base for models of practice; one such family therapy model is Olson’s Circumplex Model [5]. In physiological research, the physiological signals ECG, EMG, SC, and RSP, analyzed with the GA-KNN method [6], have generated trustworthy results. In the following subsections, we elaborate a bit more on the Circumplex mood model and on emotion pattern recognition with physiological signals.

2.1.1. Circumplex Mood Model

The Circumplex mood model (Figure 1) employs a small number of dimensions to describe and measure mood. The model consists of two dimensions: the pleasure dimension and the activeness dimension. The pleasure dimension evaluates how positive or negative one feels. The activeness dimension measures whether one is likely to take an action under the mood state, from active to passive. As demonstrated in MoodScope [1], users are able to consistently place discrete affects in the two-dimensional space. The Circumplex model has also been well corroborated and widely used in other studies [5] [7]. Another common approach to describing affect is through the use of discrete categories [8]. A very popular example is Ekman’s six basic categories: happiness, sadness, fear, anger, disgust, and surprise [9]. This approach is intuitive and matches people’s daily experience as well. However, basic categories fail to cover the full range of people’s affect displays, and it is hard to decide on a common set of independent discrete mood categories. It is also difficult to quantify affect with discrete categories. Yet another approach used in psychology has been the Positive and Negative Affect Schedule (PANAS) model [7] [8]. The PANAS model is based on the idea that it is possible to feel good and bad at the same time [5]. However, this is a complex concept for an ordinary user to adopt, so we have avoided the PANAS model.

Figure 1. The Circumplex mood model [1].
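To make the two-dimensional representation concrete, the following minimal Python sketch treats a mood as a (pleasure, activeness) point and buckets the pleasure axis into the five discrete states used later in this paper. The threshold values are hypothetical illustrations, not values from any of the cited studies.

```python
# Minimal sketch of the Circumplex representation: a mood is a point
# (pleasure, activeness) in [-1, 1] x [-1, 1]. The five-way bucketing of
# the pleasure axis mirrors the classes used later in this paper; the
# threshold values themselves are hypothetical.

def pleasure_state(pleasure: float) -> str:
    """Map the pleasure coordinate to one of five discrete mood states."""
    if pleasure < -0.6:
        return "very displeased"
    if pleasure < -0.2:
        return "displeased"
    if pleasure <= 0.2:
        return "neutral"
    if pleasure <= 0.6:
        return "pleased"
    return "very pleased"

def describe_mood(pleasure: float, activeness: float) -> str:
    """Combine both Circumplex dimensions into a readable description."""
    energy = "active" if activeness > 0 else "passive"
    return f"{pleasure_state(pleasure)} ({energy})"

if __name__ == "__main__":
    print(describe_mood(0.8, 0.5))    # "very pleased (active)"
    print(describe_mood(-0.4, -0.7))  # "displeased (passive)"
```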

2.1.2. Emotion Pattern Recognition Using Physiological Signals

Physiological data were acquired in four different affective states and GA-KNN [6] methods were tested. Four significant physiological signals, ECG, EMG, SC, and RSP, which are relatively easy to obtain, were used to recognize emotion. From the processed raw signals, 193 features were obtained. To overcome the difficulty of high-dimensional classification, GA-KNN, a combination of a predictor and a dimension-reducing technique, was adopted. Recognition accuracy is up to 97% [6], which is much higher than in previous studies. The classification results were quite encouraging and showed the feasibility of user-independent emotion recognition based on physiological signals. The experimental results show that classifying emotion this way is feasible and effective. A modern intelligent optimization algorithm was used to find which features are significant for the emotion data. Although differences in the physiological responses of the subjects were noticed, the authors also found similarities; e.g., joy was characterized by high SC and EMG levels, deep and slow breathing, and an increased heart rate, whereas anger was accompanied by flat and fast breathing.
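The GA-KNN combination can be sketched as follows: a genetic algorithm searches over binary feature masks, and each mask’s fitness is the validation accuracy of a K-nearest-neighbor classifier restricted to the selected features. This is a simplified illustration on synthetic data rather than the implementation of [6]; the population size, mutation rate, and k are arbitrary choices.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the 193 physiological features and 4 emotion classes.
X = rng.normal(size=(200, 193))
y = rng.integers(0, 4, size=200)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(mask):
    """Validation accuracy of KNN on the feature subset selected by `mask`."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_tr[:, mask], y_tr)
    return knn.score(X_va[:, mask], y_va)

# Tiny genetic algorithm over binary feature masks (hypothetical settings).
pop = rng.random((20, X.shape[1])) < 0.5           # initial population
for generation in range(30):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)[::-1]]            # rank by fitness
    parents = pop[:10]                             # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = rng.integers(1, X.shape[1])          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.01       # 1% mutation rate
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[0]
print(f"selected {best.sum()} of 193 features, accuracy {fitness(best):.2f}")
```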

We have considered parameters like heart rate as physiological signals, along with smartphone usage data. However, acquiring some of these parameters would make the system physically cumbersome, so some of them remain under observation for future improvements.

2.2. Relevant Work

In the paper “MoodScope: Building a Mood Sensor from Smartphone Usage Patterns” [1], Robert LiKamWa, Yunxin Liu and others showed that human mood can be detected fairly accurately using a training data set: a specific user feeds in his day-to-day data to train the system. Initially they achieved 66% accuracy, but after training on two months of a specific user’s activity, the accuracy reached 93%. Their system works in two parts, a background logger and a mood journaling application. The background logger logs social interactions, such as the number of SMS messages a user sends each day or the number of calls he makes, along with many other parameters like call duration, emails, and the number of visited websites. From these parameters, with some statistical analysis, they infer a user’s mood.
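A minimal sketch of the kind of daily record such a background logger might aggregate is shown below; the field names and the flattening into a feature vector are our own illustrative assumptions, not MoodScope’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical daily usage record of the kind a background logger
# aggregates; the field names here are illustrative, not MoodScope's
# actual schema.
@dataclass
class DailyUsage:
    sms_count: int           # SMS messages sent that day
    call_count: int          # calls made that day
    avg_call_minutes: float  # average call duration
    emails_sent: int
    websites_visited: int

def to_feature_vector(day: DailyUsage) -> list[float]:
    """Flatten one day's usage into a numeric vector for a classifier."""
    return [
        float(day.sms_count),
        float(day.call_count),
        day.avg_call_minutes,
        float(day.emails_sent),
        float(day.websites_visited),
    ]

print(to_feature_vector(DailyUsage(12, 5, 3.5, 4, 22)))
```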

In the paper “Emotion Recognition based on 2D-3D Facial Feature Extraction from Color Image Sequences” [10], Robert Niese, Ayoub Al-Hamadi and others proposed a system that can detect human emotion from a user’s 2D image. Their proposed method consists of seven parts, as shown in Figure 2. In the first part, they take a static 2D image, from which the system extracts facial features and points: the positions of the eyes, eyebrows, and nose are extracted using the RGB color space. They named the second part of their system the “camera model”; it simulates a number of fundamental properties of the image-capturing device. The camera parameters are obtained in a calibration procedure that determines the external and internal parameters; calibration is a well-known approach in photogrammetry and surveying [Albertz, J., Kreiling, Photogrammetric Guide, Herbert Wichmann Verlag GmbH, Karlsruhe, 1989].

In the third part, a 3D geometric model of the user’s face is generated from the output of the first part. The first three stages then feed the fourth stage, estimation of the face pose, in which the system determines the position of the user’s face. In the final three stages, the system gradually generates 3D geometric features, extracts feature vectors, and finally classifies the emotion.

Figure 2. Workflow of the suggested 2D-3D based method [10].

“From Joyous to Clinically Depressed: Mood Detection Using Spontaneous Speech” [11] is the paper where Sharifa Alghowinem, Roland Goecke and others showed how to detect a specific human mood (depression) from speech patterns. They conducted their research at a clinical research facility in Sydney, Australia, offering specialist expertise in depression and bipolar disorder; the subjects included patients diagnosed with pure depression as well as healthy controls. They detected human mood using paralinguistic features (acoustic cues): duration, MFCCs, energy, and pitch variation served as parameters. To classify a human mood, they applied Hidden Markov Models and Gaussian Mixture Models to those parameters, and the results were truly promising.
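The paralinguistic features named above (beyond duration: MFCCs, energy, and pitch) can be extracted with standard audio tooling. The sketch below uses the librosa library and illustrates only the feature extraction step, not the HMM/GMM classification; the file name is a placeholder.

```python
import librosa
import numpy as np

# Placeholder path; any mono speech recording will do.
y, sr = librosa.load("speech_sample.wav", sr=None)

# MFCCs: 13 coefficients per frame, as commonly used in paralinguistics.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Short-time energy per frame (root-mean-square).
energy = librosa.feature.rms(y=y)

# Fundamental frequency (pitch) track via the YIN estimator.
f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)

# Summarize each track into utterance-level statistics, the kind of
# values a downstream HMM/GMM classifier would consume.
features = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),
    [energy.mean(), energy.std(), np.nanmean(f0), np.nanstd(f0)],
])
print(features.shape)  # (30,)
```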

In the paper “Human Emotions Detection using Brain Wave Signals” [12], researcher Ali S. Al Mejrad showed how to detect emotions by looking inside the human brain. In his system, the electrical activity of the brain is recorded through electrodes placed on the scalp, and the recorded brain waves are sent for preprocessing.

In the preprocessing stage, noise, artifacts, and other external interference are removed: noise is removed using a wavelet transform and artifacts using independent component analysis. After preprocessing, a wavelet transform of the preprocessed signal is applied to extract features from the EEG signals. The basic workflow of human emotion detection using EEG signals is shown in Figure 3. In this way, the statistical properties of the signals, such as mean, median, variance, average power, average energy, power spectral density, skewness, and other parameters, are gathered.

After this, the wavelet transform coefficients are dimensionally reduced to simplify the classification process, and finally the user’s emotion is classified.

Figure 3. Basic workflow of human emotion detection using EEG signals [12].
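A hedged sketch of the wavelet-decomposition-plus-statistics step described above is given below, using PyWavelets on a synthetic signal; the wavelet family, decomposition level, and signal are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.stats import skew

rng = np.random.default_rng(1)
eeg = rng.normal(size=2048)  # synthetic stand-in for one EEG channel

# Discrete wavelet decomposition of the (pre-processed) signal; the
# wavelet family and level here are arbitrary illustrative choices.
coeffs = pywt.wavedec(eeg, "db4", level=4)

def band_stats(band: np.ndarray) -> list[float]:
    """Statistical properties per sub-band: mean, median, variance,
    average power, and skewness, as listed in the text."""
    return [
        band.mean(),
        np.median(band),
        band.var(),
        np.mean(band ** 2),  # average power
        skew(band),
    ]

features = np.concatenate([band_stats(c) for c in coeffs])
print(features.shape)  # 5 statistics x 5 sub-bands = (25,)
```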

“Mood Detection: Implementing a facial expression recognition system” [13] is the paper where Neeraj Agrawal, Rob Cosgriff and Ritvik Mudur tried to detect human mood using facial expressions. In this paper, the researchers first take a static image of the subject and use it for image pre-processing. In the pre-processing stage, the locations of the eyes are first selected manually. Images are then scaled and cropped to a fixed size (170 × 130), keeping the eyes in all images aligned. Each image is histogram-equalized using the mean histogram of all training images to make it invariant to lighting, skin color, etc. After that, a fixed oval mask is applied to the image to extract the face region; this serves to eliminate the background, hair, ears, and other extraneous features that provide no information about facial expression. They then extract features from the pre-processed images by applying Gabor filters and an eigenvector-based decomposition of their own design. After extensive experimentation, they verified that the results generated by their system are trustworthy.
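The pre-processing pipeline just described (fixed-size scaling, histogram equalization, an oval face mask, and Gabor filtering) can be sketched with OpenCV as follows. The input file, the skipped manual eye alignment, and the ellipse geometry are placeholders and assumptions, not the authors’ exact settings.

```python
import cv2
import numpy as np

# Placeholder input; in the paper the eyes are first located manually
# and used to align the crop, which this sketch skips.
img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "provide a face image at face.jpg"

# Scale to the fixed 170 x 130 size mentioned in the text
# (cv2.resize takes (width, height)).
img = cv2.resize(img, (130, 170))

# Histogram equalization to reduce lighting and skin-tone variation.
img = cv2.equalizeHist(img)

# Fixed oval mask to keep only the face region and drop background,
# hair, and ears; the ellipse geometry here is an assumption.
mask = np.zeros_like(img)
cv2.ellipse(mask, center=(65, 85), axes=(60, 80), angle=0,
            startAngle=0, endAngle=360, color=255, thickness=-1)
face = cv2.bitwise_and(img, mask)

# One Gabor filter response; the real system would use a bank of
# orientations and scales.
kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0,
                            lambd=10.0, gamma=0.5)
response = cv2.filter2D(face.astype(np.float32), -1, kernel)
print(response.shape)  # (170, 130)
```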

In the paper “Detecting Emotion in Text” [14], researcher Kaitlyn Mulcrone showed a way to detect human mood from text, whether from social media, SMS, or email. With the help of the rapidly emerging field of Natural Language Processing (NLP), and linguistic processing in particular, the researcher exhibited a route to detecting emotions in text. In the proposed system, the researcher detects emotion through annotation, emotional lexicons in text, an emotion-labeled database provided to the system, and an emotion detection case study. After detecting emotional terms in the text, the proposed system classifies the emotion using the Vector Space Model, reduction methods (Latent Semantic Analysis, Non-negative Matrix Factorization), and Valence-Arousal-Dominance.
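As an illustration of the Vector Space Model and reduction step, the sketch below builds TF-IDF vectors over a toy emotion-labeled corpus and reduces them with truncated SVD, the technique behind Latent Semantic Analysis; the corpus is fabricated for demonstration, and NMF would be a drop-in alternative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy emotion-labeled corpus (illustrative stand-in for the paper's
# annotated database).
texts = [
    "I am so happy and excited today",
    "This is wonderful, I love it",
    "I feel terrible and sad",
    "Everything is awful, I want to cry",
]
labels = ["joy", "joy", "sadness", "sadness"]

# Vector Space Model: TF-IDF term vectors.
vsm = TfidfVectorizer()
X = vsm.fit_transform(texts)

# Dimensionality reduction via truncated SVD (Latent Semantic Analysis);
# sklearn's NMF could be swapped in here.
lsa = TruncatedSVD(n_components=2, random_state=0)
X_reduced = lsa.fit_transform(X)

for text, vec, label in zip(texts, X_reduced, labels):
    print(f"{label:8s} {vec.round(2)} <- {text!r}")
```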

In the paper “Physiological Parameter Monitoring from Optical Recordings with a Mobile Phone” [15], Scully, C., Jinseok Lee and others monitored several physiological signals accurately using a mobile phone, focusing on breathing rate, cardiac R-R intervals, and blood oxygen saturation. They recorded spontaneous fingertip color changes using a Motorola Droid. ECG recordings (heart rate and respiration rate) were made with an HP 78354A acquisition system using a standard 5-lead electrode configuration, and a respiratory belt around the subject’s chest monitored breathing rate. After collecting the data, they extracted the relevant parameters. For the experiments assessing heart rate, heart rate variability (HRV), and respiration rate, they used only the green band of the RGB video recordings. R-wave peak detection on the ECG signal and beat detection on the green signal were performed using custom algorithms, and their readings showed that these custom algorithms produced accurate results.
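The beat-detection idea can be sketched as follows: average the green channel per video frame, find peaks in the resulting waveform, and convert the peak spacing into a heart rate. The synthetic pulse signal and frame rate below are stand-ins; the authors used custom algorithms on real recordings.

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30.0                     # assumed camera frame rate
t = np.arange(0, 30, 1 / fps)  # 30 s of video

# Synthetic stand-in for the per-frame mean of the GREEN channel of a
# fingertip video: a ~72 bpm pulse plus noise.
green = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)

# Beat detection: peaks at least 0.4 s apart (i.e. at most 150 bpm).
peaks, _ = find_peaks(green, distance=int(0.4 * fps))

# Beat-to-beat intervals (the optical analogue of ECG R-R intervals).
rr = np.diff(peaks) / fps
print(f"mean heart rate: {60.0 / rr.mean():.1f} bpm")
```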

“Emotion Pattern Recognition Using Physiological Signals” [6] is the paper where Xiaowei Niu, Liwan Chen and other researchers recognized human emotions by playing songs for the subject and then analyzing the subject’s physiological signals. As mentioned earlier, four music songs, carefully handpicked with respect to the four targeted emotion classes (joy, anger, sadness, and pleasure), were used. Music induction methods served to arouse the inner feelings of the subject, and the authors also provided an emotion elicitation protocol, verified to be effective in a preliminary study. While the subject listened to the music, they used four-channel biosensors to record the electromyogram (EMG), electrocardiogram (ECG), skin conductivity (SC), and respiration change (RSP). A total of 193 features were extracted separately from the 4 physiological signals: ECG (84), EMG (21), SC (21), and RSP (67). Overall, they collected 25 recordings (25 days) for each emotion. The length of each recording depends on the length of the song, but it was later cropped to a fixed length of two minutes per session and emotion. ECG was sampled at 256 Hz and the other signals at 32 Hz; the original data length is 30,720 samples. They then computed statistical features using several formulas such as standard deviation and mean absolute value, and, applying a genetic algorithm with a K-nearest-neighbor classifier, they successfully detected human emotion from the selected feature subset. Among all the literature we studied, their results were the most promising: the accuracy level of their system was 97%.

“Toward Detecting Emotions in Spoken Dialogs” [2] is a paper where we once again see a system able to detect human emotion from a subject’s speech. In this paper, researchers Chul Min Lee and Shrikanth S. Narayanan focused on acoustic correlates including pitch-related features, formant frequencies, timing features, voice-quality parameters, and articulation parameters. In their research, they used a training data set obtained from real users engaged in spoken dialog with a machine agent over the telephone, using a commercially deployed call center application. To process these data, objective measures such as ASR accuracy, total number of dialog turns, and rejection rate were used to narrow down the inventory of potentially useful dialogs for their experiments.

The processed data then went through two stages, word selection and a salient word dictionary, for lexical feature extraction. From those lexical features, they classify the emotional pattern and detect the subject’s emotion. The block diagram of classification using lexical information is shown in Figure 4.

Figure 4. Block diagram of classification using Lexical information [2].

Another popular way of detecting human mood is image processing, so we also want to use a bit of image processing in our hypothesis.

The basis of detecting human mood from an image is to detect some vital facial points. Using the Euclidean distance formula, the distances between those facial points are calculated; the relations between those distances then determine the current mood of that specific human being. In Figure 5, the six universal emotional expressions are shown.

Figure 5. The six universal emotional expressions [10].
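The Euclidean distance computation itself is straightforward. In the sketch below, the point names anticipate Figure 8 later in this paper (f1 - f5 on the lips, a1 and b1 on the eyebrows), while the pixel coordinates are made up for illustration.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two facial points (x, y):
    d = sqrt((x1 - x2)^2 + (y1 - y2)^2)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical pixel coordinates for the feature points referenced later
# in Figure 8: f1..f5 on the lips, a1 and b1 on the eyebrows.
points = {
    "f1": (110, 210), "f2": (125, 205), "f3": (140, 208),
    "f4": (155, 205), "f5": (170, 210),
    "a1": (115, 120), "b1": (165, 120),
}

mouth_width = euclidean(points["f1"], points["f5"])
brow_gap = euclidean(points["a1"], points["b1"])
print(f"mouth width: {mouth_width:.1f}px, eyebrow gap: {brow_gap:.1f}px")
# The relation between such distances (e.g. a widening mouth with
# stable eyebrows) is what a classifier would map to a mood.
```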

2.3. Motivation

To work on human mood and to strengthen the current AI world a bit more, we need a system that can detect human mood as precisely as required. Rather than building or proposing a whole new system, we propose, with our hypothesis, a system that can detect human mood with more trustworthy and precise results.

3. Problem Overview

3.1. Science of Human Mood

A mood is an emotional state that may last anywhere from a few minutes to several weeks. Mood affects the way people respond to stimuli. It is an affective state and has been extensively studied in psychology [16]. Mood is related to, but different from, other important affective states such as emotion and personality in several important aspects [8].

3.1.1. Shorter-Lived than Personality

Mood differs from personality primarily in that it is less static and tends to change more. Although a mood can last for an extended period of time, personality tends to be longer-lasting [16].

3.1.2. Lasts Longer than Emotion

Mood is typically less intensely felt by an individual and tends to last longer than emotion, e.g. persisting for hours or days instead of seconds or minutes. It is normally a reaction to a cumulative sequence of events, while emotion is a more spontaneous reaction or feeling caused by a specific event.

The mood is more internal, while emotion is more visible to others. Due to its long-lasting and private nature, mood reflects the underlying feelings of people. Psychology research has proposed and extensively validated several models to describe and measure affect.

3.2. Mood Detection

Human mood is a part of human psychology; it can be depicted as a sentimental state that directly influences our day-to-day life, having an impact on our behavior, driving social communication, and shifting our consumer preferences. It is an affective state and has been extensively studied in psychology. Mood is closely related to, but also quite different from, another important affective state, emotion. The vital differences between mood and emotion are their duration and intensity.

While a mood lasts much longer and has a deeper impact on the brain and behavior, an emotion lasts from a few moments to a few minutes [16]. Mood is normally a reaction arising from a continuous sequence of events, while emotion is a more spontaneous reaction or feeling caused by a specific event. Finally, mood is more internal, while emotion is more visible to others.

So, in the field of computer science, mood detection is a process whereby a system successfully determines the current mood of a human. This single line describes mood detection quite easily, but the upcoming parts of this paper show that detecting human mood is actually rather challenging.

3.3. The Purpose of Mood Detection

If human mood can be detected accurately, numerous applications become feasible. Some applications that we may consider:

1) Creating intelligent companion [17] .

2) Decreasing the distance between humans and their smart devices.

3) Making smart devices intelligent enough to understand the user more precisely.

4) Interactive control of music [18] .

5) Monitoring customers’ reactions in online shops to gather product reviews or assess system efficiency.

So, mood detection is undoubtedly an important area in which to conduct research; both the scientific and the commercial purposes behind it are undeniably compelling.

3.4. Research Question and Methodology

In the way of our research, we have faced some significant questions. Some important questions are:

・ Why mood detection?

・ What are the parameters?

・ Are the results trustworthy?

・ Can these results be more accurate?

Regarding the first question, why mood detection, we have briefly described the answer in Section 3.3, “The Purpose of Mood Detection”. Considering the second question, we have used different parameters to detect human mood in the system; parameters such as call duration, number of SMS per day, number of calls per day, and physiological signals (heart rate) are our main fields of study.

Regarding the results, they are quite trustworthy. Considering MoodScope [1], its initial accuracy level was 66%, and after training the system with a training data set, the accuracy rose to 93%. Accuracy levels in other applications are also quite promising.

The last question is our main concern, because our main goal is to detect human mood more accurately. By using several parameters (analyzing several kinds of data), we are confident of reaching our goal, and we justify this position in the upcoming sections.

3.5. Differentiating Mood with Circumplex Model

This model is widely used to measure mood by separating the result into different dimensions. By doing so, it is possible to distinguish between the pleasure dimension and the activeness dimension.

4. Methodology

4.1. Basic Proposal

To maximize the accuracy of existing systems such as MoodScope [1], we consider some new variables for analysis. It has been found that physiological signals vary as human mood changes, while the existing method uses only other data such as email, SMS, and phone calls. We therefore consider physiological signals and smartphone usage data alongside the existing ones.

4.2. Physiological Signal

A mobile phone can serve as an accurate monitor for several physiological variables, based on its ability to record and analyze the varying color signal of a fingertip placed in contact with its optical sensor. Scully et al. [15] confirmed the accuracy of measurements of breathing rate, cardiac R-R intervals, and blood oxygen saturation by comparing them with standard methods (respiration belts, ECGs, and pulse oximeters, respectively). Measurement of respiratory rate uses a previously reported algorithm developed for pulse oximeters, based on amplitude and frequency modulation sequences within the light signal. This technology can also be used with recently developed algorithms for detecting atrial fibrillation or blood loss [15].

According to an analysis of findings from 44 studies published in the Journal of the American College of Cardiology [19], evidence supports the link between emotions and heart disease. Specifically, anger and hostility are significantly associated with more heart problems in initially healthy people, as well as with worse outcomes for patients already diagnosed with heart disease.

4.3. Proposed Method

Before stepping further into our proposed method, we should shed some light on the word “classification”. In simple words, the act of classifying something is known as classification; more precisely, classification is the process by which we separate objects from one another and assign them to mutually exhaustive and exclusive categories known as classes [20]. For example, a human being can belong to one of two classes, male or female. To detect human mood, we have considered five classes in our method: very displeased, displeased, neutral, pleased, and very pleased. There are several ways to classify an object or a new instance, and the Naïve Bayes classifier is one of them. It gives us a way of combining prior probabilities and conditional probabilities. A conditional probability is the probability of an event given that another event has occurred, and the prior probability of an event is the probability of the event computed before the collection of new data. Given a set of classifications c1, c2, ..., ck which have prior probabilities P(c1), P(c2), ..., P(ck), and n attributes a1, a2, ..., an which for a given instance have values v1, v2, ..., vn, we can calculate the Naïve Bayes classification by

P(ci) × P(a1 = v1 | ci) × P(a2 = v2 | ci) × ... × P(an = vn | ci)

We calculate this product for each value of i from 1 to k and choose the classification that has the largest value [20]. So, using a given data set (a training data set), we can identify the class of a new instance. We have used this method to identify human mood from attributes such as day, number of SMS, number of calls, call duration, location, etc., each of which takes several values. Heart beat rate is one of the important attributes in our method. We know that a low heart beat rate normally indicates tiredness, a medium heart beat rate (60 - 100 bpm) indicates normal human mood and behavior, and a moderately high heart beat rate indicates that the person is smiling or in a joyous mood (under normal conditions) [6]. These indications are considered together with the other attributes, and from the training data set, using the Naïve Bayes algorithm, we attempt to detect human mood. For example, a user has provided his mood state for the last few days manually, along with the other attributes, as shown in Table 1.

Table 1. Sample user data.

We have considered this data table as a training data set to detect the mood of that particular user. The user can now learn his current mood state on any particular day by entering the above attributes, as shown in Table 2.

Table 2. New sample instance for classification.

The system will first calculate the prior probabilities and the conditional probabilities for the new instance from the given data set using a frequency table. Then, from the equation above, it will calculate the score of every class for the new instance. Finally, the class (mood state) that generates the highest score is considered that user’s current mood. At the beginning of our research, we assumed that there would be a common pattern of these attributes among users, but we gradually found that there is no universal pattern for detecting a user’s mood: the relations between these attributes are completely user-dependent and unique.
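The following sketch implements this frequency-table Naïve Bayes calculation over the attribute schema used in this section (day, sms_no, calls_no, calls_du, heart_rate, loc). Since the paper’s tables are not reproduced here, the training rows are fabricated for demonstration.

```python
from collections import Counter, defaultdict

# Illustrative training rows in the schema of Table 1/Table 3; the
# actual values from the paper's tables are not reproduced here, so
# these records are made up for demonstration.
train = [
    # (day, sms_no, calls_no, calls_du, heart_rate, loc, state)
    ("weekday", "fewer",  "medium", "normal", "normal",    "work", "neutral"),
    ("weekday", "medium", "higher", "long",   "high",      "work", "pleased"),
    ("holiday", "higher", "higher", "long",   "high",      "tour", "very pleased"),
    ("holiday", "medium", "medium", "normal", "normal",    "home", "pleased"),
    ("weekday", "fewer",  "fewer",  "small",  "low",       "home", "displeased"),
    ("weekday", "fewer",  "fewer",  "small",  "very high", "work", "very displeased"),
]

# Frequency tables: class priors and per-attribute conditional counts.
prior = Counter(row[-1] for row in train)
cond = defaultdict(Counter)  # (attribute index, state) -> value counts
for row in train:
    state = row[-1]
    for i, value in enumerate(row[:-1]):
        cond[(i, state)][value] += 1

def classify(instance):
    """Score P(state) * prod_i P(attr_i = v_i | state) for each state
    and return the state with the largest product."""
    n = len(train)
    best_state, best_score = None, -1.0
    for state, count in prior.items():
        score = count / n
        for i, value in enumerate(instance):
            score *= cond[(i, state)][value] / count
        if score > best_score:
            best_state, best_score = state, score
    return best_state, best_score

# New instance in the spirit of Table 2/Table 6 (with heart rate).
state, score = classify(("holiday", "medium", "medium", "normal", "normal", "home"))
print(f"predicted mood state: {state} (score {score:.4f})")  # -> pleased
```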

4.3.1. Observation

The Circumplex model is simple and quick to administer, and it describes a wide range of mood states. This makes it suitable for our extensive field study, where the participants are asked to input their moods multiple times a day.

We observed a couple of people for one week with different mood states using the Circumplex model. While observing, we found that the result varies vastly from person to person. For example, after observing a male aged 35, we tried to determine his mood two weeks later; the result was accurate up to 93%. Later, with the same knowledge of that person, we tried to detect the mood of a 24-year-old female, and this time the accuracy level decreased. We also saw that the result varies between males and females. We therefore concluded that the result varies with personality, relationship status, and gender, and finally decided that our proposed system will only work for an individual person after it has been trained for that person.

Here is the data set of the 35-year-old male with (Table 3) and without (Table 4) heart rate.

In the data set (Table 3, Table 4), the day was classified in two ways: weekday and holiday. The number of SMS (sms_no) was divided into three parts: fewer (0 - 5), medium (6 - 7), and higher (7 - 10 and above). The number of calls (calls_no) was also divided into three parts: fewer (0 - 4), medium (5 - 8), and higher (8 - 10 and above). Average call duration (calls_du) was divided into three parts: small (10 sec - 2 min), normal (2 - 5 min), and long (5 - 15 min and above). Heart beat rate (heart_rate) was divided into four categories: low (less than 60 beats per minute), normal (60 - 100 bpm), high (100 - 120 bpm), and very high (120 - 150 bpm). Location (loc) was divided into three parts: work, home, and tour. The class mood state (state), which is our main concern, was divided into five parts: very displeased, displeased, neutral, pleased, and very pleased.
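A small sketch of this binning is given below; the bin edges follow the text, with the overlapping boundaries (e.g. 7 appearing in both the medium and higher SMS bins) resolved by assumption.

```python
def bin_sms(n: int) -> str:
    """SMS count bins from the text (0-5 / 6-7 / above), with the
    overlapping boundary at 7 resolved here by assumption."""
    if n <= 5:
        return "fewer"
    if n <= 7:
        return "medium"
    return "higher"

def bin_heart_rate(bpm: int) -> str:
    """Heart beat rate bins from the text."""
    if bpm < 60:
        return "low"
    if bpm <= 100:
        return "normal"
    if bpm <= 120:
        return "high"
    return "very high"

def bin_call_duration(minutes: float) -> str:
    """Average call duration bins from the text."""
    if minutes < 2:
        return "small"
    if minutes <= 5:
        return "normal"
    return "long"

print(bin_sms(9), bin_heart_rate(110), bin_call_duration(3.0))
# -> higher high normal
```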

Both of these data tables were used to determine the mood state of that user (the 35-year-old male) using the Naïve Bayes algorithm (the basic Naïve Bayes calculation with conditional and prior probabilities) for a new instance without (Table 5) and with heart rate (Table 6).

Both data tables gave us the same mood state, “pleased”. The first data table gave a probability of 1.75%, whereas with the second data table (applying heart rate) we obtained a probability of 0.875%. However, when we obtained confirmation from our users, we saw that the results generated with heart rate are far more accurate than those generated without it. So, the positive impact of considering heart rate as a new variable is easy to see. After testing our system on thirty new instances (with heart rate) for both males and females, we obtained 93% accuracy, as twenty-eight results were correct.

Table 3. Data set of male with heart rate.

Table 4. Data set of male without heart rate.

Table 5. New instance without heart rate.

Table 6. New instance with heart rate.

4.3.2. Graphical Representation

The following graphical representation (Figure 6) shows the accuracy of our results. Here we compared the accuracy of our system’s attributes separately and together.

In Figure 6, the vertical axis represents the level of accuracy and the horizontal axis shows our attributes. From the graph we can see that if we consider the number of SMS alone, we get only two of thirty instances right, so the accuracy is 6.66%. With the combination of number of SMS and number of calls, the result is 16.66%. If we run the test considering number of SMS, number of calls, and average call duration, the results are 36.66% accurate. Considering all the previous attributes together with location, our results are 63.33% accurate (nineteen out of thirty instances were successfully detected). This level of accuracy is certainly not our goal, so we considered heart rate as a new attribute; the astounding outcome is shown in Figure 7.

Figure 6. Accuracy level of the system without heart rate.

We can easily compare this graph with the previous one: after considering heart rate as an attribute and running the test, we saw, astonishingly, that twenty-eight out of thirty instances were detected successfully. This graph is only a snapshot of our system’s accuracy level; if we can train the system with two to three months of data for a specific user, we are confident that the system will be more robust and the level of accuracy will certainly be higher.

Figure 7. Accuracy level of the system with heart rate.

5. Conclusion and Future Work

5.1. Limitations

While conducting this research on mood detection, we confronted some limitations. In the very beginning, we faced the problem of inadequate research resources; for example, MoodScope [1], MobiMood [18], physiological parameter monitoring [15], and some other papers were not available to us. Later, with the help of our supervisor, we overcame this obstacle.

Another key limitation of our research is that, as none of our group members has a background in Android development, we were not able to implement our hypothesis in practice. As MoodScope’s API is available, it is very much possible to verify our work by developing a simple application, but due to lack of time and inadequate knowledge of Android development we were not able to validate our hypothesis, although we did apply a data mining algorithm (Naïve Bayes) to test our data set.

Another reason we were not able to take this work to the application level is financial support. We had planned to integrate emotion pattern recognition through image processing, which would require a high-resolution camera and a good deal of other equipment, an expenditure we could not bear. We therefore end our thesis at this proposal stage, having demonstrated its accuracy on sample data by comparison with some existing methods.

5.2. Future Work

In the future, we intend to extend our research by implementing our hypothesis in practice. The number of research efforts devoted to mood detection is increasing day by day. Moreover, MoodScope, the work that helped us the most, has its own API; using that API, we would like to build our proposed system and merge in image-processed data. We also hope to add some new parameters, such as mood in a specific place (environment), with companions or family members, or at the workplace, the user’s browsing behavior, the user’s style of using devices, etc.

5.2.1. Social Robots

Human-robot interaction is an emerging area of study that is developing an understanding of how to build robots that are useful and effective in helping people perform tasks in particular domains. To hold a conversation with a human being, a robot needs to understand the speaker’s speech and behavior and act to reply accordingly; it is already possible to gather such data with various sensors and cameras and to process it using machine learning algorithms and image processing [21].

5.2.2. A Car That Understands the Drivers’ Mood

A car that could understand the driver’s feelings might prevent an accident, using emotional data to flag warning signs [22]. Sensors could nest in the steering wheel and door handles to pick up electrical signals from the skin, while a camera mounted on the windshield could analyze facial expressions.

Alternatively, if the driver exhibits stress, the vehicle’s coordinated sensors could soften the light and music, or broaden the headlight beams to compensate for loss of vision. A distressed state could be broadcast as a warning to other motorists by changing the color of the vehicle’s conductive paint.

5.2.3. Integrating with Image Processing

Regarding the background, we would also like to mention MoodScope again. It works with data that is directly or indirectly related to the user’s behavior on his cell phone; this smartphone application gathers and analyzes data from smartphone sensors.

Figure 8. Extracted feature points [10].

From Figure 8, considering the points f1, f2, f3, f4, and f5 on the lips and the points a1 and b1 on the eyebrows can tell us the current state of a human mood almost exactly. The distances among these five lip points and the distance between the two eyebrow points are related to one another, because, some exceptions aside, the relation between the eyebrows and the lips when a human expresses his mood or emotion is the same all over the world. At first we tried to keep image-processing data out of our proposed system, because image processing requires a high-resolution camera, which keeps a system from being cost-effective. But as our main goal is to detect human mood more accurately, we have to consider image processing as one of our main parameters. If any system is trained with this analysis, together with our proposed system, it can gradually learn to detect human mood more precisely.

5.3. Conclusion

We have proposed a system intended to establish a new era of human mood detection. It will affect not only mood detection technology but the entire artificial intelligence world. Our approach combines physiological signals with pre-processed smartphone data to generate more accurate results. Consequently, researchers will gain more confidence in their applications, and at the user level those applications will be considered more trustworthy. To gather the current knowledge, we reviewed the papers described earlier in the background study, which gave us an idea of how to approach the problem and move forward. In the future, our aim is to take this work to the application level.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Hossain, I., Islam, T. and Ruhin, M.R. (2018) Detecting Human Mood from Physiological Signal and Data Usage. Journal of Computer and Communications, 6, 15-33. https://doi.org/10.4236/jcc.2018.612002

References

1. LiKamWa, R., Lane, N.D., Liu, Y.X. and Zhong, L. (2013) MoodScope: Building a Mood Sensor from Smartphone Usage Patterns. 11th Annual International Conference on Mobile Systems, Applications, and Services, Taipei, 25-28 June 2013, 389-402.

2. Lee, C.M. and Narayanan, S.S. (2005) Toward Detecting Emotions in Spoken Dialogs. IEEE Transactions on Speech and Audio Processing, 13, 293-303. https://doi.org/10.1109/TSA.2004.838534

3. Narayanan, S., Pieraccini, R. and Lee, C. (2001) Recognition of Negative Emotions from the Speech Signal. IEEE Workshop on Automatic Speech Recognition and Understanding, Madonna di Campiglio, 9-13 December 2001, 240-243.

4. Musolesi, M., Mascolo, C., Rentfrow, P.J., Longworth, C., Aucinas, A. and Rachuri, K.K. (2010) EmotionSense: A Mobile Phones Based Adaptive Platform for Experimental Social Psychology Research. Proceedings of the 12th ACM International Conference on Ubiquitous Computing, Copenhagen, 26-29 September 2010, 281-290.

5. Clark, L.A., Tellegen, A. and Watson, D. (1988) Development and Validation of Brief Measures of Positive and Negative Affect: The PANAS Scales. Journal of Personality and Social Psychology, 54, 1063-1070. https://doi.org/10.1037/0022-3514.54.6.1063

6. Chen, L.W., Xie, H., Chen, Q., Li, H.B. and Niu, X.W. (2014) Emotion Pattern Recognition Using Physiological Signals. Sensors & Transducers, 172, 147-156.

7. Liu, Y., Lane, N.D., Zhong, L. and LiKamWa, R. (2011) Can Your Smartphone Infer Your Mood? PhoneSense Workshop, Seattle, WA, November 2011, 1-5.

8. Russell, J.A. (1989) Affect Grid: A Single-Item Scale of Pleasure and Arousal. Journal of Personality and Social Psychology, 57, 493-502. https://doi.org/10.1037/0022-3514.57.3.493

9. Tomkins, S.S. (1962-1963) Affect Imagery Consciousness. Vol. 1-2, Springer, New York.

10. Al-Hamadi, A., Panning, A. and Niese, B.M.R. (2010) Emotion Recognition Based on 2D-3D Facial Feature Extraction from Color Image Sequences. Journal of Multimedia, 5, 488-500.

11. Alghowinem, S., et al. (2013) From Joyous to Clinically Depressed: Mood Detection Using Spontaneous Speech. Proceedings of the Twenty-Fifth International Florida Artificial Intelligence Research Society Conference, Marco Island, 23-25 May 2012, 141-146.

12. AlMejrad, A.S. (2010) Human Emotions Detection Using Brain Wave Signals: A Challenging. European Journal of Scientific Research.

13. Cosgriff, R., Agrawal, N. and Mudur, R. (2009) Mood Detection: Implementing a Facial Expression Recognition System.

14. Mulcrone, K. (2012) Detecting Emotion in Text. CS Senior Seminar Paper, University of Minnesota, Morris. http://s3.eddieoz.com/docs/sentiment_analysis/Detecting_Emotion_in_Text.pdf

15. Scully, C.G., et al. (2011) Physiological Parameter Monitoring from Optical Recordings with a Mobile Phone. IEEE Transactions on Biomedical Engineering, 59, 303-306.

16. Shaw, L.L., Oleson, K.C. and Batson, C.D. (1992) Differentiating Affect, Mood, and Emotion: Toward Functionally Based Conceptual Distinctions. In: Clark, M.S., Ed., Review of Personality and Social Psychology, No. 13, Emotion, Sage Publications, Inc., Thousand Oaks, 294-326.

17. Tao, J. and Tan, T. (2005) Affective Computing: A Review. International Conference on Affective Computing and Intelligent Interaction, Beijing, 22-24 October 2005, 981-995. https://doi.org/10.1007/11573548_125

18. Hoggan, E., Oliver, N. and Church, K. (2010) A Study of Mobile Mood Awareness and Communication through MobiMood. Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, Reykjavik, 128-137.

19. Russell, J.A. (1980) A Circumplex Model of Affect. Journal of Personality and Social Psychology, 39, 1161-1178. https://doi.org/10.1037/h0077714

20. Bramer, M. (2013) Introduction to Classification: Naïve Bayes and Nearest Neighbour. In: Principles of Data Mining, Springer, London, 21-37. https://doi.org/10.1007/978-1-4471-4884-5_3

21. Kidd, C.D. (2003) Sociable Robots: The Role of Presence and Task in Human-Robot Interaction.

22. Monks, K. (2014) New Technology Can Detect Your Mood. https://edition.cnn.com/2014/02/04/tech/innovation/this-new-tech-can-detect-your-mood/