Analysis Mode
Once you have clicked Start Analysis, you are in Analysis Mode. A status indicator appears in the upper left corner of your screen showing how long the analysis has been running, along with options to Pause or Stop the analysis session. If you Pause the analysis, a Resume option appears so you can continue where you left off.

During the analysis session, a number appears to the bottom right of each face as it is identified and tracked. This number is that participant's weighted emotion score over the past few seconds. A yellow Average number also appears in the upper right of the screen; this is the weighted running average across all participants in the call.
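Elevate's exact weighting is not documented here, but conceptually the per-face number behaves like a smoothed average of recent emotion readings, and the yellow Average is a mean across the currently tracked faces. The following minimal sketch assumes an exponentially weighted moving average; the class name EmotionSmoother and the smoothing factor alpha are hypothetical, not part of the Elevate app.

```python
class EmotionSmoother:
    """Illustrative sketch only: smooths per-participant emotion scores
    (0-10) and reports an overall average, roughly mirroring how the
    on-screen numbers behave. Not the Elevate app's actual algorithm."""

    def __init__(self, alpha: float = 0.3):
        # Hypothetical smoothing factor: higher values react faster to
        # new readings, lower values smooth over a longer window.
        self.alpha = alpha
        self.scores: dict[str, float] = {}

    def update(self, participant_id: str, raw_score: float) -> float:
        """Blend the newest raw score (0-10) into the participant's
        weighted score over the past few seconds."""
        previous = self.scores.get(participant_id, raw_score)
        smoothed = self.alpha * raw_score + (1 - self.alpha) * previous
        self.scores[participant_id] = smoothed
        return smoothed

    def overall_average(self) -> float:
        """Running average across all tracked participants, analogous
        to the yellow Average shown in the upper right."""
        if not self.scores:
            return 5.0  # treat "no faces tracked" as neutral
        return sum(self.scores.values()) / len(self.scores)
```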

Our emotion score is based on a scale of 0 to 10, with 0 being completely negative (sadness, anger, disgust, contempt, fear), 5 being entirely neutral, and 10 being completely positive (happiness, surprise).
Most participants stay in the range of 3.5 to 7 throughout a meeting. Significant spikes outside this range, or extended periods at the low or high end, can indicate strong emotions and may call for action to improve future meetings.
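To make "extended periods at the low or high end" concrete, the sketch below flags spans where a series of per-second scores stays outside the typical 3.5 to 7 band. This is purely an illustration of how you might post-process exported scores, not a feature of the Elevate app; the function name and the 30-second threshold are assumptions.

```python
def flag_sustained_extremes(scores, low=3.5, high=7.0, min_seconds=30):
    """Return (start_index, end_index, 'low' or 'high') spans where the
    score stayed outside the typical 3.5-7 range for at least
    min_seconds consecutive samples, assuming one score per second."""
    flagged = []
    start, kind = None, None
    for i, score in enumerate(scores):
        current = 'low' if score < low else 'high' if score > high else None
        if current != kind:
            # A run just ended; keep it if it lasted long enough.
            if kind is not None and i - start >= min_seconds:
                flagged.append((start, i - 1, kind))
            start, kind = (i, current) if current is not None else (None, None)
    # Handle a run that continues to the end of the session.
    if kind is not None and len(scores) - start >= min_seconds:
        flagged.append((start, len(scores) - 1, kind))
    return flagged
```

For example, flag_sustained_extremes([2.0] * 60 + [5.0] * 60) would report a single 'low' span covering the first sixty samples.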
Ending Analysis
Once you are ready to end your analysis, click Stop Analysis. The data from the session will be aggregated and transferred to your Dashboard for you to review.
Note: We do not receive any raw facial image data from your use of the Elevate app. All facial images used for recognizing emotions are stored on your own computer, along with the details of each session in which the Elevate app was used. We receive session details only; facial images never leave your computer.
To see a list of sessions and view their details on the Dashboard, open the History tab from the application's homepage.
Notes on Performance Expectations
Limitations
Full screen mode: Supports up to 8 participants shown on screen at the same time when the Minimum System Requirements are met.
Partial screen mode: A participant's image must be at least 150×150 pixels to be recognized accurately.
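As a rough illustration of the 150×150 pixel requirement, the check below tests whether a detected face region is large enough to be scored reliably in partial screen mode; the function and its width/height parameters are hypothetical and not part of the Elevate app.

```python
MIN_FACE_SIZE = 150  # pixels, per the partial screen mode limitation above

def is_face_scoreable(width_px: int, height_px: int) -> bool:
    """Illustrative check: a face image smaller than 150x150 pixels
    may not be recognized accurately in partial screen mode."""
    return width_px >= MIN_FACE_SIZE and height_px >= MIN_FACE_SIZE
```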
Emotion scoring accuracy may decline when:
- Participants face away from the camera
- Lighting is excessively dark or excessively bright
- Video is affected by buffering, latency, or freezing due to Internet bandwidth issues
- Participants substitute their face with an avatar
- Participants partially obscure their face
Note: In the current version, additional faces in background images or still-image avatars are detected as participants. Future versions will be able to automatically disregard these static images.