March 22, 2016

"A group of Australian researchers is investigating the possible applications of a technology that can gauge a user’s general mood based on the sound of their voice, both in its own right and in relation to other people."

by Martin Anderson

The researchers emphasise that mood does not equate to emotion, since emotion is a transient and usually short-lived state that is unlikely to be a major indicator over time in fields such as health monitoring or performance assessment.

The context-aware aspect of the theory at hand involves the program, powered by Deep Neural Networks, taking into consideration not just an aggregate mean of the user’s tone of voice over varying periods, measured against a reference index that may not be meaningful in any particular case, but also the user’s tone as compared with that of the people with whom they interact:

‘When the user takes part in a phone conversation the system will use the emotional construct of the speech of the person at the other end, the listener, as the contextual information. If the listener is talking about an exciting event the user is expected to be excited or cheerful if he/she is in a positive mood, otherwise, it would be assumed that the user is in a negative mood.’
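The rule described in the quoted passage lends itself to a simple illustration. The sketch below is not the paper’s DNN pipeline; the function name, the affect scale and the threshold are assumptions, and it only shows the comparison the authors describe, in which the listener’s emotional state sets the expectation against which the user’s own vocal affect is judged.

# A toy illustration (not the paper's DNN pipeline) of the contextual rule
# described above: the listener's speech emotion serves as the expected
# reference, and the user's mood is inferred from how their own vocal
# affect compares to it. All names and thresholds here are hypothetical.

def infer_mood(user_affect: float, listener_affect: float,
               tolerance: float = 0.2) -> str:
    """Both affect scores are assumed to lie in [-1, 1], where positive
    values mean excited/cheerful speech and negative values mean flat or
    downbeat speech (an assumed convention, not the paper's)."""
    if listener_affect > tolerance:
        # The listener sounds excited; a positive-mood user is expected
        # to mirror that excitement, otherwise a negative mood is assumed.
        return "positive" if user_affect > tolerance else "negative"
    # Without an excited listener there is no contextual expectation,
    # so fall back to the user's own affect.
    return "positive" if user_affect >= 0 else "negative"


if __name__ == "__main__":
    print(infer_mood(user_affect=0.6, listener_affect=0.7))   # positive
    print(infer_mood(user_affect=-0.1, listener_affect=0.7))  # negative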

Potential users might be alarmed to consider that the over-caffeinated exuberance of their more excitable friends could end up being treated as a reference point against which their own vocal tone falls short. The paper does not address how to overcome anomalies of this nature, but presumably variables for age and anomalous correspondents will be considered.

Technologies that can gauge long-term states of mind with any measure of accuracy have obvious applications in the field of mental health. Prior work [PDF] in this field, led by Cambridge researcher Petko Georgiev, concentrates on individuating audio from ambient noise, such as car noise, and the Australian paper develops this theme further.
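For a rough sense of what individuating speech from ambient noise involves at its simplest, the sketch below applies a plain energy threshold to flag frames that rise above a presumed noise floor. It is not the method of the cited prior work or of the Australian paper; every name and parameter here is hypothetical.

# A minimal, hypothetical sketch of separating speech from steady ambient
# noise (e.g. car noise) by energy thresholding. It is not the method used
# in the cited prior work; it only illustrates the kind of front-end
# filtering such systems rely on.
import numpy as np

def speech_frames(signal: np.ndarray, sample_rate: int,
                  frame_ms: int = 30, threshold_db: float = 10.0) -> np.ndarray:
    """Return a boolean mask marking frames whose energy rises a chosen
    number of decibels above the quietest (assumed noise-only) frames."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    noise_floor = np.percentile(energy_db, 10)  # assume the quietest 10% is noise
    return energy_db > noise_floor + threshold_db

if __name__ == "__main__":
    sr = 16000
    audio = 0.01 * np.random.randn(sr * 2)  # two seconds of background noise
    # Add a half-second tone standing in for speech.
    audio[sr:sr + sr // 2] += 0.2 * np.sin(2 * np.pi * 220 * np.arange(sr // 2) / sr)
    mask = speech_frames(audio, sr)
    print(f"{mask.sum()} of {mask.size} frames flagged as speech")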

"Context-aware Mood Mining" by Rajib Rana, et al, here
