Depression Detector

I identified two simple tasks that could be used to detect depression: emotion-driven eye movement behavior and emotional-context speech analysis. Depressive thoughts have been shown to influence gaze behavior toward emotional stimuli, reducing how often positive/happy stimuli are fixated and increasing attention toward negative/sad imagery. Likewise, when asked to recall information or tell a story, individuals with high depression ratings use negative language more often.
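As a rough illustration of the kinds of measurements these tasks yield, here is a minimal sketch of two candidate features: the share of dwell time spent on positive versus negative images, and the fraction of negative-affect words in a recalled-story transcript. The function names, data formats, and lexicon are hypothetical, not the features actually used in the study.

```python
def gaze_valence_bias(fixations, stimulus_valence):
    """Fraction of total dwell time spent on positive vs. negative images.

    fixations: iterable of (stimulus_id, duration_ms) pairs.
    stimulus_valence: dict mapping stimulus_id -> 'positive' | 'negative' | 'neutral'.
    """
    totals = {"positive": 0.0, "negative": 0.0, "neutral": 0.0}
    for stim, duration in fixations:
        totals[stimulus_valence[stim]] += duration
    grand = sum(totals.values()) or 1.0  # guard against an empty fixation log
    return totals["positive"] / grand, totals["negative"] / grand


def negative_word_ratio(transcript, negative_lexicon):
    """Fraction of words in a transcript that appear in a negative-affect lexicon."""
    words = transcript.lower().split()
    return sum(w in negative_lexicon for w in words) / max(len(words), 1)
```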

One of the undergraduates on the Behavioral Analytics team at MPCR (Evita Conway) volunteered to compete in the FAU Wave competition, which gives students four months to design an experiment with real-world applications and present their findings as a poster and pitch deck. With my mentorship, Evita crafted the stimuli, designed the experiment, and collected participant data from the undergraduate subject pool. Depression ratings were based on the self-report Patient Health Questionnaire depression scale (PHQ-8), with participants separated into high and low depression groups. A logistic regression model trained on the eye movement data correctly predicted participants' depression ratings with 80% accuracy, matching state-of-the-art performance. Based on these findings, and with the help of Evita's presentation skills, we received first place in the Wave competition.

After the competition, I used the logistic regression model as one input to the two-tiered neural network model developed for the task discrimination project, adding the speech analysis task as a second input and a capsule-CNN saliency map network as a third. While the speech task failed to predict above chance, combining the saliency map network with the logistic regression model improved overall depression detection to 85.2% accuracy. Adding backpropagation, a method for weight optimization, to the second tier further improved accuracy to 90.1%.
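To make the two-tier idea concrete, here is a minimal sketch using synthetic stand-in data: a logistic regression serves as the eye-movement model in the first tier, placeholder scores stand in for the speech and capsule-CNN saliency models, and a small second-tier network over the stacked first-tier outputs has its weights optimized by backpropagation. All data, feature counts, layer sizes, and training settings here are illustrative assumptions, not the actual models.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: gaze features per subject and a binary high/low label.
n = 71
X_eye = rng.normal(size=(n, 6))        # hypothetical eye-movement features
y = rng.integers(0, 2, size=n)         # 0 = low, 1 = high depression group

# Tier 1a: logistic regression over the eye-movement features.
eye_model = LogisticRegression().fit(X_eye, y)
eye_score = eye_model.predict_proba(X_eye)[:, 1]

# Tiers 1b/1c: the speech-analysis and capsule-CNN saliency models would
# produce scores the same way; random placeholders are used here.
speech_score = rng.random(n)
saliency_score = rng.random(n)

# Tier 2: a small network over the stacked first-tier scores, with its
# weights trained by backpropagation.
X2 = torch.tensor(np.column_stack([eye_score, speech_score, saliency_score]),
                  dtype=torch.float32)
y2 = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

tier2 = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(tier2.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(tier2(X2), y2)
    loss.backward()                    # backprop through the second tier
    opt.step()

with torch.no_grad():
    pred = (torch.sigmoid(tier2(X2)) > 0.5).squeeze(1).numpy()
print("tier-2 training accuracy:", (pred == y).mean())
```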

I then redesigned the experiment and implemented a picture description task, in which participants verbally described a picture while their eye movements were tracked. Data from this version of the experiment were significantly cleaner than the earlier collections, and adding this third task on top of the previous two (emotional image fixation/recognition and verbal recall) allowed me to build a model that distinguished High (moderately severe depression and above) from Low (no depression) ratings at 93.4% accuracy, 97.7% sensitivity, and 89.2% specificity across 71 participants. Subject augmentation was used to increase the effective number of subjects in the analysis sixfold. Presenting this work at the Florida Blue Healthcare Innovation Competition earned us second-place honors. Additional participants are being recruited, and a manuscript is planned for the coming months.
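For illustration, one common form of subject augmentation is to split each subject's trials into disjoint subsets and treat each subset as a pseudo-subject; a sixfold split multiplies the effective sample size by six. The sketch below assumes that scheme (the study's exact method is not specified here) and adds the standard definitions behind the reported sensitivity and specificity figures. Function names and data formats are hypothetical.

```python
import numpy as np

def augment_subjects(trials_by_subject, k=6, seed=0):
    """Split each subject's trials into k disjoint subsets, treating each
    subset as a pseudo-subject (features averaged within each subset).

    trials_by_subject: dict subject_id -> (n_trials, n_features) array.
    Returns a dict pseudo_subject_id -> feature vector.
    """
    rng = np.random.default_rng(seed)
    pseudo = {}
    for sid, trials in trials_by_subject.items():
        order = rng.permutation(len(trials))
        for j, chunk in enumerate(np.array_split(order, k)):
            if len(chunk):                       # skip empty splits
                pseudo[f"{sid}_{j}"] = trials[chunk].mean(axis=0)
    return pseudo

def sens_spec(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)
```

One design caution with this kind of augmentation: all pseudo-subjects derived from the same real subject should stay in the same cross-validation fold, otherwise the evaluation leaks subject identity and inflates the reported scores.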

Description

  • Machine Perception and Cognitive Robotics Lab

  • GitHub link