Since the completion of the THALIE project, the digital world has changed considerably: AI and its promises have arrived. Nevertheless, these proposals are still confined to laboratories, and no large-scale digital solution for monitoring cognitive impairment is yet available.
Given demographic trends in Europe, the United States and China, the need for systematic monitoring of cognitive impairment in seniors is ever more pressing.
In recent years, with the support of AI-based technologies, major advances have been made on several points that matter for the targeted objective.
- It has been demonstrated that older adults are able to use voice solutions and conversational agents. This was by no means a given a few years ago, particularly for the very old (see Reference 2).
- Several studies have shown that cognitive impairment affects a person's speech in ways that can be detected by speech-recognition solutions, with results confirmed by conventional tests such as the MMSE or MoCA (see Reference 3).
- Demonstrators based on conversational agents have shown cognitive-impairment monitoring results comparable to those obtained with conventional tests such as the MMSE or MoCA, but without using those tests explicitly (see Reference 4).
- Only one research team, in China, has taken up the THALIE line of work, with a similar approach and similar results (see Reference 5).
The THALIE project went one step further: we showed that validated tests such as the MMSE or MoCA can be administered semi-autonomously by a conversational agent, with results comparable to those obtained when the tests are administered by healthcare staff (see Reference 1).
This approach offers the following advantages (a minimal illustrative sketch of a semi-autonomously administered test item follows the list):
- it builds on a base of tests that are validated and recognized by the profession;
- it allows alternating between semi-autonomous tests administered by a conversational agent and tests administered by healthcare staff, without unsettling the person being monitored;
- it provides results that can be used directly by the healthcare staff in charge of adjusting the care plan;
- it constitutes a reference solution for building new tests as generational changes occur.
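As a purely illustrative sketch (not the THALIE implementation; the item, accepted keywords and scoring rule are hypothetical), here is one way a validated test item could be represented and scored when administered semi-autonomously by a dialogue agent:

```python
# Hypothetical sketch of a test item administered by a dialogue agent.
# The prompt, accepted keywords and scoring rule are illustrative only,
# not the THALIE implementation.
from dataclasses import dataclass
from datetime import date


@dataclass
class SpokenTestItem:
    prompt: str                  # question the agent reads aloud
    accepted_keywords: set[str]  # keywords counted as a correct answer
    points: int = 1

    def score(self, transcript: str) -> int:
        """Score the item from the speech-recognition transcript."""
        words = {w.strip(".,;!?").lower() for w in transcript.split()}
        return self.points if words & self.accepted_keywords else 0


# Example: a temporal-orientation item checked against today's date.
item = SpokenTestItem(
    prompt="What year are we in?",
    accepted_keywords={str(date.today().year)},
)
print(item.score("I think we are in 2025"))  # 1 only if the stated year matches
```

In a real deployment, scoring would follow the validated test's official instructions and remain under the supervision of healthcare staff; the sketch only shows that administration and scoring can be driven by the speech-recognition transcript.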
Reference 1:
THALIE project publications and results
- Alzheimer actualités : Assistant vocal intelligent (Fondation Médéric)
Our innovation for the medical field is based on an intelligent voice assistant that makes the tests for brain disorders, which currently exist in paper form, more reliable and reproducible. It frees the physician and the neuropsychologist from mechanical tasks, allowing them to devote themselves entirely to observing their patient. The same software controls two…
- Projet THALIE : utilisabilité d'un logiciel d'évaluation des troubles cognitifs par des personnes âgées souffrant de démence de type Alzheimer
10e colloque de psychologie ergonomique, Jul 2019, Lyon, France. https://hal.science/hal-02502740
- Feasibility of the cognitive assessment of nursing home residents with mild-to-moderate cognitive impairment using the intelligent voice-guided digital assistant THALIE
THALIE performances were weakly correlated with MMSE score. THALIE gave a better overview per resident of the extent of impairment of their various cognitive functions than the MMSE used alone, and made it possible to identify less impaired functions. The feasibility of THALIE computerized cognitive assessment of residents in nursing homes is confirmed. It allows…
- La Mêlée numérique récompense 12 projets innovants
SPIX healthcare (SimSoft3D), for its Thalie solution. Founded in February 2013, the company, also based at the Prologue business incubator in Labège (Haute-Garonne), develops intelligent virtual assistants for industry. With Thalie, the startup is changing its target market and seeking to carry over its know-how-transfer expertise to the medical field, with an application specifically…
- Projet THALIE MSD-AVENIR
Thanks to the innovations of the THALIE project and the associated clinical study, we are maturing toward the delivery of an "assistant software" that relieves the clinician of the mechanical tasks of cognitive testing, allows greater concentration on the clinical observation of the patient, and thus preserves the human dimension of care…
Reference 2:
Older adults are able to use voice solutions and conversational agents
Understanding the Use of Voice Assistants by Older Adults
Our preliminary and exploratory analysis indicates that there is potential for these speakers to play a positive, social role in the lives of older adults. The research also supports earlier findings that smart speakers, and voice technology more broadly, are enormously beneficial for older adults who have a physical disability.
https://arxiv.org/pdf/2111.01210.pdf
Potential and Pitfalls of Digital Voice Assistants in Older Adults
Although it boasts good usability in general, problems interacting with VAs are frequently observed, because the user is required to follow a pre-structured form of dialogue, thus limiting the conversational abilities of VAs. Older adults often have problems recalling the specific commands necessary to operate the devices. Another limiting factor can be the lack of added value, which may result in a preference for devices already in use. VAs are perceived as time-consuming, and a lack of compatibility is criticized. Furthermore, a barrier to using VAs can be seen in the reported fear of losing one's own competences and autonomy, because the VA may take care of a number of tasks without considering the competencies of the user.
In a nutshell, we posit that future research designs should strongly rely on analyzing VA interactions in everyday ecologies and strictly apply participatory design elements where possible. Data protocols should include a balanced mixture of automatized data including emotional aspects as well as structured and open assessments.
https://www.frontiersin.org/articles/10.3389/fpsyg.2021.684012/full
Reference 3:
Cognitive impairment affects a person's speech
Artificial Intelligence May Detect Early Alzheimer’s in Speech Patterns
Subtle Speech Features May Signal Alzheimer Risk
Using sophisticated computer analysis of these recordings, scientists could determine and evaluate specific types of speech features, including:
- how fast a person talks
- pitch
- voicing of vowel and consonant sounds
- grammatical complexity
- speech motor control
- idea density
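As a rough sketch of how two of the features listed above (speaking rate and pitch) could be approximated from a recording, assuming a mono WAV file and the open-source librosa library (the silence threshold and the externally supplied word count are placeholders, not values from the study):

```python
# Sketch of estimating two of the speech features listed above
# (speaking rate and pitch) from a recorded description. The silence
# threshold and the externally supplied word count are illustrative.
import librosa
import numpy as np


def basic_speech_features(wav_path: str, n_words: int) -> dict:
    y, sr = librosa.load(wav_path, sr=16000)

    # Speaking rate: words per second of total recording time
    # (n_words would come from the speech-recognition transcript).
    duration_s = librosa.get_duration(y=y, sr=sr)
    speaking_rate = n_words / duration_s

    # Proportion of the recording spent in pauses (non-speech).
    speech_intervals = librosa.effects.split(y, top_db=30)
    speech_s = sum(end - start for start, end in speech_intervals) / sr
    pause_ratio = 1.0 - speech_s / duration_s

    # Fundamental frequency (pitch) statistics via the pYIN tracker.
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
    return {
        "speaking_rate_wps": speaking_rate,
        "pause_ratio": pause_ratio,
        "pitch_mean_hz": float(np.nanmean(f0)),
        "pitch_std_hz": float(np.nanstd(f0)),
    }
```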
AI Can Spot Early Signs of Alzheimer’s in Speech Patterns
Study participants, who were enrolled in a research program at Emory University in Atlanta, were given several standard cognitive assessments before being asked to record a spontaneous 1- to 2-minute description of artwork.
“The recorded descriptions of the picture provided us with an approximation of conversational abilities that we could study via artificial intelligence to determine speech motor control, idea density, grammatical complexity, and other speech features,” Dr. Hajjar said.
https://neurosciencenews.com/ai-alzheimers-speech-23017
Early Detection of Cognitive Decline Using Voice Assistant Commands
Early detection of Alzheimer’s Disease and Related Dementias (ADRD) is critical in treating the progression of the disease. Previous studies have shown that ADRD can be detected and classified using machine learning models trained on samples of spontaneous speech. We propose using Voice-Assistant Systems (VAS), e.g., Amazon Alexa, to monitor and collect data from at-risk adults, and we show that this data can be used to achieve functional accuracy in classifying their cognitive status. In this paper, we develop multiple unique feature sets from VAS data that can be used in the training of machine learning models. We then perform multi-class classification, binary classification, and regression using these features on our dataset of older adults with three varying stages of cognitive decline interacting with VAS. Our results show that the VAS data can be used to classify Dementia (DM), Mild Cognitive Impairment (MCI), and Healthy Control (HC) participants with an accuracy up to 74.7%, and classify between HC and MCI with accuracy up to 62.8%.
https://ieeexplore.ieee.org/abstract/document/10095825
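The multi-class evaluation described in this abstract can be pictured with a short scikit-learn sketch; the data below are synthetic placeholders, not the study's voice-assistant features:

```python
# Sketch (with placeholder data) of the kind of multi-class evaluation the
# study describes: classifying Dementia / MCI / Healthy Control participants
# from feature vectors derived from voice-assistant interactions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 20))      # 90 participants x 20 VAS-derived features (synthetic)
y = rng.integers(0, 3, size=90)    # 0 = HC, 1 = MCI, 2 = DM (synthetic labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```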
Verbal Fluency as a Screening Tool for Mild Cognitive Impairment
We used binary logistic regressions, multinomial regressions, and discriminant analysis to evaluate the predictive value of semantic and phonemic fluency with regard to specific diagnostic classifications. Setting: Outpatient Geriatric Neuropsychology Clinic. Participants: 232 participants (normal aging = 99, a-MCI = 90, AD = 43; mean age = 65.75 years). Measurements: Mini-Mental Status Exam, Controlled Oral Word Association Test. Results: Semantic and phonemic fluency were significant predictors of diagnostic classification, and semantic fluency explained a greater amount of the discriminant ability of the model.
Conclusions: These results suggest that verbal fluency, particularly semantic fluency, may be an accurate and efficient tool in screening for early dementia in time-limited medical settings.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9153280/
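To make the screening idea concrete, here is a tiny illustrative scorer for a semantic-fluency trial ("name as many animals as you can in 60 seconds"); the category word list is a placeholder, and real scoring rules are more nuanced (repetitions, proper nouns, intrusions):

```python
# Illustrative scorer for a semantic-fluency trial: the score is the number of
# unique valid category members in the transcribed response. The category word
# list is a tiny placeholder, not a clinically validated lexicon.
ANIMALS = {"dog", "cat", "horse", "cow", "lion", "tiger", "bird", "fish"}


def semantic_fluency_score(transcript: str, category: set[str] = ANIMALS) -> int:
    tokens = [w.strip(".,;!?").lower() for w in transcript.split()]
    # Count each valid, non-repeated category member once.
    return len({t for t in tokens if t in category})


print(semantic_fluency_score("dog cat cat horse table lion"))  # -> 4
```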
Reference 4:
Demonstrators based on conversational agents have shown comparable cognitive-impairment monitoring results
Alzheimer’s Dementia Recognition through Spontaneous Speech
While a number of studies have investigated speech and language features for the detection of AD and mild cognitive impairment (Fraser et al., 2016), and proposed various signal processing and machine learning methods for this task (Petti et al., 2020), the field still lacks balanced benchmark data against which different approaches can be systematically compared. This Research Topic addresses this issue by exploring the use of speech characteristics for AD recognition using balanced data and shared tasks, such as those provided by the ADReSS Challenges (Luz et al., 2020, Luz et al., 2021). These tasks have brought together groups working on this active area of research, providing the community with benchmarks for comparison of speech and language approaches to cognitive assessment. Reflecting the multidisciplinary character of the topic, the articles in this collection span three journals: Frontiers of Aging Neuroscience, Frontiers of Computer Science and Frontiers in Psychology.
ADReSS challenge
The main objective of the ADReSS challenge is to make available a benchmark dataset of spontaneous speech, which is acoustically pre-processed and balanced in terms of age and gender, defining a shared task through which different approaches to AD recognition in spontaneous speech can be compared. We expect that this challenge will bring together groups working on this active area of research, and provide the community with the very first comprehensive comparison of different approaches to AD recognition using this benchmark dataset.
https://luzs.gitlab.io/adress/
Scalable diagnostic screening of mild cognitive impairment using AI dialogue agent
In our experiments, we introduce a method for comparing the efficiency of the AI dialogue strategy against that of the human interviewers. Specifically, we defined conversational efficiency, which quantifies the efficiency of different intervention strategies (AI-simulated vs. observed) based on the AUC gains resulting from the different data-provision strategies. Additionally, we introduce an off-policy evaluation strategy which provides a lower-bound confidence on the expected performance of the AI policy compared to that of the human interviewers. Despite the imperfectness of the dialogue simulators, we illustrate a way to empirically assess the margin of error between the AI and human policies on a per-turn and per-conversation basis.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7109153/
Health Professionals’ Experience Using an Azure Voice-Bot to Examine Cognitive Impairment (WAY2AGE)
Virtual Assistants (VAs) are a new, groundbreaking tool for the screening of cognitive impairment by healthcare professionals. By providing the volume of data needed in healthcare guidance, better treatment monitoring and optimization of costs are expected. One of the first steps in the development of these tools is the experience of the healthcare professionals in their use. The general goal of the current project, WAY2AGE, is to examine healthcare professionals' experience in using an Azure voice-bot for screening cognitive impairment. Back-end services, such as the chatbot, the Speech Service and the databases, are provided by the Azure cloud platform (PaaS) for a pilot study. Most of the underlying scripts are implemented in Python, .NET, JavaScript and open software. A sample of 30 healthcare workers volunteered to participate by answering a list of questions in a survey set-up, following the example provided in the previous literature. Based on the current results, WAY2AGE was evaluated very positively in several categories. The main challenge for WAY2AGE is the articulation problems of some older people, which can lead to errors in the transcription of audio to text and will be addressed in the second phase. Following an analysis of the perception of a group of thirty health professionals on its usability, potential limitations and opportunities for future research are discussed.
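As an indication of what the speech-to-text leg of such an Azure voice-bot could look like (this is not the WAY2AGE code; the key, region, language and file name are placeholders), here is a minimal sketch using the azure-cognitiveservices-speech Python SDK:

```python
# Minimal speech-to-text sketch with the Azure Speech SDK
# (pip install azure-cognitiveservices-speech). Key, region, language and file
# name are placeholders; this only illustrates the Speech Service leg of a
# voice-bot, not the WAY2AGE implementation.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="westeurope")
speech_config.speech_recognition_language = "es-ES"  # assumed language for the pilot

audio_config = speechsdk.audio.AudioConfig(filename="patient_answer.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("transcript:", result.text)   # transcript then passed on to the chatbot
else:
    print("recognition failed:", result.reason)
```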
Validation of a rapid remote digital test for impaired cognition using clinical dementia rating and mini-mental state examination: An observational research study
Digital screening tests such as M-CogScore are desirable to aid in rapid and remote clinical cognitive evaluations. M-CogScore was significantly correlated with established cognitive tests, including CDR and MMSE-2. M-CogScore can be taken remotely without supervision, is automatically scored, has less of a ceiling effect than the MMSE-2, and takes significantly less time to complete.
https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2022.1029810/full
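The kind of validation reported here, correlating a digital score with an established test for the same participants, can be sketched in a few lines (the numbers below are synthetic, not M-CogScore data):

```python
# Sketch of the validation step described above: correlating a digital test
# score with MMSE scores for the same participants (synthetic numbers).
import numpy as np
from scipy.stats import spearmanr

mmse = np.array([29, 27, 24, 22, 30, 18, 26, 21, 25, 28])
digital_score = np.array([55, 50, 41, 38, 58, 25, 47, 35, 44, 53])  # hypothetical scale

rho, p_value = spearmanr(digital_score, mmse)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```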
Cognitive impairment screening using m‑health: an android implementation of the mini‑mental state examination (MMSE) using speech recognition
In this study, an m-health app that covers the complete workflow of a standard MMSE is presented. The application is designed to be used by a geriatrician or other health care professional and to assist in the examination of a patient. As the patients under consideration in this study are residents of a nursing home, with specific levels of cognitive and motor deficits, a broad scale of self-participation in the assessment is accounted for, with the examiner supervising the assessment. In this way, the current version of the app is not intended for unsupervised self-assessment. The app decreases the workload of the examiner, since it automates the scoring of the test and stores all the data (participant information and test results) in a database, making it easy to share the test results paperlessly with colleagues and to consult them at a later time. In a small-scale pilot study, the MMSE app was used as the screening tool in a nursing home and the results were compared with the conventional MMSE screening on paper.
https://core.ac.uk/download/pdf/196519694.pdf
Reference 5:
A research team in China has taken up the THALIE line of work
A voice recognition-based digital cognitive screener for dementia detection in the community: Development and validation study (2022)
Dementia has long been a global public health problem, one that not only seriously damages the quality of life of patients but also causes a huge social burden (1, 2).
Screening can alert people to early signs of cognitive decline, leading to better allocation of healthcare resources and reduced healthcare cost. Early detection also provides an optimal window of early intervention and treatment which has been proven to slow down cognitive decline and reduce the risk of dementia conversion (3, 4). Therefore, timely identification of potential at-risk older adults is the essential first step to delaying dementia onset, so as to render support to individual older adults, caregivers, healthcare providers, as well as the whole society (5). As a result, researchers have emphasized the necessity of early screening, particularly the implementation of simple and efficient assessment tools in various healthcare settings.
Although traditional paper-pencil tests are commonly applied, most of them must be performed by trained assessors and take a long time for face-to-face administration (6, 7). In contrast, digital cognitive testing provides new opportunities for regular self-accessible assessment and remote monitoring of cognitive changes. Digital cognitive screening overcomes the various obstacles of traditional testing. First of all, it can be carried out with minimal or no assistance from non-professional staff or family members, which makes it more flexible for anytime-and-anywhere assessment, and hence feasible for regular monitoring of cognitive changes (8). This is particularly helpful when the COVID pandemic has adversely impacted healthcare routines in the communities (9). In addition, the analytic platform equipped with the assessment tool allows not only efficient management of assessment data, but also automatic execution of the entire assessment process. As such, digital cognitive screeners are believed to have a promising prospect in facilitating large-scale cognitive screening in the community.
So far, digital cognitive assessments have mostly been developed by adapting various cognitive tests (8). However, most of the presently available cognitive tests are touch-screen-based, which requires operation by individual participants. This has resulted in lower test acceptance and performance due to computer illiteracy among older adults who are less familiar with operating digital devices (10, 11). Furthermore, even though mobile platforms can collect new data streams and achieve high measurement accuracy, they are usually expensive for tracking longitudinal behavioral/cognitive changes, such as sensor data through wearables (12, 13). All of the above challenges have been raised as important obstacles to overcome.
Therefore, in the present study, we developed a brief digital cognitive tool (digital cognitive screener, DCS) based on a voice-recognition machine-learning system that runs on mobile devices. The DCS simulates a standard face-to-face cognitive test conducted by professional testers. The DCS was adapted from the Montreal Cognitive Assessment (MoCA), which is a well-validated cognitive test and can be conducted verbally over the telephone (14, 15). In this study, we aimed to investigate the reliability and validity as well as the feasibility of the DCS among community-dwelling older adults in China. We hypothesized that the DCS would show good validity, as demonstrated by diagnostic accuracy in terms of sensitivity, specificity, receiver operating characteristic (ROC) curves, area under the curve (AUC) values and optimal cut-offs.
https://www.frontiersin.org/articles/10.3389/fpsyt.2022.899729/full
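The validation metrics this study targets (sensitivity, specificity, ROC, AUC, optimal cut-offs) can be illustrated with a short sketch on synthetic screener scores, using Youden's index to pick the cut-off:

```python
# Sketch of the validation metrics the DCS study targets: ROC curve, AUC and an
# optimal cut-off chosen by Youden's index (synthetic screener scores).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 1])   # 1 = dementia by the reference standard
scores = np.array([3, 5, 4, 6, 9, 8, 7, 5, 10, 9])  # screener score (higher = more impaired)

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
best = np.argmax(tpr - fpr)                          # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.2f}; optimal cut-off = {thresholds[best]}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```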
Evaluating Voice-Assistant Commands for Dementia Detection
Early detection of the cognitive decline involved in Alzheimer's Disease and Related Dementias (ADRD) in older adults living alone is essential for developing, planning, and initiating interventions and support systems to improve users' everyday function and quality of life. In this paper, we explore the voice commands collected using a Voice-Assistant System (VAS), i.e., Amazon Alexa, from 40 older adults aged 65 or older who were either Healthy Control (HC) or Mild Cognitive Impairment (MCI) participants. We evaluated the data collected from voice commands, cognitive assessments, and interviews and surveys using a structured protocol. We extracted 163 unique command-relevant features from each participant's use of the VAS. We then built machine-learning models, including 1-layer/2-layer neural networks, support vector machines, decision trees, and random forests, for classification and comparison with standard cognitive assessment scores, e.g., the Montreal Cognitive Assessment (MoCA). Our classification models using fusion features achieved an accuracy of 68%, and our regression model resulted in a Root-Mean-Square Error (RMSE) of 3.53. Our Decision Tree (DT) and Random Forest (RF) models using selected features achieved a higher classification accuracy of 80–90%. Finally, we analyzed the contribution of each feature set to the model output, revealing the commands and features most useful in inferring the participants' cognitive status.
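The feature-contribution analysis mentioned at the end of this abstract can be sketched as follows; the feature matrix, feature names and labels are synthetic stand-ins for the 163 command-relevant features in the study:

```python
# Sketch of a feature-contribution analysis: fitting a random forest on
# (placeholder) command-derived features and ranking their importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
feature_names = [f"cmd_feature_{i}" for i in range(10)]  # stand-ins for the real features
X = rng.normal(size=(40, 10))                            # 40 participants, as in the study
y = rng.integers(0, 2, size=40)                          # 0 = HC, 1 = MCI (synthetic labels)

rf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X, y)
ranking = sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1])
for name, importance in ranking[:5]:
    print(f"{name}: {importance:.3f}")
```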
