In the Designing Innovative Social Robots through end-User ParTicipation (DISRUPT) project, we set out to co-create a blueprint for the development of social robots for older adults.
Why is this research important? Social robots have huge potential to support older adults. They are interactive, can be used to support activities of daily living, and may reduce stress and loneliness. However, despite their benefits, few older adults use them.
This project explores barriers and facilitators for social robot use and collaborates with experts to co-design a new method for developing social robots.
DISRUPT showcases the benefit of working with older adults as partners in research.
What we’ve done: Established an older adult advisory group, ‘The League’. League members provide valuable insights and expertise that ensure our research remains meaningful and reflective of older adults’ needs, values, and preferences.
We conducted surveys, co-creation workshops, and focus groups to explore key issues around the use of social robotic technologies from the perspectives of older adults, people living with dementia, and their care partners.
What did we find? Our research reveals that older adults:
Desire to use social robots for interaction, connecting with others, and companionship.
Want social robots to be expressive and produce emotionally appropriate responses.
Feel that they may be stigmatised for using a social robot in public.
Evaluate assistive technologies on their ability to promote independence, their affordability and ease of use, and ethical considerations.
We are grateful to all who contributed to this work. A special thank you goes to the members of The League advisory group.
Thursday, April 17, 2025, 12:00 PM – 1:30 PM PDT, via Zoom
Everyone is welcome! Please register here for the Zoom details: bit.ly/ccpmri
Join us for an interactive conversation with experts about how highly portable neuroimaging technology can advance research on, and understanding of, the human brain.
The advent of highly portable MRI (pMRI) offers new possibilities for field-based neuroscience research, which has largely been limited to urban medical centers. In this community conversation, experts and community members will discuss the promise of improving representation in, and leadership of, neuroscience research by Indigenous People, as well as by communities for which such capabilities have historically been absent or difficult to access. This event will also address ethical, societal, and legal challenges of pMRI research, including pathways for bringing holistic worldviews to the research and health care conversation.
Panelists: Donnella S. Comeau, MD, PhD Attending Neuroradiologist, Beth Israel Deaconess Medical Center Vice Chair, Mass General Brigham Institutional Review Board Instructor, Harvard Medical School
Jonathan Jackson, PhD Founder and Research Principal, CRESCENT Advising, LLC Assistant Professor, Harvard Medical School
Shannon Kolind, PhD MRI Physicist Associate Professor, University of British Columbia
Francis X. Shen, JD, PhD Professor University of Minnesota
Angela Teeple, MLS Doctoral Student University of Minnesota
Moderator: Judy Illes, CM, PhD, FCAHS, FRSC Professor and Director, Neuroethics Canada University of British Columbia
2025 Distinguished Neuroethics Speaker: Laura Y. Cabrera, PhD, Dorothy Foehr Huck and J. Lloyd Huck Chair in Neuroethics, Associate Professor of Engineering Science and Mechanics, Philosophy, and Bioethics, Pennsylvania State University
Panelists:
Benoit-Antoine Bacon, PhD, President and Vice-Chancellor, University of British Columbia
Christopher R. Honey, MD, DPhil, Alcan Chair in Neuroscience, Professor and Head of the Division of Neurosurgery, University of British Columbia
Julie M. Robillard, PhD, Associate Professor of Neurology, University of British Columbia
Lakshmi N. Yatham, MBBS, FRCPC, EMBA, Professor and Head of the Department of Psychiatry, University of British Columbia
Moderator: Judy Illes, CM, PhD, UBC Distinguished Scholar in Neuroethics, Professor of Neurology, University of British Columbia
Overview: New developments in neurotechnology are blurring the lines between what we thought was possible and what we thought belonged to the realm of science fiction. In this lecture, Dr. Cabrera will explore groundbreaking advancements in neurotechnology that are transforming science fiction into reality, as well as their profound implications for healthcare and beyond. In particular, Dr. Cabrera will focus on key ethical considerations to ensure these technologies are developed and used for the benefit of society.
Everyone is welcome! This public, in-person event is free, but registration is required. RSVP here: https://bit.ly/2025baw
Join us for an interactive virtual event about research being conducted by the UBC-led “Mend the Gap” team for spinal cord injury (SCI). Drs. John Madden, director of the research program, Wolf Tetzlaff, associate director and lead of the biology and medicine team, and Tanya Barretto, from the ethics and translation team will discuss new approaches being developed and tested for SCI. They will answer your questions, and listen to your insights on strategic directions, values, and priorities.
PANELISTS: John Madden, PhD, PEng Director and Professor Department of Electrical & Computer Engineering University of British Columbia
Tanya A. Barretto, PhD Postdoctoral Research Fellow Neuroethics Canada, Department of Medicine University of British Columbia
Wolfram Tetzlaff, PhD, MD Professor Department of Zoology and Surgery University of British Columbia International Collaboration on Repair Discoveries
MODERATED BY: Anita Kaiser Director of Research Canadian Spinal Research Organization
Samantha P. Go*, Dr. Tanya A. Barretto*, Salwa B.A. Malhas, and Dr. Judy Illes
*Joint lead authors
Worldwide, more than 15 million people live with spinal cord injury (SCI), and 250,000–500,000 new injuries are reported each year (1, 2). While a return to full mobility remains elusive for many people, biomedical research based on animal models illuminates potential treatment options (3, 4). Although these models cannot replicate human SCI perfectly, they continue to deepen understanding of the complex underlying mechanisms (e.g., inflammation, neurodegeneration, glial scar formation) that is essential for the development of eventual treatment goals and options.
News media coverage is an important source of public information about advances in SCI research. To understand media portrayals of the SCI research landscape, we identified English-language news articles on animal and cell-culture models (cells grown in a petri dish) published between January 2012 and December 2022 in Canada, the United States of America, Australia, the United Kingdom, and Ireland. This 10-year period coincides with high activity in stem cell research. We chose these countries because of their high rates of research publication. The search was conducted using the terms spinal cord, spinal injury, and research, and variants thereof, on three news content aggregators (Factiva, LexisNexis, and Canadian Newsstream). A total of 94 news articles were analyzed by multiple reviewers.
News media reported on novel interventions such as devices and technologies, pharmacological treatments, biologic and synthetic materials, and physical and behavioural interventions (Figure 1).
Figure 1. News coverage of different interventions over time across 5 countries.
Articles reported on research intervening at different time points after injury, spanning a few hours to a few days, and on different severities of injury (e.g., complete or partial disruption of the spinal cord). The frequency of media publications decreased after 2016.
Unlike the generally balanced coverage of human trials in the same time period as this study (5), headlines covering animal research were often hyperbolic and overly optimistic (Table 1).
                        # of articles    %
Headline
  Match                      27         29
  Hype†                      54         57
  Mismatch                   13         14
Tone
  Optimistic†                59         63
  Balanced                   33         35
  Neutral                     1         0*
  Pessimistic                 1         0*
Descriptive terms
  Lay                        14         15
  Medical                     6          6
  None†                      70         74
  Combination                 4          4
Table 1. Communication elements in 94 news articles reporting on animal- and cell-based spinal cord injury research (* indicates <1%; † indicates most frequently coded).
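As an illustrative check, the percentage column in Table 1 follows directly from the article counts divided by the 94 analyzed articles. This is a minimal sketch; the variable names are ours, not from the study:

```python
# Illustrative sanity check: percentages in Table 1 are article counts
# over the 94 analyzed articles, rounded to the nearest whole percent.
total = 94
headline_counts = {"Match": 27, "Hype": 54, "Mismatch": 13}

percents = {k: round(100 * v / total) for k, v in headline_counts.items()}
assert percents == {"Match": 29, "Hype": 57, "Mismatch": 14}
```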
Of the 94 articles analyzed, 65% (N=61) assessed risks and benefits of the research or addressed the difficulty in translating results from animal models to humans:
“…urged caution that rats’ nervous systems are not the same as humans, and that most spinal injuries involve extensive bruising rather than a neat cut…”
“Further research is needed to understand why the drug worked on some animals and not on others.”
Overall, advancements in animal research play a vital role in developing ground-breaking treatments that could significantly improve outcomes and quality of life for SCI patients. While always attentive to the three Rs for animal research (Replacement, Reduction and Refinement), researchers continue to push the boundaries of innovation and collaboration. To promote and ensure public trust, and especially trust from the SCI community, reporting on progress that is transparent, balanced, and evidence-based is the responsibility of scientists, journalists, funders, and supporting institutions alike.
Acknowledgements
The research was supported by the Government of Canada’s New Frontiers in Research Fund (NFRF), NFRFT-2020-00238. Dr. Judy Illes, Principal Investigator, is Distinguished University Scholar and UBC Distinguished Scholar in Neuroethics supported by the North Family Foundation.
References
National Spinal Cord Injury Statistical Center (2022) Traumatic spinal cord injury facts and figures at a glance. [Accessed February 26, 2024].
World Health Organization (2013) International perspectives on spinal cord injury. [Accessed February 26, 2024].
Lilley, E., Andrews, M. R., Bradbury, E. J., Elliott, H., Hawkins, P., Ichiyama, R. M., Keeley, J., Michael-Titus, A. T., Moon, L. D. F., Pluchino, S., Riddell, J., Ryder, K., & Yip, P. K. (2020). Refining rodent models of spinal cord injury. Experimental neurology, 328, 113273.
Barretto T.A., Tetzlaff W., & Illes, J. (2024) Ethics and accountability for clinical trials. Spinal Cord, 62, 192–194.
Go, S., Barretto, T. A., Malhas, S. B. A., & Illes, J. Shared responsibilities for news media coverage of spinal cord injury research. (Under review, Journal of Health Communications, July 2024).
Salwa Malhas is a Research Assistant at Neuroethics Canada. She supports the Ethics and Knowledge Translation initiative of the Mend the Gap.
Tanya Barretto is a Postdoctoral Research Fellow at Neuroethics Canada. She oversees the Ethics and Knowledge Translation initiative of Mend the Gap.
A central focus of the field of neuroethics lies at the intersection of existing and emerging technologies and the promotion of brain health. Nearly all adolescents (97%) use at least one form of social media (1), and 14% of adolescents experience mental health conditions (2). Adolescence also carries a heightened risk of developing mental illness (3). With increasing social media use and the high incidence of mental health conditions among adolescents, parents are left wondering: how should they make evidence-based decisions about their children’s social media use?
Can social media be used to promote good mental health in youth?
Social media can be an important facet of adolescent life (4). It offers opportunities for social interaction and for building communities with people adolescents may not see face to face. Durable support networks are meaningful resources that can help adolescents avoid the development of mental illnesses (5). And, importantly, social media provides an accessible way for youth to find and receive mental health support through online communities — many of which include people with similar lived experiences (6). Mental health practitioners report that reduced isolation, social skill development, and accessible communication benefit the mental wellbeing of younger users. These online communities allow social media users to educate themselves about their mental health struggles, and provide a platform to discuss their experiences with others who can contribute resources and support that may otherwise be difficult to obtain outside of social media (6). Social media can also be a platform to advocate or to promote positive mental health strategies to peers, creating a venue for accessible mental health help.
Adolescents are also subject to external risk factors for developing mental health conditions, including academics and friendship dynamics, that require coping strategies to maintain good mental health (5). Social media is a potentially effective tool to alleviate stress, such as by providing distraction or relaxation, and could help adolescents cope with stressors. Younger people report that their main goals in using apps for mental health purposes are to calm down, to maintain well-being over the long term, and to access other resources for support (8).
Is social media contributing to mental illnesses in young audiences?
Despite these potential benefits, social media has the potential to cause harm. Many parents opt to mediate or disallow social media use, and adolescents are particularly susceptible to social media’s influence due to the nature of this developmental period (3). Reports of self-esteem struggles, poor sleep habits, and the setting of unrealistic standards on social media have made some parents uncomfortable with their children’s use of it (3,5). Peers can use social media to drive social exclusion and influence risky behaviour — for example, by using externalized issues like bullying to encourage internalized issues such as struggles with self-image (5). Youth may also feel pressured to perfect the way they present themselves to their peers, which results in meticulously edited content. Over time, adolescents could develop a habit of comparing themselves to their online image as well as what they see online from their peers, and as social media use becomes more popular, this tendency for comparison is exacerbated.
Are social media platforms ethically responsible for safe social media use?
Social media platforms have an important role in ensuring that their platforms are beneficial, and not harmful, to adolescent users. As such, there is an ethical need to measure and assess the functionalities these platforms have in place to address this goal. For example, social media platforms often censor the content of posts with the intention of preventing harmful content from being shared or viewed; however, it is also important to note that the practice of censorship has not been strongly supported by evidence of positive outcomes. Content moderation has been shown to potentially counter positive mental health engagement by limiting access to helpful resources, regardless of intention (9). Because these policies can shape adolescent mental health and the overall social experience, social media platforms need to make conscious efforts to base safety policies on evidence of their effectiveness. While it is ethically necessary for platforms to consider strategies to maintain adolescent health, this is not always followed in practice, so it is important for parents and younger people to be informed about the content moderation policies of the platforms they use. Familiarity with how social media platforms moderate content can help parents and adolescents decide on a safe platform that works for them.
Should children be on social media?
Using social media without internet safety or mental health education can be detrimental; however, it is important to acknowledge that adolescents also report that social media is meaningful to them for connecting with their peers and engaging with online discourse. Additionally, research suggests that the method of parenting and mediating social media influences how younger people use these platforms (7). For example, parental mediation that supports adolescents’ autonomy may lead to less time on social media, less risky social media use, and a reduction of anxiety or depression symptoms. Supporting autonomy can look like developing age-appropriate rules around social media as well as taking into consideration the views of adolescents (7). As such, the way parents respond to social media use can be a formative factor for a child’s interactions and attitudes towards social media.
The experience that users have with social media varies with the functionalities and features that the platforms have. When choosing mental health supports on their phones, adolescents prioritize features such as accessibility, quality of intervention, security, customizability, and usability. The credibility and safety of an app is also valued among younger users (8). Understanding what adolescents value in social media, as well as what platforms they are likely to choose, can allow parents to better understand how to assess platforms for safety and mental health support capabilities.
Whether or not social media should be used and/or moderated is a personal parenting decision that should be well-researched and discussed amongst both parents and their children. Informed use for both parties contributes to the overall safety when engaging with social media for social or mental health support. While using social media can pose risks for people of any age, it is important to consider the benefits that may be realized for youth when they are given the opportunity to learn how to navigate social media in a safe and healthy way.
Nesi J. The Impact of Social Media on Youth Mental Health: Challenges and Opportunities. North Carolina Medical Journal. 2020 Mar 01;81(2):116–21.
Barry CT, Sidoti CL, Briggs SM, Reiter SR, Lindsey RA. Adolescent social media use and mental health from adolescent and parent perspectives. Journal of Adolescence. 2017 Sep;61(1):1-11.
O’Reilly M. Social media and adolescent mental health: the good, the bad and the ugly. Journal of Mental Health. 2020 Jan 28;29(2):200-06.
O’Reilly M, Dogra N, Hughes J, Reilly P, George R, Whiteman N. Potential of social media in promoting mental health in adolescents. Health Promotion International. 2019 Oct;34(5):981–91.
Beyens I, Keijsers L, Coyne SM. Social media, parenting, and well-being. Current Opinion in Psychology. 2022 Oct;47:101350.
Kabacińska K, McLeod K, MacKenzie A, Vu K, Cianfrone M, Tugwell A, Robillard JM. What criteria are young people using to select mobile mental health applications? A nominal group study. Digital Health. 2022 May;8.
Zhang CC, Zaleski G, Kailley JN, Teng KA, English M, Riminchan A, Robillard JM. Debate: Social media content moderation may do more harm than good for youth mental health. Child and Adolescent Mental Health. 2024 Feb;29(1):104-106.
Katelyn Teng is an undergraduate research assistant in the NEST Lab under the supervision of Dr. Julie Robillard. She is pursuing a BSc in Neuroscience at the University of British Columbia, and is passionate about mental health advocation and technology’s role in patient experience. Outside of work and school, she can be found baking sweet treats, collecting her favourite vinyl records, and with friends and family.
Everyone is welcome! This public in-person event is free, but RSVP is required: https://bit.ly/2024baw
Overview
The placebo effect is powerful in many neurological and psychiatric disorders, and clinical trials often use placebos when developing and testing new treatments. Some people question the ethics of including a placebo group in research, while others argue that not doing so is itself ethically fraught. In some cases, estimating the placebo effect and uncovering its underlying mechanisms may depend upon the use of deception, but this may conflict with basic principles of autonomy.
Deception requires careful thought as to whether it is necessary, and if so, how it will be managed in an ethically acceptable manner. While there have been advances showing that genetic factors may contribute to the placebo effect, the idea that placebo responders should be excluded from clinical trials may be scientifically unsound and may further violate the principle of social justice. The use of placebos in clinical care is more controversial: while there may be benefits, there may also be risks and additional ethical challenges. Health care providers need to be sensitive to the impact of deception not only on their own relationship with patients, but also on trust of the profession as a whole.
A. Jon Stoessl, CM, MD Dr. A. Jon Stoessl is Professor and immediate past Head (2009-2023) of Neurology at the University of British Columbia (UBC). He was previously Director of the Pacific Parkinson’s Research Centre and Parkinson’s Foundation Centre of Excellence (2001-2014). Dr. Stoessl was Co-Director (2014-2019), then Director (2019) of UBC’s Djavad Mowafaghian Centre for Brain Health. He previously held a Tier 1 Canada Research Chair in Parkinson’s Disease. Dr. Stoessl is Editor-in-Chief of Movement Disorders, has served on numerous other editorial boards including Lancet Neurology and Annals of Neurology, previously chaired the Scientific Advisory Boards of Parkinson’s Canada and the Parkinson’s Foundation, and is Past-President of the World Parkinson Coalition. He was Chair of the Local Organizing Committee and Co-Chair of the Congress Scientific Program Committee for the 2017 MDS Vancouver Congress.
Dr. Stoessl uses positron emission tomography to study chemical changes in the brain with the objective of gaining a better understanding of the causes and complications of Parkinson’s disease (PD) and its treatment, as well as how PD can be used as a model to better understand dopamine functions in the brain. He has published more than 300 papers and book chapters, and has been cited more than 31,000 times in the scientific literature with an h-index of 79 (Google Scholar).
Dr. Stoessl is a Member of the Order of Canada and was recognized with the Queen Elizabeth Jubilee Medal. He is a Fellow of the Canadian Academy of Health Sciences.
Brain Awareness Week Brain Awareness Week is the global campaign to foster public enthusiasm and support for brain science. Every March, partners host imaginative activities in their communities that share the wonders of the brain and the impact brain science has on our everyday lives.
It is not unusual for surgeons to begin operating on a brain tumor patient without exactly knowing the type of tumor they have, or the type of surgery required to safely remove it. Glioma is a common form of brain tumor and requires different surgical approaches depending on its subtype (1). Since it’s difficult to determine subtypes before surgery, tissue samples are often sent to a pathologist during the operation (1). This takes about 10 to 15 minutes as the patient’s brain lies exposed on the surgical table (2). Due to poor quality samples and stressful conditions, misdiagnoses can occur (2).
In 2023, researchers developed a new artificial intelligence (AI) tool nicknamed “CHARM” to identify glioma subtypes from tissue samples (1). If approved for clinical use, CHARM will facilitate fast and accurate diagnoses during surgeries – a significant advancement in neurosurgery.
This is just one example of AI’s tremendous potential in medicine. However, new discoveries come with new challenges – is the field of neuroethics ready to overcome them?
AI as support tools, not replacements
The notion that AI might completely replace humans in healthcare is intriguing. However, given the current state of AI technology, it is far more productive to discuss AI’s role as a support tool rather than a replacement.
This is the view advocated by digital health researcher Dr. Emre Sezgin (3). Rather than treating AI as a replacement for doctors, Sezgin emphasizes a human-in-the-loop approach in which AI tools support healthcare providers towards better decision-making. The AI tool can offer recommendations, and the healthcare provider can evaluate its outputs and make the final judgement. This approach can supposedly help in “reducing potential errors or biases” (3).
The human-in-the-loop approach is supported by evidence. In one study, no single automated system could outperform human radiologists in breast cancer screening, but the accuracy of screening improved when the radiologist and the AI worked together (4).
Despite its positive potential, Dr. Sezgin notes that there are important organizational challenges to adopting more AI tools in healthcare (3). The algorithmic systems would require rigorous evaluation to ensure they meet safety standards (5–7). Hospitals and clinics would need to review their policies to make sure that new AI practices are aligned with local laws and regulations while also preparing to deal with issues of information security, liability, and service reimbursement, to name a few (8,9). The list goes on.
Even if AI technologies are safe, effective, and available within healthcare, their success can still be undermined by our unconscious biases towards AI.
Automation bias is the human tendency to accept an algorithm’s recommendations without sufficiently questioning or verifying it (10). This can lead us to miss important errors. In healthcare, using an erroneous automated decision support tool can increase the likelihood of following incorrect advice by 26% compared to working without the tool (11). In a human-in-the-loop approach, healthcare providers are at risk of automation bias when making the final judgement based on AI recommendations. This creates a dilemma.
When the algorithmic recommendation is correct, it can be invaluable for improving efficiency and accuracy; yet when it makes an error, it may cause mistakes that would not have otherwise occurred.
Of course, AI tools should be held to high safety standards. Their rate of error should be very low – but they might never be perfect. Paradoxically, the more reliable the AI algorithm, the higher the likelihood that its human users will overlook an error (10). If an AI tool is highly accurate, there is less incentive to spend time and effort verifying its outputs. This is sometimes referred to as automation complacency rather than bias, but both involve similar attention processes (10).
Organizational challenges to medical AI are important to address. However, it is equally important to address the risk of psychological challenges like automation bias to ensure the best use of medical AI.
How do we minimize automation bias towards medical AI?
It is important to train and educate healthcare providers about newly adopted AI tools (12). Within this training, it is also crucial to acknowledge the risk of automation bias and teach ways to mitigate it – for example, by setting high standards for verifying AI recommendations.
Another solution is to design computational tools with features that minimize automation bias. AI tools that provide confidence estimates alongside their recommendations can help healthcare providers gauge its reliability and prompt them to verify low confidence recommendations (13). However, the paradox is apparent: even high confidence recommendations may have errors, and a high confidence estimate may discourage verification and lead to worsening automation bias.
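The confidence-gated workflow described above can be sketched as follows. This is a minimal illustration under stated assumptions: the threshold, field names, and labels are hypothetical, not from any specific clinical system:

```python
# Hypothetical sketch: route AI recommendations by confidence so that
# low-confidence outputs are always flagged for human verification.
# The threshold and all names here are illustrative assumptions.

from dataclasses import dataclass

VERIFY_THRESHOLD = 0.90  # below this, require explicit human review


@dataclass
class Recommendation:
    label: str         # e.g., a proposed diagnosis
    confidence: float  # model's self-reported confidence, 0.0-1.0


def triage(rec: Recommendation) -> str:
    """Return a workflow action for a clinician-in-the-loop system."""
    if rec.confidence < VERIFY_THRESHOLD:
        return "flag_for_review"  # prompt the clinician to verify
    # Note the paradox discussed above: even high-confidence outputs
    # can be wrong, so acceptance here should still be auditable.
    return "present_with_confidence"


assert triage(Recommendation("glioma_subtype_A", 0.62)) == "flag_for_review"
assert triage(Recommendation("glioma_subtype_A", 0.97)) == "present_with_confidence"
```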
Recent movements toward Explainable AI may also help ease the “black box” factor around AI, helping healthcare providers verify the AI algorithm’s decision-making process for greater accuracy.
Human-in-the-loop approaches to AI in healthcare are promising. Sometimes the machine in the loop needs debugging; so, too, does the human. We need to examine our psychological biases to ensure a smooth-running system.
This blog post is based on the original essay ‘All in our heads: Cognitive biases as psychological barriers to the successful adoption and use of medical artificial intelligence’ by Cindy Zhang, nominated as Honourable Mention in the 2023 Neuroethics Essay Contest by the International Neuroethics Society (INS) and International Youth Neuroscience Association (IYNA).
References
1. Nasrallah MP, Zhao J, Tsai CC, Meredith D, Marostica E, Ligon KL, et al. Machine learning for cryosection pathology predicts the 2021 WHO classification of glioma. Med. 2023 Jun 29;S2666-6340(23)00189-7.
3. Sezgin E. Artificial intelligence in healthcare: Complementing, not replacing, doctors and healthcare providers. DIGITAL HEALTH. 2023 Jan 1;9:20552076231186520.
4. Schaffter T, Buist DSM, Lee CI, Nikulin Y, Ribli D, Guan Y, et al. Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms. JAMA Network Open. 2020 Mar 2;3(3):e200265.
8. Abràmoff MD, Roehrenbeck C, Trujillo S, Goldstein J, Graves AS, Repka MX, et al. A reimbursement framework for artificial intelligence in healthcare. npj Digit Med. 2022 Jun 9;5(1):1–6.
9. Wolff J, Pauling J, Keck A, Baumbach J. Success Factors of Artificial Intelligence Implementation in Healthcare. Frontiers in Digital Health [Internet]. 2021 [cited 2024 Jan 7];3. Available from: https://www.frontiersin.org/articles/10.3389/fdgth.2021.594971
10. Parasuraman R, Manzey DH. Complacency and Bias in Human Use of Automation: An Attentional Integration. Hum Factors. 2010 Jun 1;52(3):381–410.
11. Goddard K, Roudsari A, Wyatt JC. Automation bias: a systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association. 2012 Jan 1;19(1):121–7.
12. Wartman SA, Combs CD. Reimagining Medical Education in the Age of AI. AMA J Ethics. 2019 Feb 1;21(2):E146-152.
13. McGuirl JM, Sarter NB. Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information. Hum Factors. 2006;48(4):656–65.
The need for online information about Alzheimer’s Disease and dementia
Persons living with dementia and their care partners often wish for access to more and better information about living with the condition [1–3]. While healthcare providers are a valued resource for this information, their time is limited, and access can be challenging. At least 40% of older adults seek health information online [4], though this number may be much higher in some populations [5,6]. Online information about dementia exists on many virtual platforms, including social media, and varies widely in quality [1–3,7–10].
A recent analysis identified a number of barriers to online information access for persons living with dementia: information is targeted towards care partners and medical practitioners, rather than persons with lived experience; information can be pessimistic and hard to decipher; information is inaccurate or overly simple; and information is untrustworthy [11]. There is a clear demand for easily accessible, accurate dementia information that can be customized to a variety of user information needs.
ChatGPT: a new information source
ChatGPT (Chat Generative Pre-trained Transformer) is an online tool launched by OpenAI in November 2022. Users can engage in typed back-and-forth dialogue with the system through a web browser. Unlike a typical search engine query, such as a Google search, the platform retains information across an interaction, which creates a more natural and conversational experience. The “machinery” of ChatGPT is generative artificial intelligence, meaning it uses machine learning models to create novel, data-driven content. These models have been trained on a dataset created from online materials and refined through human feedback [12,13]. The exact content of this dataset has not been publicly released.
How does ChatGPT stack up as a source of online information about dementia?
In a recent study, our research team at the Neuroscience, Engagement, and Smart Tech (NEST) Lab at Neuroethics Canada asked how ChatGPT compared to other sources of online information about dementia. To create a set of questions that real users would likely have about dementia, we collected Frequently Asked Questions from the webpages of three national dementia organizations in Canada, the USA, and Mexico. We posed these questions to ChatGPT-3.5 in April 2023. Responses from ChatGPT were evaluated using a standard tool previously developed by the NEST Lab to assess the quality of online health information [14].
Strengths of ChatGPT-3.5: We found that ChatGPT, like the Alzheimer’s organizations, provided generally accurate information, directed users to bring their questions to a physician, and did not endorse commercial products.
Strengths of Alzheimer’s organization websites: Organizations were more likely than ChatGPT to state the limits of scientific evidence explicitly and produced more readable responses (i.e., responses had a readability score corresponding to a lower grade level). They were also more likely to link to local, specific, and actionable resources for support.
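The readability comparison above is based on grade-level scoring. As an illustrative sketch only (the specific readability measure used in the study is not named here), the widely used Flesch-Kincaid grade level can be computed from sentence, word, and syllable counts; lower scores indicate text readable at a lower grade level:

```python
import re

def count_syllables(word: str) -> int:
    # Naive vowel-group syllable counter; adequate for rough scoring.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # drop a silent final 'e'
    return max(n, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short sentences of short words score low; long, polysyllabic sentences score high, which is the pattern behind the grade-level difference reported above.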
Conclusion
This research represents one snapshot of behaviour from a generative artificial intelligence tool: ChatGPT-3.5. This platform, and others, will continue to change over time and may produce different responses with different prompts or in languages other than English.
This work can support:
Persons living with dementia and their care partners in screening potential sources of dementia information online;
Healthcare providers as they advise persons living with dementia and their care partners; and
Non-profit providers of dementia support services as they create helpful resources for their communities.
There is an ethical imperative to include persons with lived experiences of dementia in the creation of technologies to support them [15]. These perspectives are critically important for tools at the intersection of generative artificial intelligence and digital health. Understanding the online information available to these families is a first step in prioritizing their needs and perspectives in technology research and development.
Jill Dosso, PhD is a Postdoctoral Fellow in the Neuroscience, Engagement, and Smart Tech (NEST) lab at the University of British Columbia and BC Children’s Hospital. In her work, she studies the perspectives of persons with lived experience on emerging technologies to support brain health across the lifespan.
References
[1] Allen F, Cain R, Meyer C (2020) Seeking relational information sources in the digital age: A study into information source preferences amongst family and friends of those with dementia. Dementia 19, 766–785.
[2] Montiel-Aponte MC, Bertolucci PHF (2021) Do you look for information about dementia? Knowledge of cognitive impairment in older people among their relatives. Dement Neuropsychol 15, 248–255.
[3] Washington KT, Meadows SE, Elliott SG, Koopman RJ (2011) Information needs of informal caregivers of older adults with chronic health conditions. Patient Educ Couns 83, 37–44.
[4] Yoon H, Jang Y, Vaughan PW, Garcia M (2020) Older Adults’ Internet Use for Health Information: Digital Divide by Race/Ethnicity and Socioeconomic Status. J Appl Gerontol 39, 105–110.
[5] Levy H, Janke AT, Langa KM (2015) Health Literacy and the Digital Divide Among Older Americans. J Gen Intern Med 30, 284–289.
[6] Tam MT, Dosso JA, Robillard JM (2021) The impact of a global pandemic on people living with dementia and their care partners: analysis of 417 lived experience reports. J Alzheimers Dis 80, 865–875.
[7] Robillard JM (2016) The Online Environment: A Key Variable in the Ethical Response to Complementary and Alternative Medicine for Alzheimer’s Disease. J Alzheimers Dis 51, 11–13.
[8] Robillard JM, Johnson TW, Hennessey C, Beattie BL, Illes J (2013) Aging 2.0: Health Information about Dementia on Twitter. PLOS ONE 8, e69861.
[9] Robillard JM, Illes J, Arcand M, Beattie BL, Hayden S, Lawrence P, McGrenere J, Reiner PB, Wittenberg D, Jacova C (2015) Scientific and ethical features of English-language online tests for Alzheimer’s disease. Alzheimers Dement Diagn Assess Dis Monit 1, 281–288.
[10] Robillard JM, Feng TL (2016) Health Advice in a Digital World: Quality and Content of Online Information about the Prevention of Alzheimer’s Disease. J Alzheimers Dis 55, 219–229.
[11] Dixon E, Anderson J, Blackwelder D, Radnofsky ML, Lazar A (2022) Barriers to Online Dementia Information and Mitigation. In CHI Conference on Human Factors in Computing Systems, ACM, New Orleans, LA, USA, pp. 1–14.
[12] OpenAI, Introducing ChatGPT, Last updated November 30, 2022, Accessed November 30, 2022.
[13] Forbes, The Next Generation Of Large Language Models, Last updated February 7, 2023, Accessed February 7, 2023.
[14] Robillard JM, Jun JH, Lai J-A, Feng TL (2018) The QUEST for quality online health information: validation of a short quantitative tool. BMC Med Inform Decis Mak 18, 87.
[15] Robillard JM, Cleland I, Hoey J, Nugent C (2018) Ethical adoption: A new imperative in the development of technology for dementia. Alzheimers Dement 14, 1104–1113.
Stefanie Blain-Moraes, Assistant Professor, McGill University, Canada; Young Scientist during the Session on “Human-Centred High-Tech: Neurotechnology”. At the World Economic Forum – Annual Meeting of the New Champions in Dalian, People’s Republic of China 2017. Copyright by World Economic Forum / Ciaran McCrickard
Stefanie is the leader of the Biosignal Interaction and Personhood Technology (BIAPT) Lab at McGill University. The BIAPT Lab’s objective is to understand the neurophysiological (nervous system function) and physiological (bodily function) bases of human consciousness, and to translate this understanding into technologies that improve the quality of life of non-communicative persons and their caregivers.
An individual's level of consciousness - their ability to perceive themselves and their environment - is typically assessed by their appropriate response to the environment.
Their work aims to assess consciousness and establish a prognosis (prediction of the course of a disease) for recovery of consciousness in behaviourally unresponsive patients, determine neural correlates of consciousness (relationships between mental and neural states), and understand the implications of caring for behaviourally unresponsive patients.
A behaviourally unresponsive patient has compromised linguistic and behavioural communication, which limits their ability to reveal conscious states to others and increases reliance on inferring residual consciousness through relevant proxies (1).
At this point, readers may wonder how the BIAPT Lab addresses these and other complex objectives. In my interview with Stefanie, we explore her perspectives on questions at the intersection of consciousness, ethics, and technology (note: Stefanie’s response follows immediately after each italicised question).
How do we know if behaviourally unresponsive patients are conscious? How do we know whether they have the potential to eventually recover consciousness?
There are a growing number of individuals who live with conditions that make them behaviourally unresponsive. Caring for these individuals poses challenges for the caregiver, as most of the necessary communication surrounding care is up to their interpretation. The individual’s unresponsive state raises questions about personhood and moral status as well.
How do we view personhood?
The static view of personhood takes a strict position on the relationship between personhood and moral status. Within the static view there are two variants, the individual and the relational. The former is defined by culture; in the latter, personhood and cultural concepts depend on the behaviourally unresponsive individual’s relation to others. On the one hand, someone may continue to refer to an individual as their mother, even when her capacities have gone. On the other, the individual’s lack of capacities may imply they are not a person. An example of the latter would be an individual in a vegetative state who has no reflexive experience.
Is this still the view on personhood?
Over the past few decades, there has been a gradual shift away from the static view of personhood. In 2018, I published a paper that argued that consciousness could be dissociated from personhood, and there is a responsibility to assume personhood in the absence of consciousness. Now, we are more focused on the situated, dynamic view of personhood. It opens a conceptual space between personhood and moral status. We’ve seen that it’s not a static phenomenon, as it is something that waxes and wanes. Personhood fluctuates based on context.
Has this shift from the static to situated view affected the types of questions you’ve asked?
There is a need for our questions to be multidisciplinary. We are currently developing a technology called biomusic, which translates meaningful changes in the autonomic nervous system into auditory output. Caregivers are critical to this project: we work with them to understand the person and to determine the type and genre of music the system would output.
Biomusic technology can be used to monitor physiological reactions, which could provide caregivers with the opportunity to accomplish other tasks without being directly beside the individual. It can also detect signals related to emotions, and augment the relationship between caregivers and the individual they are caring for.
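To make the idea of translating physiological changes into sound more concrete, here is a minimal, hypothetical sketch — not the BIAPT Lab's actual biomusic algorithm — that maps a physiological signal (e.g., heart rate in beats per minute) onto a range of MIDI-style pitch numbers, so that rises and falls in the signal become rises and falls in a melody:

```python
def signal_to_pitches(samples, low_note=60, high_note=72):
    """Linearly map physiological samples onto a pitch range.

    Illustrative only: samples could be heart-rate readings; 60 and 72
    are MIDI note numbers for middle C and the C an octave above.
    """
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1  # avoid division by zero for a flat signal
    return [round(low_note + (s - lo) / span * (high_note - low_note))
            for s in samples]
```

A caregiver listening to such an output could notice a rising pitch contour (e.g., during arousal or distress) without watching a monitor, which is the monitoring-at-a-distance benefit described above.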
How did your formal training as an engineer supplement your work in the field of ethics, and now neuroethics?
It has helped shape the types of questions I ask. For brain-computer interfaces, we ask: Who has access? How should they be used? How do we give access? On consciousness, we are keen on detecting levels of consciousness in minimally conscious people and understanding how this affects medical decisions. My work on ethical issues emerged in response to all of these questions.
What’s next for you?
In disorders of consciousness, we are looking into ethical issues concerning the Adaptive Recognition Index for neuroprognostication (prediction of recovery from disorders of consciousness caused by severe brain injury (2)) in Canadian intensive care units. Right now, we are conducting data collection in 5 intensive care units across Canada, and we hope to eventually develop a framework that is more proactive than reactive.
Another project is looking at transcranial alternating current stimulation (tACS) over the superior parietal lobe of the brain. Earlier work has found that coma patients have woken up following this kind of stimulation, so we hope to maximise an individual’s capacity for consciousness and use tACS as an ignition for potential recovery. We are also looking into the ethical implications of this.
End of interview.
Stefanie’s dedication to enhancing interactions between non-communicative individuals and their caregivers could prevent caregiver burnout, and reduce the risk of neglect or family abandonment of behaviourally unresponsive individuals. Stefanie and the BIAPT Lab continue to develop novel technologies to assess levels of consciousness and cognition in non-communicative individuals.
We are so grateful to learn alongside Stefanie as she is here with us in Vancouver (and now off to Montréal!). We are looking forward to what’s to come next.
To learn more about Stefanie’s work, read her citations and visit the BIAPT Lab’s website.
References
(1) Farisco, M., Pennartz, C., Annen, J. et al. Indicators and criteria of consciousness: ethical implications for the care of behaviourally unresponsive patients. BMC Med Ethics 23, 30 (2022). https://doi.org/10.1186/s12910-022-00770-3
(2) Fischer, D., Edlow, B. L., Giacino, J. T., & Greer, D. M. (2022). Neuroprognostication: a conceptual framework. Nature reviews. Neurology, 18(7), 419–427. https://doi.org/10.1038/s41582-022-00644-7
I am a Research Assistant at Neuroethics Canada. I would like to acknowledge the notes and suggestions I’ve received from Marianne Bacani, Viorica Hrincu, Anna Nuechterlein, and Katelyn Teng. Special thanks as well to Stefanie for her patience, taking the time to chat, and for the blueberry coconut cake!