OC11 - Monitoring and surveillance of suicide

An Investigation of the Factors That Predict the Acceptance of Artificial Intelligence Systems for Suicide Prevention
August 30 | 12:00 - 13:00

Suicide is one of the leading causes of premature mortality, making suicide prevention a global health priority. Substantial effort has been made to improve the detection and assessment of suicide risk, and the treatment and management of individuals at risk of suicide, but these methods remain limited. Artificial intelligence (AI) systems have been developed to address these limitations. Current Artificial Narrow Intelligence (ANI) systems use complex algorithms to perform a specific task, while future-envisioned Artificial General Intelligence (AGI) systems will be able to perform a broad spectrum of tasks and evolve their functional capabilities beyond human-level intelligence. Despite their emerging utility in suicide prevention, the acceptance of ANI and AGI systems remains unknown. The present study assessed the acceptance of ANI and AGI systems for suicide prevention by investigating the factors that predict acceptance. Based on the Unified Theory of Acceptance and Use of Technology framework, the factors considered included: (i) performance expectancy, (ii) effort expectancy, (iii) social influence, (iv) facilitating conditions, and (v) trust. Individuals from the Australian general community were invited to complete an online survey, indicating their acceptance of six hypothetical suicide prevention scenarios that described the use of AI systems (three scenarios described ANI systems, three described AGI systems). Data from 65 participants were analysed. Results indicated that performance expectancy, social influence, and trust predicted the acceptance of ANI systems for suicide prevention, but only trust predicted the acceptance of AGI systems. Overall, participants demonstrated greater acceptance of ANI systems than AGI systems. A key finding is the importance of trust in determining the acceptance of AI systems for suicide prevention, irrespective of whether the systems will possess capabilities beyond human-level intelligence.
The present study contributes to the knowledge base on the acceptance of AI systems for suicide prevention and reinforces the need for AI system developers and other stakeholders to design and implement systems in a manner that is explainable and transparent, thereby supporting their acceptance and ultimate adoption.