SEMINAR. The following is an overview of the seminar ‘Artificial intelligence, research and ethics: An incompatible triad?’, which took place both in person and via livestream on Monday, September 18. The seminar was part of the series ‘Research Ethics – open faculty meeting’ and was organized by the Council for Research Ethics and the Sahlgrenska Academy Research Support Office.
- You can also watch a video of the seminar via this link until Tuesday, October 17: https://play.gu.se/media/Artificial+intelligence%2Cresearch+and+ethicsA+an+incompatible+triadF/0_prywaqtv?st=320
Dean Agneta Holmäng began by noting that rapid developments in artificial intelligence have brought many new opportunities, from simplifying human contact via social media to facilitating healthcare diagnoses and making work more efficient through automation.
“However, these rapid changes also raise various practical ethical issues related to safety risks and other effects that are being discussed by researchers worldwide,” she said.
The seminar’s moderator was Jaquette Liljencrantz, who holds a PhD in anesthesiology and intensive care and is an adjunct senior lecturer at the Institute of Clinical Sciences.
Accelerating development
The first presentation was given by Olle Häggström, Professor of Mathematical Statistics at Chalmers University of Technology. He emphasized the extremely rapid pace of development in AI right now. As an example of how he has noticed this trend himself, he mentioned his book ‘Tänkande maskiner: Den artificiella intelligensens genombrott’ (‘Thinking Machines: The Breakthrough of Artificial Intelligence’). Just a couple of years after the book was published in 2021, he began to question whether he should recommend it for teaching, as certain sections were no longer relevant. An updated second edition will, however, be published shortly.
Another example is AI researcher Ajeya Cotra, who estimated in a 2020 article that AI would surpass human intelligence thirty years later, in 2050. Just two years later, she had to revise her conclusion significantly: in 2022, she expected this tipping point to occur around 2030 instead.
“The usual language of near-term versus long-term AI risk has become increasingly misleading,” commented Professor Häggström. “The crucial point could very well be just a few years down the road… maybe even this decade.”
He also highlighted GPT-4, which since its launch in March 2023 has surprised many with its unprecedented capabilities.
Concerned scientists
Many AI researchers have been caught off guard by the incredibly fast pace of development. Last spring, an open letter was signed by more than a thousand respected researchers and entrepreneurs in data science and AI, including Professor Häggström. The letter warned that humanity risks losing control. The signatories called for a slowdown to allow society to analyze the consequences of AI’s rapid and broad adoption. Professor Häggström explained that the letter attracted considerable attention and was quoted by Ursula von der Leyen in her State of the Union address in early September.
“Personally, I’m in favor of going a little bit slower to give policy-makers the time to catch up with what is happening and make the right decisions. There are market incentives and there’s so much momentum for research, and it’s not as easy as just pulling the emergency brake. But we could try.”
AI requires reflection
The next speaker, Justin Schneiderman, is a senior lecturer in experimental multimodal neuroimaging and has also been appointed as an advisor to the faculty on AI issues. He focused more on the opportunities that AI can offer within Sahlgrenska Academy’s research fields.
When it comes to ChatGPT, Justin explained that it can simplify several aspects of researchers’ work, such as conducting literature reviews or condensing longer arguments into core messages in point form. However, anyone using ChatGPT needs to take an ethical view and be aware of what kinds of text and data are fed into the tool. The risk that everything entered is stored by OpenAI (the company behind ChatGPT), and may be distributed and used in various ways, cannot be ignored.
Research funder draws up policy
Research applications are another possible use for ChatGPT and other AI tools. In the near future, research funders are likely to have policies in place on how they view this use. As an example, Justin highlighted the Swedish Foundation for Strategic Research, which recently published its thoughts on AI in research preparation in an editorial on its website. The Foundation welcomes input on this policy work from anyone with an interest.
Legislative work on AI is also underway: a new ‘AI Act’ has been proposed and is currently being processed by the EU.
Image analysis predicts cardiovascular disease
Justin also highlighted two current AI models trained with machine learning that may be of great significance for research within Sahlgrenska Academy’s fields, and ultimately for healthcare.
The first example he gave was Automated Retinal Disease Assessment (ARDA), whose algorithms were trained on data from 284,335 patients. Based on images of the retina, this AI tool can now predict heart attacks and other cardiovascular events with 70% accuracy.
The second example he mentioned is AlphaFold, an AI program that predicts the structure of proteins. The program has been trained on more than 144,000 protein structure models and has repeatedly proven superior in the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) competition.
A more local example of how AI can open up new opportunities for research is the extensive SCAPIS project, funded by the Swedish Heart Lung Foundation and led from Gothenburg. The project will allow huge quantities of data (including imaging) on more than 30,000 individuals to be analyzed, providing new insights. Justin’s own research, which involves analyzing brain imaging data acquired using magnetoencephalography, generates huge amounts of data for each research subject, and he hopes that AI can make it easier to identify patterns and relationships in the data.
He also mentioned some of the challenges that are making the implementation of AI within healthcare more difficult:
“There’s a lot of hype, a lot of excitement around AI, and it gets a lot of people interested,” he said, before going on to highlight reproducibility as a problem for research based on AI. “But to see an AI through to clinical value, there’s a lot of work that’s comparatively unappealing, and that’s something that lacks funding in any clear way.”
Researcher training course in the spring
The application window for elective postgraduate courses is now open, with new options including a course in AI. Apply by October 21: https://fubasextern.gu.se/fubasextern/info?kurs=SK00037
BY: ELIN LINDSTRÖM