Responsible AI News

RAI UK allocates £12 million to UK projects that are addressing AI advances

RAI UK (Responsible AI UK) has awarded £12 million to UK projects addressing the rapid advances in artificial intelligence. The funding forms part of a £31 million programme slated to run for four years. Projects that secured funding in this round cover three crucial sectors: law enforcement, health and social care, and financial services.

RAI UK is backed by UKRI (UK Research and Innovation). Further projects have been funded to explore ways of amplifying public voices and leveraging AI to drive productivity.

Gopal Ramchurn, Professor of AI at the University of Southampton and Chief Executive Officer of RAI UK, said the projects bring together expertise from varied fields and were chosen because they address the most pressing challenges facing society. He added that he is confident the projects will deliver interdisciplinary research that tackles complex socio-technical challenges.

RAI UK is simultaneously developing a research programme to support ongoing initiatives such as the Alan Turing Institute, the AI Safety Institute, and Bridging Responsible AI Divides (BRAID) UK.

Dr. Kedar Pandya, Executive Director of the EPSRC and Senior Responsible Owner of the UKRI Technology Missions Fund, said the technology has the potential to drive positive impact across the economy and society, and that the allocated funds will help projects apply AI responsibly within specific contexts.

Approximately £3.5 million was allocated last year to the PROBabLE Futures project, which focuses on the uncertainties of using artificial intelligence in law enforcement.

Professor Marion Oswald MBE backed the use of AI in law enforcement, saying it will help authorities tackle digital data overload and increase operational efficiency, though it also carries unknown risks with potentially dire consequences. She said the project will collaborate with authorities to develop a framework that places responsibility at its centre, which is crucial for dealing with the real-life implications a faulty AI system could have.

A further £3.5 million, approximately, has been allocated to address challenges posed by large language models (LLMs) in medical and social computing, where the technology is being adopted rapidly without full consideration of the repercussions. Maria Liakata, of Queen Mary University of London, said the project addresses socio-technical limitations that could undermine the trustworthy and responsible use of LLMs, especially in legal and medical contexts.

The remaining funds have been allocated to the Participatory Harm Auditing Workbenches and Methodologies project, which aims to maximise the benefits of AI while minimising the technology's potential harms.

Dr. Simone Stumpf said the team aims to put auditing power back in the hands of people, developing a workbench of tools that will help everyone participate in the audit process, irrespective of their knowledge of or experience with AI.

ToAI Team
Fueled by a shared fascination with artificial intelligence, the Times Of AI team brings together researchers, writers, and analysts. Through in-depth analysis of the latest advancements, investigation of ethical considerations around AI development, AI governance, machine learning, data science, automation, and cybersecurity, and discussion of AI's future impact across sectors, we aim to empower readers with the details they need to navigate this rapidly evolving field.
