Responsible AI News

ESG research shows serious industry gaps in responsible AI
Artificial intelligence (AI) has spread into almost every industry, altering corporate processes and redefining the limits of what is possible. However, as the technology advances, ethical AI practices become increasingly important. TechTarget's Enterprise Strategy Group (ESG), with sponsorship from Qlik, conducted a thorough study that highlights this point.

The study sheds light on the present level of responsible AI adoption and highlights the urgent need for strong ethical frameworks, open and honest operating procedures, and industry-wide cooperation to successfully negotiate the challenges posed by integrating AI into business operations.

The ESG study report reveals a variety of useful data on the acceptance, obstacles, and strategic activities around responsible AI. One of the most surprising findings is the widespread use of AI technology, with 97% of surveyed firms actively engaging in AI. Furthermore, 74% have already integrated generative AI technology into production, indicating a considerable shift toward AI-driven operations across many industries.

The study also highlights a gap between investment and strategy. Even though 61% of businesses are spending a sizable amount on AI, a troubling 74% of them acknowledge that they do not have a thorough, organization-wide strategy for responsible AI.

This underscores how important it is to close the gap between strategic planning and financial expenditure in order to guarantee the ethical use of AI technologies. The research also clarifies the many obstacles organizations face when implementing AI in their systems.

Around 86% of respondents struggle with transparency and have difficulty explaining the outputs their AI systems produce. Furthermore, almost 99% of companies struggle to comply with the complex web of AI standards and laws, highlighting the demanding regulatory environment surrounding AI technology.

Despite these issues, almost 74% of organizations are practicing responsible AI, indicating a growing recognition of its critical value. Nonetheless, more than a quarter of firms have encountered negative operational impacts, regulatory issues, and market delays as a result of insufficient responsible AI measures.

According to the Principal Analyst at ESG, their research not only demonstrates inadequate execution of responsible AI practices but also validates the increasing prevalence of AI implementation across sectors. The urgency for an effective data governance system and a solid foundation that upholds ethical principles increases in tandem with the acceleration of AI initiatives within organizations. The objective of this study is to offer organizations direction on how to foster ethical innovation that is consistent with their organizational objectives and moral values.

ToAI Team
Fueled by a shared fascination with Artificial Intelligence, the Times Of AI journalists team brings together researchers, writers, and analysts. We aim to provide a comprehensive understanding of AI for the broad audience of the Times Of AI. Through in-depth analysis of the latest advancements; investigation of ethical considerations around AI development, AI governance, machine learning, data science, automation, and cybersecurity; and discussions about the future impact of AI across various sectors, we aim to empower readers with the details they need to navigate this rapidly evolving field.
