Artificial intelligence is moving at a brisk pace, and organisations around the world are scrambling to deploy the latest AI to ensure they are not left behind. If last year was all about generative AI, this year the conversation has moved on to agentic AI: AI systems that can act with greater autonomy to complete specific tasks on behalf of humans. There is no uniformity in how organisations of all sizes deploy advanced AI, perhaps owing to the absence of a global standard for scaling and deploying AI in the enterprise segment.
In times like this, credible voices need to be heard on the rapid developments in AI, the gaps in execution, and the challenges pertaining to governance. Kelly Forbes, a member of the AI Council at Qlik, firmly believes that as AI gets smarter, we need to get even smarter at working with it. Forbes feels that corporate-led AI councils have a role to play in shaping best practices and ensuring compliance with regulations.
On the sidelines of Qlik Connect 2025 in Orlando, indianexpress.com sat down with Forbes to understand the challenges enterprises face in the ever-evolving landscape of AI. Below are edited excerpts from the conversation.
Bijin: From the governance lens, what do you think enterprises are struggling with when it comes to scaling AI, despite having defined strategies in place?
Forbes: I think Mike (CEO of Qlik) put it the right way when he said that we have AI available today, but actual adoption and implementation are very small. So we haven’t reached the level we need to reach. Part of that is because we do have a few AI constraints. Sometimes it’s a deeper understanding of local policies, regulations, or data; or there might be issues around infrastructure. On a practical level for businesses, a lot of the time it’s just a lack of understanding of how AI can support them within their local capacity. We are now finally at a stage where most companies recognise the role AI will play, and they’re figuring out what that actually means in ‘my context, for my company and for my processes’. That is taking time.
Bijin: For our readers, can you simplify what an AI Council is, and with reference to Qlik, what exactly does the AI Council do?
Forbes: The AI Council has brought four of us together with very different expertise and experiences to guide and support Qlik on this journey. AI is evolving quite fast, and governance needs to adapt to that. There are a lot of questions around governance, infrastructure, what the technology can do, and how you can best support businesses. What might be happening here in the US is different from what might be happening in India or in the Middle East. The AI Council has been able to add value and support across that journey.
Bijin: During the keynote address, a distinction was drawn between generative AI and agentic AI. When we talk about agentic AI, what kind of safeguards or policy frameworks should be in place to ensure that these autonomous systems remain accountable?
Forbes: Last year we were speaking about generative AI. This year we’re speaking about agentic AI; next year we’ll probably be speaking about new developments. It’s moving very fast, but the safeguards remain the same. What we’re seeing with agentic AI is more autonomy given to AI; it requires less human input. The moment you do that, the machine runs on its own, so we have to make sure it’s running well and not making mistakes, and ask how we are keeping it accountable. There are a lot of practical processes, and we are implementing most of them: having a framework, having tools that support businesses, and keeping ahead in terms of what those standards and foundations should look like. I had a meeting just today with a few colleagues from the team, and they were explaining everything they’re doing to build different processes into their work so that AI safeguards evolve at the same pace as AI itself. You had generative AI and then agentic AI, but your safeguards are being updated at the same time as well.
Bijin: You (Qlik) are working towards extending business intelligence to organisations worldwide. Alongside business intelligence, there’s also decision intelligence, which you’ve been pushing. How can companies ensure that the AI-driven decisions they make are ethical, fair, and explainable?
Forbes: The first step is to identify what that should look like: how our work should be done and what standards we are trying to meet. I know Qlik is very actively working to stay updated with international standards, as well as those from NIST (the National Institute of Standards and Technology), the US government agency whose frameworks spell out what AI governance should look like and what the ethics and principles are. A lot of the internal work is about looking after this, trying to align, and then teaching customers and working with them to achieve that. It’s something they’re constantly educating businesses on. Last year we had a whole panel with the AI Council on responsible AI. We talked a lot about what generative AI would represent from a responsible-AI point of view and what practices and processes we needed to put in place.
Bijin: In a broader sense, regulation and governance are pretty fragmented at this point because there’s no global standard yet. Every country, and even the EU as a bloc, has a different way of looking at it. Are we any closer to a unified global standard on AI ethics and governance?
Forbes: It’s a very good question, and we did discuss that today. The answer is probably no, not yet, mostly because we still see governments trying to do things in different ways, so we still see fragmentation, and the bodies that can set standards are still working out how to do that well. At least for this year, we are not likely to see much of that, although we did see the AI Act come out of Europe. The AI Act has what’s called the Brussels effect, like the General Data Protection Regulation that regulates data: it was done in Europe, but it has an effect everywhere across the world. In some ways, you could argue that the AI Act is indirectly setting a framework.
Bijin: Talking about AI regulation versus innovation, in your experience, how best can policymakers support innovation without letting the risks of these systems slip through the cracks?
Forbes: There are always good models to look up to, governments that are doing this really well. When I see what the Singaporean and UAE governments are doing, it’s a sign of good leadership at that intersection of innovation and regulation. It’s not an easy thing to do because you have to balance it out; you don’t want to overregulate and impose restrictions on innovation. They’re doing things like regulatory sandboxes, for example, and programmes that engage industry in testing the technology, so the industry is teaching you and you are learning before you actually impose strict regulations. Japan is also doing a lot of that. Those countries are very much abreast of the potential of AI, and they don’t want to risk it.
Bijin: When we talk about generative AI model deployment with respect to governance, what kind of challenges have you come across? These models come with a host of issues like bias and factual inaccuracies; sometimes the training data lacks quality.
Forbes: The challenges start with the nature of AI right now. You’re generating and creating things, and there’s a lot of autonomy in that. You can imagine, for example, a model creating a whole new piece of art based on a Picasso drawing, and then you have copyright issues. Or you could ask a question, and it can completely hallucinate. You have biases and other issues as well. What is very much needed to prevent that is awareness, ensuring that people who interact with the technology have the necessary training and understand its limitations. When generative AI first came out, I worked on a project that supported the ASEAN countries, all the Southeast Asian countries, on how they would adapt their policies and regulations to generative AI, because what they had before for traditional AI would no longer be appropriate.
Bijin: Talking about safeguards, the terminology of ‘human in the loop’ has been romanticised by Big Tech. Do you think human oversight as of today is sufficient, or is there a need for more rigorous mechanisms?
Forbes: I think we need a better level of training and awareness among people working with these tools. I don’t think the majority of people know exactly how the systems work, and so it does become risky if those systems are applied in what the AI Act classifies as high-risk situations. We’re gradually getting to a more mature level of being able to monitor and check when things do go wrong. But yes, as AI gets smarter, we need to get smarter at working with it.
Bijin: Coming back to the AI Council, what role do corporate-led AI councils like yours play in shaping real-world governance practices in a broader sense?
Forbes: I think it’s a crucial role. Companies operating in this space now need to bring in external expertise to guide their work. I hear a lot from investors in the financial sector that they want to see some sort of assurance that you have processes, or people in place, guiding you along the way. Investors want to know: who do you have on the board and on the leadership team? Do you have the right people guiding your company’s practices and processes? That is the role the AI Council plays here, and it’s shaping up to be the right practice across the world. You see a lot of that in the AI Act now, and in recommendations coming out of governments: it is essential to have the right people in the right positions.
Bijin: We are now seeing job roles being transformed. What kind of cross-disciplinary training should be prioritised at this point in time to keep pace with this fast transformation of jobs?
Forbes: I think there are two sides to this. AI will augment us, and it will bring extreme levels of efficiency for people who know how to work with it well. For a lot of people who don’t have the necessary access or skills, it will become very difficult, and we might see some inequality there. The question becomes: how do we make sure that everyone has the training and preparation to use and work with AI? Think about that in a global context, in countries or remote regions where people don’t yet have full access to the internet. How can we bring in AI and expect it not to disrupt the workforce when the reality is that the world is already not on an equal footing? At the same time that we bring in the technology, we need to be upskilling people. The countries doing this very well are those with government partnerships in place; they are skilling people. There’s a lot of education and awareness happening in the leading economies.
Bijin: If you had to predict, what’s the next big ethical question the AI industry will face, say, in 2026?
Forbes: Big question. I wouldn’t attempt to answer it. I think we’re going to see exponential growth now; it’s just going to accelerate, and it is very hard to project the exact direction. Some AI experts have said we are only a few years away from AGI (artificial general intelligence); other experts disagree. We have to observe and see. I think anyone who speaks to that with certainty is probably lying.
The author is attending Qlik Connect 2025 in Orlando, US, at the company’s invitation.