Dr. Ajay Sood, prior to being appointed as India’s third Principal Scientific Advisor in 2022, was one of India’s foremost physicists at the Indian Institute of Science, Bangalore. He explains to The Hindu where India is positioned in the domains of artificial intelligence and quantum computers and why the world is on the cusp of major technological change. Excerpts:
How do you view the emergence of DeepSeek? Should India be worried and do we have the resources to catch up?
DeepSeek is, of course, a wake-up call, but maybe more for the Americans than for us. I find the claim that it was developed for $5 million implausible; it is probably much more. Recent reports seem to attest to that: they had assembled 15,000 to 20,000 GPUs (Graphics Processing Units) in some other context and are using them. Nevertheless, there is innovation involved in making them work in parallel. Instead of training all 660 billion parameters at once, they did it sector-wise and had ways to connect them in parallel. That is the breakthrough, though I must add that others have been using this approach too, even in India. In our AI Mission, we have addressed several issues. We have the computing resources, and this mission will develop the necessary foundational AI models. You need the data centres where all these computers are available. The government has floated a tender for the private sector to set up compute facilities, and the government will be a buyer of that compute. The government may not need all of it, but will buy whatever it wants. So 18,000 GPUs' worth of compute is already being planned, and there are seven to eight private sector players in India setting up these computing facilities. To develop foundational Large Language Models (LLMs), you need the data sets to train your model. How will you train without compute facilities? That is what is being addressed now.
The way India has now announced an AI Mission and decided to have its own foundational AI models, it seems that we are in a reactive mode. Is it essential that India develop its own models, rather than tweak what’s available to custom applications?
Having foundational models is mandatory. In fact, we had decided on an AI Mission in 2019 itself at the PM-Science, Technology, Innovation Advisory Council (PM-STIAC), of which I was a member. But we lost two and a half years to COVID-19. I would not say that we are chasing AI.
Whether we must have our own foundation model or take an open-source model and reconfigure it is the debate. We have our own demands. We will have our own use cases. Our demography is very different. Our diversity is very different. All of that will matter when you want to train a model. An open AI model will not be trained on our culture. However, we must do both.
But do you really see having our own foundation models leading to a commensurate gain to our economy, jobs?
The answer is yes. If you do not, you will remain in service mode. You will never come out with breakthroughs in the development of the technology. We have reached a level, but that is not the end of the world. You can't just parachute onto something and say, 'I'll start working from wherever you are and build on that.' It doesn't happen like that in technology and science.
As a scientist, do you see the development of AI as generally good for humanity?
My answer is yes. We need to see it as something that supports and does not replace. If AI can automate routine processes, then our people can start from there. There is nothing wrong with that.
AI appears to be qualitatively different from software because it can often come up with results that people cannot always comprehend.
I agree, but that is why you need explainable AI, and people are developing it now. If I get an answer, I should be informed of the source that guided the AI to that answer. The Ministry of Electronics and Information Technology has a report on this, which was up for public consultation until February 28. It has taken care of all these things.
Recently, Microsoft claimed success with developing Majorana 1, the first quantum chip to be powered by a Topological Core architecture. How significant is that – especially when quantum computers still aren’t solving real-world meaningful problems?
Topological qubits [a way to store quantum information that could lead to more powerful computers] were theoretically known. They have demonstrated an 8-qubit machine. If that is as successful as they say it is, scaling up will be much faster. Topology protects the qubits from defects and disturbances, making them more rugged. That is the whole point of a topoconductor, and it is a triumphant moment. Thirty years ago, this was basic science, which seemed so hypothetical and impossible, and today it is a reality. I think it's amazing.
Do you think everybody working on quantum computers should pursue this direction of science? Should India too work on Majorana?
There are groups in India who are working on this. We are not at absolute zero. The question is that they have to take it to technology. Microsoft's group of theoretical physicists and computer and materials scientists all worked on this for 15 years. They had not given up even though all the hype was on superconducting qubits [a parallel approach to quantum computing]; that is the point you have to see. They knew that they were on the right path. We should not be in a rush to condemn something or throw it out. So it's not that superconducting qubits will be out. We have to wait and see, because in quantum computing we still don't know which model will win. And then the next paradigm is quantum AI. This could mean training AI models using quantum computers; I can't even imagine what that will mean. Maybe whole new ways of understanding models and training. That's the beauty of this field. We are at the cusp of a new era.
Published – March 09, 2025 10:03 pm IST