Towards Regulation of Artificial Intelligence in India: Starting with how 'AI' is being understood by regulators
- Ashwini Sharma

The proposed Indian AI Bill defines “Artificial Intelligence (AI)” as “computer systems or applications capable of performing tasks that typically require human intelligence, including but not limited to decision-making, language processing and visual perception.”
This definition is deeply unsatisfactory - not merely in its phrasing, but in what it reveals about how the Indian regulator imagines the nature, scope, and future of AI systems. Definitions are never neutral; they disclose the mental model of the drafter. Here, that model is both dated and constricting. A workable regulatory framework for AI in India requires, at a minimum, a definition that accurately captures the subject matter it seeks to regulate.
First, the definition implicitly suggests that AI qualifies as such only to the extent that it replicates or displaces human intelligence. If an AI system does not mirror human cognition, the implication is that it falls outside the conceptual core of “AI.” This is a fundamental misunderstanding. Modern AI systems are valuable precisely because they do not operate like humans. They excel at scale, speed, dimensionality, precision, and pattern recognition far beyond human cognitive limits. To treat AI as a disembodied analogue of human intelligence is to anthropomorphize a technological phenomenon that is, in reality, orthogonal to human cognition in many respects.
Lawmakers are, of course, entitled to define concepts as they see fit. But the real question is not whether a definition can be chosen; it is whether the definition is capable of covering the phenomenon it seeks to regulate, not just as it exists today, but as it is already evolving.
Second, the proposed definition evaluates AI almost exclusively through the lens of outputs - “decision-making, language processing and visual perception.”
This output-centric framing is analytically thin. AI systems are not merely output-producing tools; they are inferential systems built on architectures that enable learning, optimisation, adaptation, and environment-shaping behaviour. An epistemically sound understanding of AI requires attention to what the system is, not merely what it appears to do.
Admittedly, legislatures have historically avoided deeply architectural or technical definitions, preferring functional abstractions. That instinct is understandable. However, abstraction should not come at the cost of conceptual distortion. A definition that ignores the system-level nature of AI risks misclassifying or altogether excluding entire categories of AI systems - particularly embodied, distributed, and self-organising systems that do not neatly reduce to human-analogous “tasks.”
To assess the Indian proposal properly, comparison with the EU AI Act becomes unavoidable. The EU Act defines an “AI system” as:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The contrast is instructive. The EU definition deliberately decouples AI from human intelligence, focuses on autonomy and post-deployment adaptiveness, and explicitly recognises that AI systems can influence both physical and virtual environments. In doing so, it accommodates self-learning robots, cyber-physical systems, and other advanced AI forms without requiring constant definitional revision. It is, therefore, markedly more future-proof.
In short, while the Indian definition reflects an older, anthropocentric imagination of AI as “human intelligence in silicon,” the EU approach recognises AI as a class of machine-based inferential systems whose significance lies not in imitation of humans, but in their capacity to operate autonomously, adapt dynamically, and restructure environments. The difference is not semantic - it is conceptual.