India’s enterprises are digitising at unprecedented speed. Artificial intelligence is being embedded into underwriting, fraud analytics and customer onboarding. Cloud adoption is expanding. Data ecosystems are increasingly interconnected. Digital capability is no longer optional: it defines competitiveness.
As technology scales, governance must scale with equal intensity.
The Reserve Bank of India’s FREE-AI Committee, constituted in December 2024 and reporting in August 2025, has confirmed what many practitioners have long sensed: the adoption of AI in financial services brings transformative opportunities, but if deployed without guardrails, it can exacerbate existing risks and introduce new forms of harm. The Committee’s framework, anchored in seven foundational principles (Sutras) and structured across six strategic pillars, offers the financial sector a rigorous architecture for navigating this challenge. This article examines the governance imperative through the lens of that framework.
The Misalignment Between Digital Speed and Governance Depth
Regulatory expectations are evolving beyond periodic compliance certifications. Supervisors now assess operational resilience, cyber preparedness, outsourcing risk and algorithmic accountability in real time. Governance structures designed for slower business cycles are struggling to keep pace with continuously adapting digital systems.
This misalignment creates structural risk. In many organisations, risk registers are updated quarterly, control testing remains episodic, and reporting is retrospective. Meanwhile, digital platforms operate continuously. The speed differential between technological execution and governance oversight introduces blind spots that may only surface under regulatory scrutiny or operational stress.
The FREE-AI Committee notes this directly: without a formal AI policy, different teams within the same organisation may proceed with different interpretations of acceptable risk, leading to fragmented implementation and consumer harm. The absence of board-level oversight means that senior leadership may remain unaware of the reputational and regulatory consequences of their institutions’ AI deployment choices until it is too late.
The Dual Imperative: Innovation and Risk Mitigation as Complementary Forces
A critical correction to conventional governance discourse is warranted here. There remains a perception that stronger governance constrains innovation. The FREE-AI framework rejects this binary decisively. The Committee explicitly frames innovation enablement and risk mitigation as “not competing objectives, but complementary forces that must be pursued in tandem.”
The FREE-AI architecture is built on two complementary sub-frameworks. The Innovation Enablement Framework unlocks the transformative potential of AI by building shared data infrastructure, enabling AI sandboxes for experimentation, developing indigenous financial sector AI models, and fostering institutional capacity at every level. The Risk Mitigation Framework establishes governance, protection, and assurance mechanisms across the AI lifecycle. Together, they constitute what the Committee calls the FREE-AI vision: “a financial ecosystem where the encouragement of innovation is in harmony with the mitigation of risk.”
Organisations that treat governance purely as a compliance overhead will find themselves structurally disadvantaged. Those that embed governance into their digital and AI infrastructure will be better positioned to innovate responsibly, attract regulatory confidence, and sustain institutional trust. Clarity of risk appetite and control design supports faster decision-making, not slower execution.
Six Governance Imperatives for the AI Age
Drawing on the FREE-AI framework’s six pillars and seven Sutras, and extending them into a governance architecture appropriate for India’s digital economy, the following imperatives deserve priority attention.
1. Establish Board-Approved AI Governance Policies
Just as regulated entities have board-approved policies on credit, cybersecurity, and outsourcing, the FREE-AI Committee recommends that every institution establish a board-approved AI policy. This policy must explicitly articulate the institution’s position on AI governance, ethics, and accountability; define a risk classification framework that categorises AI use cases as low, medium, or high risk; specify operational safeguards, model lifecycle governance, and liability frameworks; and ensure alignment with the DPDP Act, RBI Master Directions, and national AI governance frameworks.
This is not a documentation exercise. It is the structural foundation for ensuring that boards exercise genuine oversight over AI adoption, and that AI risks are integrated into the institution’s overall risk mitigation framework rather than managed in isolation by technology teams.
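The risk classification framework such a policy defines can be made concrete in code. The sketch below is purely illustrative: the criteria (customer impact, decision automation, personal-data use) and the scoring rule are assumptions for demonstration, not the Committee's prescribed methodology, which each board would define for itself.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    name: str
    customer_facing: bool      # does the system interact with or decide for customers?
    automated_decision: bool   # can it act without human sign-off?
    uses_personal_data: bool   # does it process DPDP-covered personal data?


def classify(use_case: AIUseCase) -> RiskTier:
    """Map a use case to a governance tier under illustrative board-set criteria."""
    score = sum([use_case.customer_facing,
                 use_case.automated_decision,
                 use_case.uses_personal_data])
    if score >= 2:
        return RiskTier.HIGH
    if score == 1:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Under these assumed criteria, automated underwriting on personal data is high risk,
# while an internal summarisation tool with no customer exposure is low risk.
underwriting = classify(AIUseCase("credit underwriting", True, True, True))
summariser = classify(AIUseCase("internal summarisation", False, False, False))
```

The value of encoding the tiers, even in a form this simple, is that the classification becomes auditable and consistent across teams, which is precisely the fragmentation risk the Committee flags.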
2. Build Consolidated, Real-Time Risk Visibility
Boards and senior leadership require a unified view of operational, cyber, regulatory, algorithmic, and third-party exposures. Fragmented reporting across functions weakens strategic decision-making. In an environment of heightened supervisory oversight, incomplete visibility is itself a governance gap.
The FREE-AI framework’s Sutra 7 (Safety, Resilience, and Sustainability) calls for AI systems that can detect anomalies and provide early warnings to limit harmful outcomes. This implies that governance mechanisms must be augmented with AI-native monitoring tools, not simply faster human oversight. The goal is continuous control monitoring and real-time compliance validation that allow early detection of deviations before they compound.
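A minimal sketch of what continuous control monitoring can mean in practice: a rolling baseline over a control metric (approval rates, model scores, latency) with an alert when a new reading deviates sharply. The window size, threshold, and z-score rule here are assumed parameters for illustration, not a standard any regulator mandates.

```python
from collections import deque
import statistics


class ControlMonitor:
    """Flags metric readings that deviate sharply from the recent baseline.

    window: number of recent readings that form the baseline
    threshold: z-score above which a reading triggers an early warning
    """

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.history: deque = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Record a reading; return True if it is anomalous versus the baseline."""
        anomalous = False
        if len(self.history) >= 5:  # require a minimal baseline before alerting
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous


monitor = ControlMonitor(window=30, threshold=3.0)
for reading in [10.0, 12.0] * 10:   # stable baseline
    monitor.observe(reading)
spike_flagged = monitor.observe(100.0)  # a sudden spike should raise a warning
```

The point is not the statistics, which are deliberately simple, but the shift in posture: the control surfaces a deviation as it happens, rather than at the next quarterly review.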
3. Embed Accountability Within Digital and AI Workflows
Digital transformation distributes responsibility across technology, data science, compliance, and business teams. The FREE-AI Committee’s Sutra 5 (Accountability) is unambiguous: accountability rests with the entities deploying AI, cannot be delegated to the model or underlying algorithm, and must be clearly assigned regardless of the level of automation.
In practice, this requires clear ownership of controls, automated documentation of decisions, defined escalation pathways, and the establishment of an AI Adoption Committee (or equivalent body) that bridges functional silos across business, risk, compliance, and technology departments. Regulators increasingly examine not just the existence of controls, but the consistency of monitoring and the independence of validation.
4. Govern the AI Data Lifecycle End-to-End
Governance of AI is inseparable from governance of data. The FREE-AI framework calls for robust internal data governance frameworks across the entire data lifecycle, from collection to deletion. Data used for AI applications must be relevant, fairly representative, and ethically sourced. Weak controls at any stage, whether poor quality checks or failure to adhere to consent obligations under the DPDP Act, can undermine the integrity of AI systems and expose institutions to reputational, legal, and operational risks.
Institutions must also govern the use of third-party and off-the-shelf AI tools. The growing deployment of GenAI applications for internal functions such as document drafting, report summarisation, and data analysis represents an underappreciated governance exposure. Clear policies must ensure that sensitive customer and institutional data remains within secure environments under organisational control.
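One concrete control such policies can mandate is an egress guard that redacts identifiers before text leaves the secure environment for an external GenAI tool. The patterns below are illustrative only; a production deployment would rely on a vetted PII-detection capability, not three regular expressions.

```python
import re

# Illustrative patterns only, covering a few common Indian financial identifiers.
PII_PATTERNS = {
    "pan": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),       # PAN format: AAAAA9999A
    "aadhaar": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),   # 12-digit Aadhaar
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def redact(text: str) -> str:
    """Replace detected identifiers with typed placeholders before egress."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text


cleaned = redact("Customer PAN ABCDE1234F, contact priya@example.com")
```

A guard like this does not replace policy, but it turns the policy statement "sensitive data remains within organisational control" into an enforceable checkpoint in the workflow.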
5. Design for Explainability, Fairness, and Human Centricity
Conventional GRC frameworks were not designed for AI-specific risks. Issues of algorithmic bias, model drift, explainability gaps, and discriminatory outcomes require governance capabilities that go beyond periodic compliance reviews.
The FREE-AI framework’s Sutra 6 (Understandable by Design) holds that understandability is fundamental to trust and must be a core design feature of AI systems, not an afterthought. Sutra 4 (Fairness and Equity) requires that AI systems be designed and tested to ensure outcomes are unbiased and do not discriminate. Sutra 2 (People First) asserts that citizens must be made aware when they are interacting with AI systems, and that human judgment must retain the authority to override AI, especially in high-stakes decisions.
For insurers, lenders, and other financial intermediaries, these principles carry significant regulatory weight. AI used in underwriting, credit assessment, fraud detection, or customer onboarding must be explainable, fair, and subject to human oversight, not merely accurate. Governance must be AI-native in its design.
6. Build Institutional Capacity and Participate in Shared Infrastructure
Governance cannot scale without human capability to sustain it. The FREE-AI framework dedicates an entire pillar to Capacity: promoting human skill development and institutional readiness to harness AI safely and effectively at every level, including the board.
Equally important is participation in the shared infrastructure that the FREE-AI Committee recommends for the financial sector: a publicly governed data infrastructure to democratise access to high-quality financial datasets; an AI Innovation Sandbox to enable responsible experimentation; and the development of indigenous financial sector AI models. These are not optional resources for large incumbents; they are the infrastructure foundations that will determine whether smaller regulated entities can participate meaningfully in India’s AI economy.
Regulatory Convergence Is Accelerating, Not Approaching
The FREE-AI Committee confirms what practitioners long described as an approaching convergence: the DPDP Act, RBI Master Directions on IT and cybersecurity, IRDAI’s digital governance frameworks, and SEBI’s oversight mechanisms are being actively updated to include AI-specific provisions. This convergence is not a future condition; it is the present environment.
The FREE-AI framework recommends a more tolerant compliance approach for low-risk AI solutions to facilitate inclusion, while requiring comprehensive governance for high-risk applications. This risk-tiered approach provides an actionable architecture for regulated entities: not every AI deployment requires the same governance intensity, but every deployment requires governance proportionate to its risk classification.
Enterprises that treat compliance as a reporting exercise will find adaptation difficult as regulatory expectations become more granular, real-time, and AI-specific. Those that embed governance into digital infrastructure, not alongside it, will strengthen credibility and long-term agility.
Governance as the Foundation of India’s AI Economy
The RBI’s FREE-AI Committee offers a clear and authoritative signal: the responsible adoption of AI in India’s financial sector is a national priority, not a compliance footnote. The framework’s Seven Sutras (Trust; People First; Innovation over Restraint; Fairness and Equity; Accountability; Understandable by Design; and Safety, Resilience, and Sustainability) are not abstract propositions. They are actionable principles intended to be woven through the entire lifecycle of AI systems, from development through deployment to retirement.
Governance is no longer a parallel function operating alongside strategy. In a digital economy powered by AI, it is foundational to strategy. Organisations that invest in integrated, AI-native oversight capabilities will reinforce resilience, protect institutional trust, and sustain competitive momentum.
Those that delay will discover that governance gaps expand as rapidly as technological ambition, and that in a sector built on trust, the cost of that discovery is far greater than the cost of prevention.