Accountability by design: Regulatory challenges in infrastructure sectors

Accountability must be built into artificial intelligence (AI) systems from the earliest stages of design. 

By Sejal Gupta

The panel on “Accountability by Design: Shaping AI Regulations, Platforms, and Critical Digital Infrastructure” at the recent 7th Digital Citizen Summit (DCS) 2025, hosted by Digital Empowerment Foundation (DEF) in partnership with Centre for Development Policy and Practice (CDPP), highlighted that accountability must be built into artificial intelligence (AI) systems from the earliest stages of design. 

As AI becomes central to India’s critical infrastructure, discussions across sectors point to the same conclusion: responsibility cannot be retrofitted.


From automobiles to algorithms

The conversation repeatedly drew on the automobile sector, which gradually built a shared framework of responsibility: manufacturers meeting safety norms, owners insuring vehicles, drivers following licensing rules, governments designing road systems and even pedestrians playing their part. This evolution showed how accountability works only when it is structurally designed into the system.

Speakers also cautioned that automobile design often prioritised user convenience over broader societal interests, offering a lesson for AI. This is why privacy by design and privacy by default must be foundational. The discussion highlighted how choices made at the model-design stage, such as adopting open-weighted or closed-weighted systems, directly shape transparency, customisation and oversight. These early decisions ultimately determine how responsibility is tracked and enforced throughout an AI system’s life cycle.

Data and computation safeguards

AI introduces risks that do not align neatly with traditional data categories. Beyond data at rest and in transit, AI involves “data during computation,” which requires new forms of protection. 


Privacy-enhancing technologies, such as federated learning, differential privacy, homomorphic encryption and secure multi-party computation, were highlighted as tools that integrate privacy into system operation. These are not after-the-fact safeguards but mechanisms embedded into AI processes, reinforcing the need for accountability by design.
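To make concrete how such a safeguard can be embedded in the computation itself rather than bolted on afterwards, the sketch below shows the Laplace mechanism, a standard building block of differential privacy: calibrated noise is added to a query result before it leaves the system, so no exact individual-level answer is ever emitted. The dataset, the `dp_count` helper and the parameter choices are illustrative assumptions, not anything presented at the panel.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one record
    # changes the true count by at most 1, so Laplace(1/epsilon) noise
    # makes this single query epsilon-differentially private.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: a noisy count of people aged 40 or above.
ages = [23, 37, 45, 29, 61, 52, 34]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

The design point mirrors the panel’s argument: the privacy guarantee here is a property of how the answer is computed, decided at design time, not a filter applied to stored data after the fact.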

Power sector

In the power sector, accountability must begin with grid design and planning. As energy underpins AI deployment, questions arose about whether the sector can support the rising computational demand. The Transparency, Traceability and Accountability (TTA) framework offers a structured approach: transparency in generation and distribution, traceability across the energy lifecycle and accountability for every participating entity. 

Yet, policy decisions often begin with intent rather than evidence. 

Distribution companies continue to face financial distress, and traceability mechanisms remain limited. The India Energy Stack aims to digitise operations through granular planning and smart grid technologies, but capital and capacity constraints slow implementation. 

These gaps show how accountability in infrastructure-heavy sectors depends on systemic design decisions rather than mere regulatory compliance.

Aviation sector

The aviation sector offered another example of accountability grounded in design. Built on safety, precision and international harmonisation, aviation relies on strict global protocols and shared learning. 

A recent global positioning system (GPS) spoofing incident in Delhi, caused by a small device running a machine-learning program, highlighted how new vulnerabilities can exploit gaps in existing systems. Performance-based regulation and safety management systems allow innovation by defining desired outcomes rather than prescribing rigid procedures. 

However, fragmented oversight across the Directorate General of Civil Aviation (DGCA), the Bureau of Civil Aviation Security (BCAS) and the Ministry of Civil Aviation creates coordination challenges. The sector demonstrates that accountability is strongest when regulatory structures are integrated and responsibilities are aligned.

Contrasts in philosophy

Contrasts across sectors also illustrate how regulatory philosophy shapes outcomes. The financial sector has benefited from adaptive, light-touch regulation, voluntary compliance mechanisms and iterative learning, allowing markets to grow without widespread systemic collapse. 

This stands in contrast to the power sector, where rigid rules contributed to long-term distress and repeated bailouts. The comparison shows how regulatory design can either foster resilience or deepen dependency.

Distributed responsibility

A core theme throughout the panel discussion was that accountability in AI must be distributed across the entire ecosystem. Developers are responsible for design choices, data quality and bias mitigation. Deployers bear responsibility for context-specific implementation and continuous monitoring. 

Users must operate systems responsibly, while regulators must establish outcome-based standards and enable multi-stakeholder dialogue. Non-users often lack representation, creating a critical accountability gap in decision-making processes.

Practical constraints

Fragmented oversight structures, particularly in sectors like aviation, where DGCA, BCAS and the Ministry of Civil Aviation share dispersed responsibilities, weaken coordination and accountability. This is compounded by limited specialised technical capacity across institutions, constraining effective risk assessment and enforcement. 

Regulating emerging technologies further depends on voluntary adoption of global standards from bodies such as the National Institute of Standards and Technology (NIST) and the Institute of Electrical and Electronics Engineers (IEEE), sandbox testing mechanisms, and certification models paired with light-touch regulation: tools that all require strong institutional capability. 

Widespread reliance on closed-box AI models also limits transparency and customisation, while open-weighted alternatives demand significant resources and expertise. At the regulatory level, bodies like the Telecom Disputes Settlement and Appellate Tribunal (TDSAT) face expanding mandates without commensurate staffing, and initiatives under the India AI Mission reveal uneven capacity across states, underscoring systemic constraints in embedding accountability within AI-driven sectors.

Cross-sector experiences show that strong design choices, integrated oversight, and transparent models are essential for resilience. As India accelerates its digital and energy transitions, embedding accountability by design will determine whether AI strengthens public trust or amplifies systemic vulnerabilities.

(Sejal Gupta is a Senior Research Fellow at the Centre for Development Policy and Practice (CDPP), Hyderabad. CDPP is an independent research organisation that works to influence public policy, with a focus on the development of vulnerable populations. The author is associated with the CDPP Regulation Project covering emerging, infrastructure and consumer sectors)
