When Albania introduced Diella, an AI system designed to help citizens navigate public services, it marked more than just another government IT upgrade. Diella is not only a digital assistant embedded in the country’s “eAlbania” platform—it has also been formally appointed as the world’s first “Minister of State for Artificial Intelligence,” responsible for overseeing procurement processes to reduce corruption. For a small nation often overlooked in global tech conversations, Albania has put itself at the center of a debate with implications for governments worldwide: What happens when AI moves from helping citizens file forms to shaping the very way states operate?
The answer could redefine the social contract. AI in public service carries enormous promise: faster responses, reduced bureaucracy, cost savings, and data-driven policymaking. But it also introduces new risks: algorithmic bias, diminished human accountability, and the unsettling prospect of machines holding formal authority. Diella, as an early experiment, offers a glimpse of both futures.
Why governments are turning to AI
Governments everywhere struggle with inefficiency. Citizens routinely wait weeks—or months—for permits, licenses, or benefit approvals. Public servants spend countless hours verifying documents, processing forms, and answering repetitive inquiries. These bottlenecks don’t just frustrate individuals; they erode trust in institutions.
AI systems offer a way out. Virtual assistants like Diella can answer thousands of citizen questions simultaneously, reduce wait times, and process applications with minimal human involvement. For governments facing tight budgets, AI promises scalability without proportionally scaling headcount.
Beyond efficiency, AI provides new analytical power. Properly deployed, algorithms can help ministries anticipate unemployment trends, detect fraud in tax filings, or predict infrastructure needs. The capacity to turn raw administrative data into real-time insights could make governance not just faster, but smarter.
Diella as a case study
Launched in January 2025 by Albania’s National Agency for Information Society, Diella was initially designed as a digital assistant on the eAlbania portal. Its first task: help citizens access public services and issue digital documents more seamlessly.
In September 2025, Diella made headlines when Albania formally appointed it as “Minister of State for Artificial Intelligence.” The move was largely symbolic, but it conferred official responsibility for public procurement, one of the country’s most corruption-prone areas. The idea: an incorruptible algorithm, operating transparently, could make contracting fairer and more efficient.
International observers were divided. Some praised Albania’s boldness, calling Diella a pioneering leap into the future of digital governance. Others worried about the precedent: can a system designed and maintained by humans ever be truly impartial? And what does it mean for democratic accountability if a non-human agent holds executive authority?
Benefits of AI in public service
Accessibility
AI assistants can operate 24/7, in multiple languages, and through mobile interfaces—making government services available to rural communities, non-native speakers, and people with disabilities who might otherwise struggle to access them.
Cost efficiency
Once deployed, AI systems can scale cheaply compared to human staff. This allows governments to deliver more services without increasing payroll expenses, an attractive proposition for cash-strapped administrations.
Transparency and anti-corruption
Diella’s procurement mandate highlights a powerful use case: reducing corruption. By automating contract evaluation with clear, auditable rules, AI can close loopholes that human officials might exploit. If designed with transparency in mind, algorithms could even publish their reasoning to the public, boosting trust.
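To make the idea concrete, here is a minimal sketch of what rule-based, auditable bid scoring could look like. Everything here is hypothetical—the criteria, the weights, and the vendors are illustrative, not Diella’s actual procurement logic—but it shows the key property: every score comes with a published formula and a step-by-step audit trail that can be shown to the public.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    vendor: str
    price: float             # total bid price
    delivery_days: int       # promised delivery time
    past_performance: float  # prior-contract score, 0.0-1.0

def score_bid(bid: Bid, max_price: float, max_days: int) -> tuple[float, list[str]]:
    """Score a bid with fixed, published weights and return an audit trail."""
    audit = []
    # Lower price scores higher, normalized against the published ceiling.
    price_score = 1.0 - (bid.price / max_price)
    audit.append(f"price: 1 - {bid.price}/{max_price} = {price_score:.2f}")
    # Faster delivery scores higher, normalized against the published deadline.
    delivery_score = 1.0 - (bid.delivery_days / max_days)
    audit.append(f"delivery: 1 - {bid.delivery_days}/{max_days} = {delivery_score:.2f}")
    audit.append(f"past performance: {bid.past_performance:.2f}")
    # Published weights: 50% price, 20% delivery, 30% track record.
    total = 0.5 * price_score + 0.2 * delivery_score + 0.3 * bid.past_performance
    audit.append(f"total: 0.5*{price_score:.2f} + 0.2*{delivery_score:.2f} "
                 f"+ 0.3*{bid.past_performance:.2f} = {total:.2f}")
    return total, audit

bids = [
    Bid("Alpha Ltd", price=90_000, delivery_days=60, past_performance=0.8),
    Bid("Beta SA", price=70_000, delivery_days=90, past_performance=0.6),
]
results = {b.vendor: score_bid(b, max_price=100_000, max_days=120) for b in bids}
winner = max(results, key=lambda v: results[v][0])
```

Because the weights are fixed in advance and every arithmetic step is recorded, a losing bidder—or a journalist—can recompute the outcome independently, which is exactly the loophole-closing property the procurement use case depends on.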
Responsiveness during crises
Pandemics, natural disasters, or economic shocks overwhelm bureaucracies. AI systems can handle surges in citizen requests, help triage medical resources, or distribute emergency aid faster than human officials alone.
Risks and challenges
Algorithmic bias
AI is only as impartial as the data it is trained on. If past procurement decisions favored certain companies or excluded minority groups, an AI system could reproduce these biases at scale.
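One simple way to surface this risk before training anything is to measure award rates across groups in the historical data itself. The sketch below uses an invented toy dataset (the regions and outcomes are assumptions for illustration); the point is the check, not the numbers: a large gap between groups flags data that would teach a model the same skew.

```python
from collections import Counter

# Hypothetical historical records: (vendor_region, contract_awarded)
history = [
    ("capital", True), ("capital", True), ("capital", True), ("capital", False),
    ("rural", False), ("rural", False), ("rural", True), ("rural", False),
]

def award_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Fraction of bids that won a contract, broken down by group."""
    awards, totals = Counter(), Counter()
    for region, won in records:
        totals[region] += 1
        awards[region] += won
    return {region: awards[region] / totals[region] for region in totals}

rates = award_rates(history)
# The disparity a model trained on this data would inherit and reproduce.
gap = max(rates.values()) - min(rates.values())
```

Audits like this are routine in fairness reviews; the harder question—whether a gap reflects bias or a legitimate difference—still requires human judgment, which is the article’s broader point.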
Accountability gaps
If an AI denies someone benefits, delays a permit, or awards a contract unfairly, who is responsible? The programmers? The agency deploying the system? Or the AI “minister” itself? Without clear accountability, citizens may find themselves trapped in bureaucratic limbo with no one to appeal to.
Security and manipulation
Hackers targeting an AI minister could wreak havoc. Malicious actors might manipulate procurement data, alter decision thresholds, or shut down services altogether. A single vulnerability in an AI-driven state could compromise millions of citizens.
Erosion of human judgment
Governments deal not only with data but with values, empathy, and context. Delegating too much to AI risks reducing governance to technical optimization, overlooking the human dimensions of justice and fairness.
Lessons from Diella
Diella is an instructive case for the global community. Appointing it as a minister was a powerful gesture, but it raised questions about democratic legitimacy. Other governments may prefer to frame AI as a tool, not an authority. Procurement is also an interesting test case. If AI can reduce corruption in contracting—long one of the most problematic areas of public administration—it will strengthen the case for wider adoption. Above all, transparency is non-negotiable. Citizens will only trust AI governance if they can see and understand how decisions are made. Black-box algorithms will undermine legitimacy.
The broader future of AI in government
Many governments are already using AI to manage traffic flows, optimize energy use, and monitor air quality. Extending these systems could turn cities into laboratories for responsive governance.
AI could help forecast economic downturns, anticipate migration patterns, or model the impact of climate change on infrastructure. These insights could make policymaking more proactive.
Just as Netflix recommends movies, AI could recommend training programs, healthcare screenings, or benefit programs tailored to each citizen’s needs. This could increase uptake and efficiency—but raises major privacy concerns.
Beyond administration, AI might eventually support democratic processes: moderating online debates, flagging disinformation, or even assisting citizens in drafting policy proposals. But here, the risk of manipulation is especially acute.
Guardrails for AI-driven government
If the future is to include more “Diellas,” guardrails are essential. AI should assist, not replace, human decision-makers—especially in sensitive areas like justice or social welfare. Systems must provide clear explanations for their decisions, and citizens should be able to appeal AI rulings to human officials. Strong privacy protections and security protocols are necessary to prevent misuse of sensitive citizen data. Governments should adopt AI charters that enshrine fairness, inclusivity, and respect for human dignity. And just as countries collaborate on nuclear safety, a global framework for AI in governance may be needed to set standards and prevent misuse.
Between promise and peril
Diella is both an experiment and a provocation. By naming an AI system as a minister, Albania has forced the world to confront difficult questions about authority, accountability, and trust in the digital age. The benefits of AI in government—efficiency, transparency, and scalability—are real and urgently needed. But the risks—bias, opacity, and the erosion of human judgment—are equally profound.
The future of AI in public service will not be decided by technology alone but by political choices: how leaders design, regulate, and communicate these systems to their citizens. Governments that strike the right balance—harnessing AI’s strengths while preserving accountability and human values—may create more responsive, fairer institutions. Those that do not risk replacing slow bureaucracies with fast but unaccountable machines.
Diella may be the first AI minister, but it will not be the last experiment of its kind. As more governments test the boundaries of AI adoption, one truth becomes clear: technology alone won’t deliver trust, fairness, or legitimacy. These outcomes depend on careful design, governance, and human oversight.
At Zarego, we help public institutions and private organizations navigate this exact balance. We build responsible, secure, and transparent AI systems that not only improve efficiency but also uphold accountability and trust. From digital assistants and automated workflows to large-scale data architectures and ethical AI frameworks, we work with teams that want to move from experimentation to lasting impact.
If you’re exploring AI in government or public service, let’s talk.


