As a transformational technology, AI will have a global impact and considerable ramifications for national economies, democracies, and societies. Although many countries are developing AI governance frameworks, the regulation of AI is only one of the essential elements that countries need to consider to achieve AI sovereignty.
AI sovereignty is not a universally defined concept. Building upon previous work on digital sovereignty and, particularly, what I have described as “good digital sovereignty,” I here define AI sovereignty as the capacity of a given country to understand, develop, and regulate AI systems. As such, AI sovereignty is essential to retaining control, agency, and self-determination over AI systems.1 Importantly, AI sovereignty will likely become an increasingly relevant and strategic topic as AI technologies continue to develop, are adopted, and acquire a significant role in various aspects of society and democratic governance, beyond the digital economy. The impact of AI advancement includes a wide range of critical sectors such as defense, infrastructure management, healthcare, and law and justice.
A layered framework is needed to analyze which elements are essential to establish a country’s AI sovereignty. These elements make up what I call key AI sovereignty enablers (KASE) and include sound governance for data and personal data as well as algorithmic governance, strong computational capacity, meaningful connectivity, reliable electrical power, a digitally literate population, solid cybersecurity, and an appropriate regulatory framework.
I argue that sound governance,2 regulation, research, and development in all the elements of the AI value chain are essential not only to achieving economic growth, social justice, and industrial leadership, but also to asserting AI sovereignty and avoiding overreliance on exclusively foreign AI systems, which would likely transform the recipient country into a digital colony. Importantly, the purpose of this article is not to advocate for AI autarky, that is, fostering fully self-sufficient national AI industries, nor to deny the ample benefits that digital trade and cooperation can produce, but rather to discuss how countries could achieve a sufficient level of strategic autonomy, which entails grasping the functioning of AI systems, developing such systems rather than being mere consumers of them, and regulating them effectively.
Through careful consideration of each of the KASE and their interconnection, countries can build what I call an “AI sovereignty stack.” In this governance framework, the authorities in charge of overseeing each KASE should be enabled to cooperate with other authorities from different sectors (including in consumer protection, data privacy, financial services, energy, and telecom infrastructure) in order to facilitate smooth coordination and information sharing. This layered structure may reduce the country’s exposure to the technological choices of foreign private or public actors and simultaneously increase its agency and self-determination over and through AI systems.
Luca Belli is a professor of digital governance and regulation at Fundação Getulio Vargas (FGV) Law School, Rio de Janeiro, where he directs the Center for Technology and Society (CTS-FGV) and the CyberBRICS project.
For countries in the Global South, AI sovereignty should be a policy priority. The KASE require considerable planning, resources, and implementation capacity, but they are highly strategic objectives that reinforce national sovereignty and empower countries to resist possible adverse conditions spanning from the extraterritorial effects of foreign regulation to the imposition of foreign sanctions and the increasingly frequent disruption of supply chains.
It is important to acknowledge that not every country may be interested in, or need, a fully self-sufficient national AI industry. In the case of Brazil, the proposed KASE framework will illuminate whether Brazilian policy choices and governance arrangements can allow the country to assert AI sovereignty or will lead to AI dependency.
Data make up the lifeblood of AI systems. Access to diverse, high-quality data is essential for training and improving AI models. Importantly, depending on the type of AI at stake, the data fed into AI systems can be personal, governmental, confidential, and/or copyrighted, among other data types. The multitude of data types introduces a fair amount of complexity and the need for regulatory compliance in the processing of this information. Hence, developing AI capabilities and sovereignty requires both the availability of large volumes of heterogeneous data and control over such data, including rules governing how they are collected, stored, processed, and transferred to third countries.
Countries with large and diverse populations, as well as consolidated data collection practices and well-structured data policies, will indubitably have a competitive advantage in securing their AI sovereignty. However, few countries enjoy this privilege. So, countries should consider establishing shared data policy frameworks at the regional level or within existing international governance mechanisms, so that national data assets can be shared under agreed norms. (Latin America could craft its own regional framework, learning useful lessons from existing mechanisms, such as the Council of Europe’s Convention 108 and the Malabo Convention.) This strategy would allow the usage of much larger and diversified data pools, providing juridical certainty for AI researchers and developers while protecting the rights of personal data subjects, defending intellectual property rights, and preserving the public interest at the same time.
Particularly, sound data governance allows a country to protect its citizens’ data privacy, ensure national and informational security, and harness the value of data for national development. Brazil made considerable progress in data governance by structuring one of the most progressive and refined open data policies and by adopting a last-generation data protection framework, the Lei Geral de Proteção de Dados (LGPD). The enforcement of the LGPD, however, remains very embryonic, especially regarding new generative AI systems.
Furthermore, personal data collection is concentrated in the hands of a few foreign tech giants, primarily as a result of so-called zero-rating mobile internet plans, in which data usage for a few applications selected by the mobile internet operator—typically, dominant social media companies—does not count toward the users’ total data consumption. Thus, the government is unable to harness personal data as a national asset. Lastly, data security also remains very patchy, given the lack of a cybersecurity law and regulation on personal data security.
Software algorithms are the foundation of AI systems, enabling machines to perform tasks and make decisions. Importantly, algorithms can be both the subject and facilitator of regulation. On the one hand, the development and deployment of algorithms can at least partly give rise to risks and social problems, triggering the need for regulatory intervention. On the other hand, algorithms can support the regulatory intervention itself, as they are increasingly useful in the elaboration and implementation of regulation.
Algorithm development, deployment, and regulation are all equally important dimensions of algorithmic governance. Developing and owning proprietary software provides a considerable competitive advantage and allows a country to embed its normative values within the software. Investing in research and development of AI algorithms, while also addressing the potential risks that they pose, can enormously enhance a country's technological capabilities and reinforce AI sovereignty.
Hence, the promotion of multistakeholder cooperation to develop software algorithms can enhance AI sovereignty either when domestic players are stimulated to develop proprietary software or when software is developed open-source through a collaborative process embraced—or even led—by national stakeholders. President Luiz Inácio Lula da Silva’s first administration was a true pioneer in employing a collective approach to digital sovereignty, having promoted free and open-source software as a strategic objective for national development as early as 2003. Such a policy not only enhanced Brazil’s strategic autonomy from foreign software producers but also increased national understanding and development of software. Unfortunately, this policy was reversed by President Michel Temer’s administration in 2016, de facto bringing about the so-called platformization of the country’s public administration, which now relies primarily on foreign software providers.
Despite the political turbulence, over the past two decades, Brazil has developed several industrial policy instruments aimed at fostering the national software industry. However, the software development sector has not thrived as much as it could, primarily due to inconsistent policies and an absence of regulations focused on stimulating organic software development and implementation, including a lack of capital to jump-start the industry. Particularly, Brazilian software policies have lacked complementary instruments to stimulate supply and demand, especially compared to countries like China, where public procurements of nationally developed software are common; India, where digital public infrastructure has been established through the India Stack; or South Korea, where in the late 1990s capacity building efforts were organized to foster demand.
Training complex AI models and processing large datasets require substantial computational resources. Particularly, the most advanced AI systems, such as generative AI, can be remarkably compute-intensive due to their increased complexity. Ensuring continuous access to sufficient computational capacity should be seen as a key strategic priority.
The availability of high-performance computing infrastructure depends on access to multiple factors, spanning from semiconductors—including chips specifically designed for AI applications as well as latest-generation graphics processing units—to specialized servers tailored to AI specificities in data centers. In this respect, it is interesting to note that some of the first policies adopted by the current Lula administration have been the reintroduction of the national support program for the development of semiconductors (known as “PADIS”) as well as the suspension of the decision from former president Jair Bolsonaro’s administration to sell the National Center for Advanced Electronic Technology (Ceitec), which is the only semiconductor producer in Latin America.
The availability of cloud computing resources by itself is not enough to assert AI sovereignty; cloud providers must also fully comply with national legislation. A telling example is the online education platforms that operate in Brazil. Two major U.S. tech companies supply these platforms nationally, but neither even states whether it complies with the Brazilian LGPD, despite the law being fully in force since 2021.
Meaningful connectivity—allowing users to enjoy reliable, well-performing, universally accessible internet infrastructure for an affordable price—plays an instrumental role for AI systems to function optimally and be accessible to a wide population. Seamless connectivity facilitates data exchange, collaboration, and access to cloud-based AI services. It enables real-time applications and supports the development and deployment of AI technologies across various sectors, contributing to the construction of a country’s AI sovereignty.
Over the past ten years, Brazil has made enormous progress in promoting internet penetration. The cost of connectivity has considerably declined while the connected population has doubled in a decade. Yet, such a rosy picture belies less visible digital divides in the quality of internet access. Most of the internet-connected Brazilian population is de facto only partially connected due to low-quality access.
In fact, more than 70 percent of the Brazilian connected population, and around 85 percent of the lower income population, has access only to a reduced set of apps included in so-called zero-rating plans. As such, user attention and data collection are concentrated in a remarkably limited number of services—typically dominant social media platforms—making it particularly challenging for any other business to develop complete sets of personal data that can be used to train AI models.
As AI systems grow in relevance and size, they require a stable and robust supply of electrical power to operate effectively. Ensuring reliable power infrastructure and access to affordable electricity is necessary for maintaining uninterrupted AI operations. In this regard, Brazil is probably one of the best-placed countries to support the expansion of AI infrastructure: it is not only energy independent but also in recent years has reached approximately 85 percent of its annual energy production via renewables, especially hydropower.
However, the national power grid is not without weaknesses. In the short term, Brazil’s energy supply is relatively secure thanks to the complementarity of various energy sources to hydropower, but the lack of structural planning and the possibility of adverse hydrological conditions—which have been observed in recent years—can considerably increase the cost of energy. Hence, despite having developed a strong power infrastructure, Brazil requires stronger contingency planning to support the deployment of power-hungry technologies and to prevent potential dependencies on external sources.
Enhancing the digital literacy of the population through capacity building, training, and multigenerational education is essential not only to achieving a skilled AI workforce but also to fostering cybersecurity and, ultimately, national sovereignty. Investing in AI education, research, and development helps nurture a pool of talented AI professionals, while spreading an understanding of how to make the best use of technology. A sound education strategy is therefore vital to upskill the national population from passive consumers of digital technology to prosumers that can develop technology and innovate.
A robust talent pipeline of AI researchers, engineers, and data scientists enables a country to develop and maintain its AI capabilities, increasing its ability to export technology and reducing its likelihood of becoming a digital colony. It is highly promising that the recently elected federal government in Brazil has already adopted a new National Policy for Digital Education.
However, the policy treats digital literacy as a priority only for new generations of students, ignoring the fact that virtually no one in Brazil—as in most other countries—has received this type of education, leaving the majority of the population digitally illiterate. Such a situation is particularly risky in the context of accelerated digital transformation and automatization, where it is necessary for all individuals to understand the functioning of technology, especially those whose labor, social, and economic conditions are likely to be affected by the deployment of AI systems.
AI systems are susceptible to cybersecurity threats and can be used to perpetrate cyberattacks, and critical AI infrastructure can itself come under attack. Brazil has recently enacted personal data protection laws as well as a considerable number of sectoral cybersecurity regulations, spanning the telecom, banking, and electricity sectors. While such progress has allowed the country to climb the International Telecommunication Union’s Global Cybersecurity Index, this advancement must again be taken with a grain of salt.
Indeed, Brazil still lacks a cybersecurity law and a national cybersecurity agency, although both have recently been proposed, in a study produced by the Center for Technology and Society at Fundação Getulio Vargas and in a draft bill formulated by the Brazilian presidency. The country’s highly fragmented approach to cybersecurity—driven by the initiatives of sectoral agencies with no general competence in cybersecurity and frustrated by the lack of a coherent national strategy—represents a major vulnerability. Brazil has not yet managed to create a solid governance framework to connect, coordinate, and leverage the remarkable amount of talent that it produces for the cybersecurity sector.
A comprehensive governance framework that encompasses ethical considerations, data protection laws, and AI regulations is crucial for AI sovereignty. The Brazilian National Congress is discussing a new bill for an AI regulatory framework to help protect citizens’ rights, promote fairness, and prevent discrimination and other potential risks, establishing sustainable, clear guidelines and standards for the development, deployment, and use of AI technologies.
While this ongoing initiative is laudable, it is not yet clear to what extent it can effectively regulate AI. The latest version of the proposed bill provides a necessary level of flexibility on key issues such as AI systems transparency, data security, data governance, and risk management. However, such flexibility, which is critical for the law to adapt to technological evolution, must be matched with a mechanism that allows specification through regulation or standardization.
In the absence of such specifications, the law risks being ineffective. The recent Brazilian experience regulating data protection illustrates that the adoption of modern law and the establishment of a new regulatory authority are only the beginning of the regulatory journey. The usefulness of underspecified legislation could be jeopardized if the pressing task of specifying the law is delegated to a regulator that seems “ineffective by design.”
Importantly, these AI sovereignty enablers are interconnected and mutually reinforcing. This consideration is particularly relevant as legislators and governments around the world devise measures to regulate AI technology. Unfortunately, policymakers often overlook the importance of the KASE elements beyond regulation. Understanding the interconnectedness of the KASE and leveraging their interdependence through an integrated approach are essential to achieving AI sovereignty and avoiding digital colonialism.
However, such an approach seems to be absent from the current Brazilian strategic vision for AI. Indeed, the 2021 Brazilian Artificial Intelligence Strategy has been widely criticized for including only general considerations about how AI could be implemented in several sectors, without defining how policymakers could better coordinate, assess, and allocate responsibility for the strategy’s implementation.
The Brazilian administration should consider implementing these principles in the next revision of its strategic approach to AI. An integrated approach considering the KASE is instrumental to achieving AI sovereignty, developing indigenous AI capabilities, diversifying supply chains, increasing the digital literacy of the population, fostering strategic investments and partnerships, and safeguarding the security of critical AI infrastructure.
Not all countries will be able to elaborate and implement the strategic, policy, and institutional changes needed to build an AI sovereignty stack. Such an effort might be especially herculean for countries in the Global South, which typically depend on foreign technologies. However, a careful mix of creative thinking and much-needed political vision regarding technological development may allow low-income countries to overcome some of the most burdensome obstacles, for instance by using open-source software to reduce the financial costs of procuring foreign software. The elaboration of an AI sovereignty stack, therefore, should be seen as a goal that all governments should strive for, even if it will not be easily accomplished by every country.
Ultimately, countries that possess strong capabilities in the KASE areas are not only better positioned to maintain control over their AI technologies, policies, and data, but they also will likely increase their technological relevance, reducing dependence on external sources and preserving their national interests and autonomy in the AI landscape. Countries lacking such capabilities need to reconsider thoroughly their strategic approaches to AI in order to minimize the considerable risks of AI dependency that will likely exacerbate the already ongoing phenomenon of digital colonization.
1 The right to self-determination is a so-called primary principle, or principle of principles, as it plays an instrumental role in allowing individuals to enjoy their human rights, thus being an enabler of other fundamental rights. For this reason, it is enshrined in the first article of the Charter of the United Nations, the International Covenant on Civil and Political Rights, and the Universal Declaration of Human Rights. According to these three international legal instruments, states have agreed that “all peoples have a right to self-determination” and that “by virtue of that right they are free to determine their political status and to pursue their economic, social and cultural development.” It is essential to emphasize the relevance of the internal dimension of self-determination, that is, the individual right to freely determine and pursue one’s economic, social, and cultural development, including by independently choosing, developing, and adopting digital technologies.
2 For the purposes of this article, governance is defined as the set of processes and institutional mechanisms that stimulate, facilitate, organize, and coordinate the interactions of different stakeholders in a political space, allowing them to confront differing opinions and interests regarding a specific issue and, ideally, to arrive at the best possible regulatory solution. Regulation is understood as the product of governance, consisting of an ample range of instruments that can foster the stability and proper functioning of complex systems, where the presence of multiple actors with varying or divergent interests can naturally lead to instability and dysfunction.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.