Artificial general intelligence (AGI) is in the spotlight globally, both for its transformative potential and for the ethical and legal challenges it poses. The rapid advancement of the technology and the growing interest of governments, technology companies, and society at large have led to the creation of forums and regulatory frameworks aimed at ensuring that the development of these tools is safe, inclusive, and beneficial for all of humanity.
As technical development advances, international organizations and legislators strive to establish clear rules, agreed-upon standards, and recommendations for the stakeholders involved. The race to build AI systems that mirror human reasoning has accelerated the debate about their real impact, the risks they carry, and the opportunities they offer.
A scenario of technical advances and business ventures

Interest in AGI is reflected in the strong investment from companies like OpenAI and Meta, which seek systems capable of matching the flexibility and adaptability of the human intellect. Although there is still no fully agreed definition, the common goal is to develop models capable of addressing a wide variety of tasks, from problem-solving to complex decision-making. While current systems have achieved impressive milestones, such as models that beat chess champions or make accurate scientific predictions, they are still far from matching all the capabilities of the human mind.
Within this context, innovative experiments have been presented, such as the Centaur model, an AI system capable of replicating human behavior patterns in psychological experiments. Developed by an international team, this model has been trained to mimic human decisions across multiple contexts, demonstrating generalization and transfer capabilities that until recently seemed unattainable. However, criticism persists regarding the depth of its understanding: the fact that it can predict actions does not necessarily mean it "understands" the underlying mental mechanisms.
Regulations and governance: the European response
The European Union has taken the lead in regulating artificial intelligence with the publication of the AI Act and the recent Code of Practice on General Purpose AI. This code, the result of joint work by independent experts, industry, academics, and civil society representatives, offers voluntary guidelines designed to protect transparency, security, and copyright in the most powerful AI systems. The goal is to enable model providers to demonstrate compliance with legal and administrative obligations more easily, reducing bureaucratic burdens and increasing legal certainty.
The regulation establishes varying degrees of scrutiny based on the risks associated with each use of AI and imposes significant fines for non-compliance. Providers are required to assess and mitigate systemic risks, from threats to fundamental rights to security issues, and are encouraged to maintain clear and accessible documentation of all related processes.
Meanwhile, large technology companies have expressed reluctance to accept the regulations, considering them restrictive, and some industry players are asking for more time to adapt. However, the European Commission remains on course, defending the principle that innovation must go hand in hand with security and the protection of users.
Environmental, military and social impact of AGI
The deployment of artificial general intelligence technology presents not only technical challenges but also tangible impacts in areas such as the environment and international security. Electricity consumption linked to the development of advanced models is remarkably high, adding pressure to the global carbon footprint. The growth in the size and complexity of AI models requires increasingly powerful computing infrastructure, which can undermine sustainability goals if not managed properly.
On the other hand, the integration of AI into military systems raises major ethical and strategic questions. From automatic threat detection to the use of deepfakes for information manipulation, AI capabilities can both strengthen nuclear deterrence and introduce new vulnerabilities. Automatic errors or biases in early warning systems could, in the worst case, lead to disproportionate responses or even fatal misunderstandings in crisis situations.
Education, trust and multi-stakeholder participation
International institutions, such as the International Telecommunication Union (ITU), insist that responsible development of AGI must be accompanied by training and education efforts for users, professionals, and policymakers. Training programs and global coalitions seek to prepare society for the new challenges posed by artificial intelligence, promoting critical thinking and transparency in interactions with these technologies.
Collaboration among different actors (the public and private sectors, academia, and civil society) is considered essential to ensure that AGI advances equitably and with social purpose. The importance of adapting technological solutions to local contexts is emphasized, avoiding errors resulting from a lack of diversity in training data and promoting the development of international standards that enable interoperability and trust.
The development of artificial general intelligence presents multiple challenges, but also opportunities to improve the quality of life and advance key areas. International collaboration, appropriate regulation, and the active participation of all sectors will be essential for this technology to contribute positively to global well-being without leaving anyone behind.