Is AGI Safe? Research Map
This page is a structured guide to the AGI conversation: what AGI means, what used to count as AGI, what counts as AGI for many people today, what AGI may actually become, who is building it, what progress has been made, and which good or bad outcomes are plausible.
Mark each item you read. Progress is saved in your browser using local storage.
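As a minimal sketch of how such browser-side tracking can work (the storage key and item IDs here are assumptions for illustration, not this page's actual code):

```ts
// Minimal sketch of per-item progress tracking in localStorage.
// The key name and item IDs are illustrative assumptions.
const STORAGE_KEY = "agi-map-progress";

type Progress = Record<string, boolean>; // itemId -> has the reader marked it?

function loadProgress(): Progress {
  try {
    return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}") as Progress;
  } catch {
    return {}; // missing or corrupted state falls back to "nothing read yet"
  }
}

function markRead(itemId: string, read = true): void {
  const progress = loadProgress();
  progress[itemId] = read;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(progress));
}

function percentComplete(totalItems: number): number {
  const readCount = Object.values(loadProgress()).filter(Boolean).length;
  return totalItems === 0 ? 0 : Math.round((readCount / totalItems) * 100);
}
```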
Full progress
0/120 (0%)
What AGI Is
0/4 (0%)
Progress So Far
0/9 (0%)
Main Players
0/7 (0%)
Influential Voices
0/8 (0%)
Nature & Energy Impact
0/6 (0%)
Question Explorer
0/80 (0%)
Sources
0/6 (0%)
AGI does not have one settled definition; it is an evolving target. These frames help you navigate the definition drift that causes most public confusion.
Section progress
0/4 (0%)
AGI usually means AI that can learn and reason across many domains at a human-like or higher level, not just one task. There is still no single official global definition.
Earlier eras often framed AGI as 'machine intelligence in principle' and debated tests like broad conversational competence or general problem-solving, long before modern deep learning.
Today, people often use operational criteria: multimodal ability, transfer to new tasks, tool use, planning, and reliability under real-world constraints. Most researchers still treat current systems as pre-AGI or proto-AGI.
A practical AGI may arrive as a gradual capability stack rather than one sudden moment: stronger reasoning, agency, memory, and autonomy appearing at different speeds across domains.
AGI progress is best read as a timeline: capability advances, benchmark pressure, policy response, and safety framework updates happening together.
Section progress
0/9 (0%)
1950
Alan Turing's 'Computing Machinery and Intelligence' opens the field with the question of whether machine behavior can be judged as intelligent.
1955-1956
The Dartmouth workshop proposal suggests that aspects of intelligence can be described precisely enough for machines to simulate them.
2023
Governments and labs begin formalizing frontier AI risk language and cooperative safety commitments.
2024
The EU approves the AI Act, a risk-tiered regulation that includes obligations for high-risk systems and specific prohibited practices.
2025
New model generations from multiple labs improve reasoning, coding, and multimodal performance.
2025
Open-weight releases continue to narrow capability gaps in some areas while broadening access.
2025
Evaluation focus keeps moving toward harder tasks like software engineering, robust reasoning, and adaptation.
2025-2026
Major labs publish or revise frameworks for high-risk capability monitoring and deployment thresholds.
2026
A major status report compiles current evidence on capability trends, risk pathways, and governance priorities.
AGI development is not one company. Frontier labs, open-weight ecosystems, and regulators are all shaping the trajectory.
Section progress
0/7 (0%)
Frontier model developer
Publishes flagship general-purpose models and safety policy updates, including preparedness work tied to deployment decisions.
Frontier research and deployment lab
Develops Gemini model families, publishes AGI framing research, and updates a frontier safety framework.
Frontier model and safety-focused lab
Builds Claude models and publicly documents policy commitments through Responsible Scaling Policy updates.
Open-weight ecosystem driver
Pushes broad AI distribution through consumer products and an open-weight model strategy built around Llama.
Frontier model entrant
Competes on general-purpose model capability with rapidly iterated Grok releases.
European model builder
Focuses on fast model iteration and enterprise deployment options, including open and commercial releases.
Guardrail and policy institutions
NIST, OECD, EU institutions, UNESCO, and national governments shape how AGI-like systems may be governed and audited.
This section maps influential viewpoints across tech leadership, AI research, and philosophy. It is not an endorsement of any single position; it is a source-linked snapshot of the main arguments shaping the AGI debate.
Section progress
0/8 (0%)
Tech leadership (OpenAI)
Why influential: Leads one of the most prominent frontier AI labs.
Frames AGI as close enough to plan for now, while arguing for iterative deployment so society can adapt and safety work can evolve in practice.
Tech leadership (Anthropic)
Why influential: Runs a major frontier lab with strong public safety positioning.
Argues that powerful AI could generate extraordinary public benefits, but only if catastrophic-risk and governance problems are treated as first-class engineering goals.
AGI research (Google DeepMind)
Why influential: DeepMind co-founder and Chief AGI Scientist.
Pushes operational definitions of AGI progress (performance, generality, autonomy) and links progress tracking to risk-aware deployment decisions.
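One way to read 'operational definitions of AGI progress' is as a rubric that scores systems along these axes rather than asking a yes/no AGI question. The sketch below is illustrative only; the level labels and the example entry are assumptions, not an official scale or a real evaluation result.

```ts
// Illustrative encoding of an operational AGI-progress rubric.
// Level labels and the example entry are assumptions, not an official framework.
type PerformanceLevel = "emerging" | "competent" | "expert" | "virtuoso" | "superhuman";
type Generality = "narrow" | "general";
type AutonomyLevel = "tool" | "consultant" | "collaborator" | "agent";

interface CapabilityAssessment {
  system: string;
  domain: string;            // e.g. "coding assistance", "open-ended dialogue"
  performance: PerformanceLevel;
  generality: Generality;
  autonomy: AutonomyLevel;
  evidence: string[];        // benchmark results, field studies, audits
}

// Hypothetical example entry, shown only to make the rubric concrete.
const example: CapabilityAssessment = {
  system: "frontier-model-x",
  domain: "software engineering assistance",
  performance: "competent",
  generality: "narrow",
  autonomy: "consultant",
  evidence: ["internal eval suite", "third-party audit report"],
};
```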
AI science and safety governance
Why influential: Turing Award laureate and one of the most cited researchers in modern deep learning.
Warns that capability races without control methods and institutions are dangerous, and calls for stronger coordination, regulation, and technical safeguards.
AI pioneer
Why influential: Foundational deep-learning researcher with major public influence.
Publicly supports treating extreme AI risk as a global priority alongside other societal-scale threats.
Philosophy of mind
Why influential: Leading contemporary philosopher on consciousness and AI.
Argues current language models are probably not conscious, while emphasizing that machine consciousness is a serious live question requiring conceptual and empirical tests.
Moral and existential-risk philosophy
Why influential: Major figure in long-term risk and AI-risk discourse.
Argues that reducing existential risk can carry extraordinary moral value because it protects the entire long-term future of humanity.
Philosophy of mind and AI ethics
Why influential: Prominent philosopher focused on AI, consciousness, and identity.
Argues that as AI approaches human-level cognition in some domains, philosophy of mind becomes essential for evaluating machine consciousness and designing human-centered safeguards.
AGI infrastructure has a real environmental footprint. This section separates risks, opportunities, and action levers so the tradeoffs are clear and source-backed.
Section progress
0/6 (0%)
Risk
AI-heavy data centers are increasing electricity demand quickly, which can strain grids and make decarbonization harder if added demand is met by high-emission generation.
Risk
Even when data centers are a modest share of global electricity use, they are geographically concentrated, so local communities can face outsized pressure on grids and infrastructure.
Risk
Cooling systems, chip supply chains, and hardware turnover can increase water stress, mineral extraction pressure, and hazardous e-waste unless mitigated with better design and policy.
Opportunity
AI applications in industry, transport, and buildings can unlock substantial emissions reductions if adoption is broad and paired with strong enabling policies.
Opportunity
Grid forecasting, demand flexibility, and system optimization can improve renewable integration and reliability, reducing waste and curtailment in power systems.
Action
Transparency, efficiency standards, cooling choices, cleaner power procurement, and location planning determine whether AI infrastructure worsens or improves environmental outcomes.
Navigate by group. Open any question to see a direct answer, then use the linked sources to go deeper.
Section progress
0/80 (0%)
These are the first questions most people ask before discussing risks or benefits.
Group progress
0/8 (0%)
AGI is AI that can generalize, reason, and adapt across many tasks instead of being narrowly specialized.
No. Current systems can be powerful but still show uneven reasoning, limited reliability, and domain-specific weaknesses.
Not yet. Different labs and researchers use different thresholds, which is why people in AGI debates often talk past each other.
No. Benchmarks are useful signals, but broad real-world robustness and transfer matter more than single test scores.
Most technical definitions do not require consciousness; they focus on capability and behavior.
Most current framing treats it as gradual, with different abilities improving at different times.
Definitions shift because model capabilities shift, so old tests become too easy or too narrow.
AGI usually means roughly human-level generality, while superintelligence implies capability that substantially exceeds human experts across most domains.
People ask where the current frontier is and whether progress is slowing, steady, or accelerating.
Group progress
0/8 (0%)
There is no broad scientific consensus that AGI has been reached; evidence shows major progress but still meaningful gaps in reliability and generalization.
Reasoning quality, coding assistance, multimodal understanding, and tool-using workflows have all improved substantially.
Consistent long-horizon planning, strong robustness under distribution shift, and dependable truthfulness remain difficult.
They are necessary but insufficient because systems can overfit benchmark styles and still fail in messy real environments.
They are useful for one important slice of intelligence, but not a full proxy for broad scientific, social, and physical-world competence.
Yes. Compute, data quality, and algorithmic improvements jointly drive capability gains.
It could. Technical bottlenecks, data limits, cost, power constraints, and regulation can all slow deployment speed.
Yes. New training methods, better model architectures, and stronger tool-use loops could create sudden capability jumps.
This group covers expected benefits people hope AGI-like systems will deliver.
Group progress
0/8 (0%)
Yes, especially for hypothesis generation, simulation, and narrowing huge search spaces in biology, materials, and physics.
Potentially, through better diagnostic support, workflow automation, and drug discovery acceleration if safely validated.
Likely yes. Routine digital tasks are among the first to be automated or heavily assisted.
It could provide personalized tutoring and translation support, especially where teacher access is limited.
It can improve grid forecasting, materials research, and system optimization, though its own energy footprint also matters.
Yes, if tool costs keep falling and open ecosystems remain healthy, productivity gains can spread beyond frontier labs.
Yes. Better speech, vision, translation, and adaptive interfaces can help people with disabilities and language barriers.
Historically, automation has replaced some tasks while also creating new roles around orchestration, safety, and AI-enabled services.
These are concern-driven questions, from immediate harms to low-probability catastrophic scenarios.
Group progress
0/8 (0%)
Yes. Lower-cost generation of persuasive text, audio, and video can increase manipulation at scale.
Potentially. More capable systems can help defenders, but they can also lower barriers for attackers.
Yes, if gains are concentrated among firms and countries that control compute, data, and distribution channels.
Yes. Reliability remains imperfect, so over-trusting AI in medicine, law, or critical infrastructure can cause serious harm.
Yes. AI can make large-scale tracking and profiling cheaper unless governance and rights protections are strong.
Yes. Coordinated influence operations, synthetic media, and narrative targeting can stress democratic processes.
Some researchers argue the risk is serious enough to treat as a global priority, though uncertainty remains high and the claim is debated.
Yes. Optimization errors, brittle goals, and deployment mistakes can produce large unintended consequences.
These questions focus on whether advanced models can be made predictable, steerable, and auditable.
Group progress
0/8 (0%)
It means shaping model objectives and behavior so systems reliably follow human intent and constraints.
Probably not. It is an ongoing engineering and governance process because capability and deployment contexts keep changing.
Most experts treat current testing as necessary but incomplete, especially for novel capabilities and long-horizon behavior.
Preparedness frameworks evaluate dangerous capability thresholds and define safeguards or restrictions before deployment (a minimal gating sketch follows this question group).
No absolute guarantee exists today; the goal is risk reduction through evaluation, monitoring, and containment measures.
They can increase the misuse surface, but they can also improve transparency, third-party audits, and competition in safety tools.
Audit capacity is improving, but independent technical capability and international coordination are still limited.
Good policy aims to slow only high-risk deployment pathways while allowing low-risk productivity gains.
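As flagged above, one hypothetical way a capability-threshold deployment gate could be expressed in code is sketched below. The capability names, scores, and safeguards are assumptions for illustration, not any lab's actual policy or evaluation data.

```ts
// Hypothetical deployment gate keyed on capability-evaluation scores.
// Capability names, thresholds, and safeguards are illustrative assumptions.
interface EvalResult {
  capability: "cyber-offense" | "bio-uplift" | "autonomous-replication";
  score: number; // normalized 0..1 from an evaluation suite
}

interface Threshold {
  capability: EvalResult["capability"];
  maxScore: number;             // above this score, extra safeguards are required
  requiredSafeguards: string[]; // e.g. restricted access, staged rollout, external review
}

// Returns the safeguards that must be in place before deployment;
// an empty list means no threshold was crossed.
function requiredSafeguards(results: EvalResult[], thresholds: Threshold[]): string[] {
  const required = new Set<string>();
  for (const t of thresholds) {
    const r = results.find((res) => res.capability === t.capability);
    if (r && r.score > t.maxScore) {
      t.requiredSafeguards.forEach((s) => required.add(s));
    }
  }
  return Array.from(required);
}
```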
People often ask what bad actors can do with stronger AI and how defenses can keep up.
Group progress
0/8 (0%)
Yes, advanced models can accelerate software generation, including potentially harmful code if guardrails fail.
This is a major concern area; labs and governments monitor high-risk biological capabilities and tighten access controls.
Yes. Personalized phishing, voice cloning, and fake support flows can be automated at lower cost.
Not alone. It helps in some workflows, but attackers can remove or avoid watermarks.
They can reduce some risk, especially for high-capability systems, but controls are imperfect and require constant updates.
Yes. AI can improve monitoring, incident response, and automated hardening if deployed responsibly.
It can, but resource asymmetries remain a challenge for independent defenders.
Both. Useful offensive and defensive assistance already exists, while frontier autonomy risk remains a forward-looking concern.
This group covers common concerns about employment, wages, and who captures value.
Group progress
0/8 (0%)
A full 'all jobs' scenario is unlikely in the near term; task replacement and job redesign are more realistic than universal unemployment.
Roles with repetitive digital tasks, structured communication, and routine analysis tend to be exposed earlier.
Both are possible: productivity can raise wages in some sectors while automation pressure can reduce bargaining power in others.
Yes, if gains translate into broad access, investment, and social policy instead of concentrated rent capture.
Outcomes depend on access to infrastructure, education, and affordable tools; divergence is possible without targeted policy.
Yes. Lower-cost automation can let small teams perform work that previously required larger organizations.
Yes. Digital reasoning, critical thinking, and AI-assisted workflows are becoming baseline skills.
It is one policy option discussed in high-automation scenarios, but there is no global consensus that it is the only answer.
These questions ask who sets the rules and whether global coordination can keep up with capability growth.
Group progress
0/8 (0%)
Regulation is likely to be shared across national governments, sector regulators, standards bodies, and international forums.
Only partially. Frontier development is global, so unilateral policy has limits without cross-border cooperation.
It is a major foundation, but frontier capabilities may still require iterative updates, standards, and enforcement capacity.
Possible, but difficult. Trust, verification, and strategic competition make treaty design complex.
Yes, especially if states view frontier AI as decisive for military, economic, or intelligence advantage.
Verification improves with independent audits, model evaluations, and disclosure standards, but remains incomplete.
A mixed approach is emerging: openness for low-risk work and tighter controls for dangerous capability pathways.
Yes, but they work best when backed by enforceable rules, procurement standards, and market incentives.
These are the questions people ask about family, school, work, and trust.
Group progress
0/8 (0%)
Use AI as support, not final authority, in high-stakes decisions. Independent verification remains essential.
More likely it will restructure these professions by automating parts of the workflow while humans keep accountability roles.
With guidance and limits, yes. Without guidance, risks include over-reliance, privacy leakage, and misinformation.
It can, depending on data policies. Privacy outcomes depend on governance, product design, and user controls.
Yes, especially for personalized explanation and practice, but quality depends on source grounding and your own verification habits.
It may support low-intensity guidance, but clinical care still requires trained professionals and safety oversight.
In most cases, upgrading skills for AI-enabled workflows is safer than panic-switching careers.
Learn practical AI literacy, verify outputs, protect personal data, and follow policy developments in your region.
People really ask these. They are useful because they reveal hidden fears about control, identity, and power.
Group progress
0/8 (0%)
Software can already automate management decisions, but legal accountability still sits with humans and organizations.
No evidence supports an immediate takeover scenario; institutional change is slower and mediated by law and infrastructure.
No. Systems do not independently gain control of critical infrastructure without human operators and permissions.
Not in a literal telepathy sense. Privacy risks are mostly about data collection and inference, not mind-reading physics.
Recommendation systems can influence behavior, but personal choices and platform governance still matter.
The practical concern is not movie-style evil intent, but harmful behavior from poorly specified goals or malicious human use.
Policy may restrict unsafe manual driving in some contexts, but those decisions would be political and legal, not AGI decrees.
A better response is informed risk management: policy engagement, technical safety work, and practical resilience planning.
Primary sources and official publications used to build this page.
Section progress
0/6 (0%)