Zhivko Georgiev, G Consulting, Bulgaria (in person)
Lyubomir Ivanov, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Bulgaria (in person)

The article examines the boundaries of the cognitive and epistemic capacities of artificial general intelligence (AGI), rejecting both excessive technological optimism and radical skepticism. Its central claim is that AGI should not be understood as an autonomous arbiter of truth. Although contemporary AI systems possess considerable analytical power and are already transforming scientific and professional labour, their cognitive effectiveness remains structurally dependent on pre-existing data, models, interpretive frameworks, and external regimes of verification. Access to vast quantities of information must therefore not be conflated with access to truth. Knowledge is not merely an accumulation of facts, but content that has been tested, contextualized, interpreted, and institutionally validated. For this reason, the article insists on a clear distinction between processing existing knowledge and producing genuinely new, reliable, and empirically confirmed knowledge. Furthermore, the article distinguishes between AGI as a theoretical horizon and contemporary advanced AI systems, including large language models and tool-augmented or agent-based systems, which constitute the empirical basis of the analysis.

The argument is developed through a critique of two dominant positions in public discourse on artificial intelligence. On the one hand stands a maximalist view, according to which AGI is on the verge of replacing the scientist, the expert, and the analyst. On the other stands the claim that AI will never become anything more than a statistical mechanism for rearranging already existing information. Against both extremes, the article proposes an intermediate and more plausible position: today’s advanced AI systems are indeed transforming intellectual labour, but not as fully independent cognitive agents. Rather, they function as infrastructures for accelerated analysis, synthesis, search, and recombination across large bodies of information. In this sense, artificial intelligence is neither a magical substitute for human thought nor a useless statistical automaton. It is a powerful instrument whose effectiveness unfolds within already existing epistemic and institutional conditions. From this follows the article’s core epistemological conclusion: without sustained links to observation, experimentation, and external validation, even the most advanced AI remains dependent on truths that it has not established for itself.

Quantum mechanics is presented as a paradigmatic example of this dependency. Despite its extraordinary predictive success, its interpretations remain deeply contested. Here the limitations of purely textual and logical analysis become particularly visible. A system that operates on symbolically and digitally represented knowledge may organize arguments, classify hypotheses, and identify logical gaps, but it cannot independently decide which interpretation is correct when decisive evidence requires new empirical input. In scientific disputes involving competing hypotheses that remain partly compatible with existing observations, what ultimately matters is not further textual recombination, but new facts, new measurements, and new experimental regimes. For this reason, AGI, insofar as it operates primarily on already available knowledge, should be regarded more as an accelerator of research than as a final judge between rival explanatory frameworks.

At the same time, the article rejects the stronger skeptical claim that artificial intelligence is inherently incapable of contributing to scientific discovery. On the contrary, it argues that there is already substantial empirical evidence that AI can play a meaningful role in the scientific process. Examples such as AlphaFold demonstrate that AI systems are capable of producing results of high scientific value, while systems such as Co-Scientist show that language models connected to search tools, code execution, documentation access, laboratory automation, and observational instruments can participate in complex experimental tasks. The conclusion is formulated carefully: AI does not replace the scientist, but when integrated with instruments of observation and verification, it can become a substantive component of the cycle linking hypothesis, experiment, and result. The problem, therefore, is not an absolute inability to move beyond existing knowledge, but the fact that such movement depends on integration with the material world, with experimental practice, and with external mechanisms of validation. Without such integration, AI remains an intelligent system of recombination; with it, it may contribute to the discovery of new regularities.
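To make this cycle concrete, the following minimal sketch shows the hypothesis-experiment-result loop in schematic form. It is a toy construction for this summary, not the architecture of Co-Scientist or any real system: the function propose stands in for a model generating hypotheses, and experiment stands in for an external instrument whose measurements cannot be derived from text alone.

```python
# Toy sketch of a hypothesis-experiment-result loop. All names here are
# hypothetical stand-ins; no real AI system or laboratory API is used.
import random

def experiment(slope_guess):
    """External 'measurement': noisy error of a guess against a hidden law.
    The hidden slope stands for a fact the model cannot derive from text
    alone; only measurement reveals it."""
    hidden_slope = 2.5
    return abs(slope_guess - hidden_slope) + random.gauss(0, 0.01)

def propose(history):
    """Stand-in for the model: refine the next guess from past results."""
    if len(history) < 2:
        return random.uniform(0.0, 5.0)
    # Interpolate between the two best-performing guesses so far.
    (g1, _), (g2, _) = sorted(history, key=lambda h: h[1])[:2]
    return (g1 + g2) / 2 + random.gauss(0, 0.1)

history = []
for step in range(50):
    guess = propose(history)
    error = experiment(guess)      # external validation, not recombination
    history.append((guess, error))
    if error < 0.05:               # empirically confirmed within tolerance
        print(f"step {step}: slope ~ {guess:.3f} validated by measurement")
        break
else:
    print("no hypothesis validated within the experimental budget")
```

The point of the sketch is structural rather than technical: the loop can converge only because experiment supplies feedback from outside the model's own representations, which is precisely the integration the article identifies as decisive.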

A particularly important dimension of this article is its differentiation between domains of knowledge. In the natural sciences, the performance of AI is easier to assess because these disciplines more often involve clearly defined objects, standardized datasets, and robust procedures of validation. In the humanities and social sciences, the situation is more complex, since their subject matter includes meanings, values, historical context, cultural specificity, and linguistic ambiguity. As the article emphasizes, the difference is not one of kind but of degree: uncertainty and the need for empirical grounding exist in both spheres, yet in the social sciences and humanities the weight of context is significantly greater. Consequently, AGI may be highly useful in legal research, document analysis, large-scale data processing, scenario modelling, and research support, but it should not be treated as the final bearer of contextually valid understanding. The more decisive historical depth, value interpretation, and social meaning become, the more indispensable human judgment and oversight remain.

Also central to the article is the distinction between information and reliable knowledge. The principal resource of AI systems is information, but not all information qualifies as validated knowledge. From this follows the risk of what the article calls “illusions of understanding”: situations in which the quantity and apparent sophistication of generated analyses create the impression of deep insight without genuinely delivering it. This is compounded by the danger of so-called model collapse. When generative systems are recursively trained on content produced by earlier generations of models, quality may deteriorate and representations of reality may become progressively distorted. In a broader sense, this means that an informational environment saturated with synthetic, unreliable, or difficult-to-verify content does not merely hinder artificial intelligence; it may systematically undermine its cognitive reliability. Thus, the epistemic question of truth becomes inseparable from the problem of vulnerability to disinformation, data poisoning, and recursive cycles of informational contamination.
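The mechanism of model collapse can be illustrated with a deliberately simple numerical sketch (a toy construction for this summary, not a model taken from the article): each generation of a “model” is fitted only to a finite sample drawn from the previous generation's fit, so sampling error compounds instead of being corrected against the original data.

```python
# Toy illustration of recursive training on synthetic data: each generation
# fits a Gaussian to samples produced by its predecessor, never revisiting
# the original generation-0 distribution.
import random
import statistics

def fit(samples):
    """Fit a Gaussian by moment matching (sample mean and stdev)."""
    return statistics.mean(samples), statistics.stdev(samples)

mu, sigma = 0.0, 1.0              # generation 0: the "real" data distribution
n = 20                            # finite training sample per generation
for gen in range(1, 13):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu, sigma = fit(samples)      # train only on the previous model's output
    print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```

Even in this minimal setting, the fitted distribution drifts away from the original one, and its variance tends to shrink across generations; nothing in the loop reconnects the model to the original data, which is the structural point the article makes about informational environments saturated with synthetic content.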

From here the article turns to its second major field of concern: cybersecurity and the asymmetrical distribution of risks. According to the authors, the most serious danger lies not necessarily in the abstract idea of a machine “escaping control,” but in the question of who will use such systems and for what purposes. AI may lower the threshold for planning, automating, and scaling cyberattacks, assisting in reconnaissance, code generation, and the coordination of malicious operations. This implies that the benefits of advanced AI systems will not be distributed evenly. Wealthier states and larger organizations will be better positioned to invest in defense, whereas weaker actors may become disproportionately vulnerable. The risk is further intensified by the quantum factor: the pressure that quantum computing places on existing public-key cryptographic standards is already institutionally recognized, as ongoing post-quantum standardization efforts attest. Combined with the forms of automation enabled by AI, this may produce a qualitatively new level of security risk.
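The reasoning behind this quantum factor can be made concrete with a small illustrative sketch; the figures below are conventional security estimates from the cryptographic literature, not results from the article. Shor's algorithm, run on a sufficiently large quantum computer, would break widely deployed public-key schemes outright, while Grover's search roughly halves the effective strength of symmetric keys.

```python
# Illustrative only: conventional estimates of quantum impact on common
# cryptographic primitives. Shor's algorithm breaks RSA and elliptic-curve
# schemes; Grover's search roughly halves effective symmetric strength.
classical_strength = {
    "RSA-2048":  112,   # conventional classical security estimate, in bits
    "ECC P-256": 128,
    "AES-128":   128,
    "AES-256":   256,
}

def quantum_effective(name: str, bits: int) -> int:
    if name.startswith(("RSA", "ECC")):
        return 0            # broken outright by Shor's algorithm
    return bits // 2        # Grover halves effective symmetric strength

for name, bits in classical_strength.items():
    print(f"{name:9}: classical ~{bits} bits -> quantum ~{quantum_effective(name, bits)} bits")
```

The asymmetry visible here mirrors the article's broader argument: actors able to migrate early to post-quantum standards retain their security margins, while those without such resources face a disproportionate loss.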

In conclusion, this article rejects both the mythologization and the dismissal of AGI. It acknowledges that future systems may extend current limits through self-improvement, inter-agent coordination, and deeper integration with the physical world, yet it also stresses that there is presently no basis for speaking of a fully autonomous, uncontrolled, and epistemically self-sufficient AGI. The more plausible trajectory is one of increasingly complex sociotechnical assemblages in which human control, institutional regulation, and infrastructural constraints remain indispensable. Nonetheless, historical developments in such domains are often non-linear, with gradual changes sometimes leading to abrupt transformations.

The most significant future questions, therefore, do not concern technical capability alone. They also concern the quality of information, resilience against disinformation, cybersecurity, the institutional organization of knowledge, and the geopolitical distribution of epistemic power. As the article persuasively demonstrates, the future of AGI is not only a technological issue, but also an epistemic, institutional, and political one.