RC-AGI 2026 will be a multidisciplinary conference bringing together experts in diverse areas, including Computer Science, Mathematical Logic, Philosophy, Futurology and Law.
- Is the creation of AGI realistic, or does it belong to the realm of science fiction?
- Does it make sense to discuss the consequences of AGI's emergence, given that AGI may prove impossible?
- Definition of AGI: Is AGI a computer program and, if so, what are the characteristics of that program?
- Is there a difference between AGI and Superintelligence? Will we first create intelligence at a human level, with incomparably greater intelligence appearing only some time later?
- What will the world look like when AGI is here?
- What do we want the future to be? What do we expect from AGI? What do we want the world to become?
- Do we wish the world to change dramatically and become much better and fairer, or, conversely, are we conservative and prefer things to remain as unchanged as possible?
- Should AGI be obedient and, if so, whose orders should it obey? The orders of its creators? But who are the creators – those who wrote the program code or those who paid for the writing of that code? Should AGI be ready to do anything we tell it to do, or should there be things it must never do, regardless of who gives the orders?
- Should AGI be an Open-Source project?
- Does the creation of AGI involve hazards? Can something go wrong, so that it turns out we have not created the right AGI?
- Should we create AGI hastily? Are we too eager to reap the benefits AGI will bring? Can it happen that "haste makes waste", i.e. how likely is it that hasty creation will compromise the quality of the AGI we create?
- Are there ways to slow down the creation of AGI in order to prevent potential errors? If so, what might these ways be?
- Can we simply forget about creating AGI and continue to live in a world where humans think and work for themselves, rather than expecting someone else to do it for them?
- Once we create AGI, will we be able to improve it? Will we be able to make significant changes? Should we set limits on what we can do in order to protect ourselves from potential problems?
- Should the AGI creation process be subject to regulation? How can this process be regulated?
- Should AGI be patentable?
- Does it make sense to set rules for how AGI should behave? Or is that meaningless, because AGI will be too smart and too powerful to follow the rules we may try to impose after we have already created it?
- When creating AGI, can we embed in it certain rules which AGI will be forced to obey and will not be able to override?
- How can we embed rules in AGI? What kind of character do we want our AGI to have and how can we embed that character in it?
- Some AGI character traits are already known: they have been described, and we know how to regulate them. Which are they? And which traits do we still have to describe and regulate? One example of an already known character trait is greed: in reinforcement learning (RL), greed is regulated by the discount factor. Another known trait is curiosity: again, in RL we set a factor that determines the extent to which the agent will try something new in order to gain more experience or, conversely, will continue to operate on the basis of the experience it has already gained (a minimal code sketch of both knobs follows this list).
- Multi-agent models. In this case, how will AGI treat the other agents? Should AGI be communicative? Will it be friendly and helpful? Should it be obedient and, if so, whose orders should it obey? Should AGI be stern or pliable?
- The World Model (WM) approach vs. the Large Language Models (LLM) approach.
- How can we ensure that AGI is smart? Should AGI be able to understand what is going on and make test runs of possible future developments (the WM approach), or should it simply "guess" the right action on the basis of approximation (the LLM approach)? Should AGI think in a single-step or a multi-step manner? (A second sketch contrasting the two approaches follows this list.)
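
The two "character traits" mentioned above correspond to concrete numeric knobs in standard reinforcement learning. The sketch below is a minimal tabular Q-learning agent written purely for illustration; the environment, parameter values, and names such as `QLearningAgent` are assumptions of this example, not part of the conference material. It shows where greed (the discount factor `gamma`) and curiosity (the exploration rate `epsilon`) enter the algorithm.

```python
import random
from collections import defaultdict

class QLearningAgent:
    def __init__(self, actions, gamma=0.9, epsilon=0.1, alpha=0.5):
        self.actions = actions
        self.gamma = gamma      # "greed": 0 = only immediate reward matters, near 1 = far-sighted
        self.epsilon = epsilon  # "curiosity": probability of trying a random action
        self.alpha = alpha      # learning rate
        self.q = defaultdict(float)  # Q-values indexed by (state, action)

    def act(self, state):
        # With probability epsilon, explore (curiosity); otherwise exploit known values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # One-step Q-learning update; gamma discounts the value of the future.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

if __name__ == "__main__":
    # Toy interaction: going "right" from state 0 reaches state 1 and pays 1.0.
    agent = QLearningAgent(actions=["left", "right"], gamma=0.9, epsilon=0.2)
    for _ in range(100):
        a = agent.act(0)
        reward, next_state = (1.0, 1) if a == "right" else (0.0, 0)
        agent.learn(0, a, reward, next_state)
    print({a: round(agent.q[(0, a)], 2) for a in agent.actions})
```

Tuning `gamma` and `epsilon` changes how short-sighted or how inquisitive the same agent behaves, which is the sense in which these traits are already described and regulated.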

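To make the WM-vs-LLM contrast concrete, here is a deliberately simplified sketch; everything in it, including the toy `step` and `pay` functions and the policy table, is an illustrative assumption. The "LLM-style" agent answers in a single step by looking up an approximate policy, while the "WM-style" agent uses a model of the world to test-run future developments a few steps ahead and picks the action whose simulated future looks best.

```python
# LLM-style: answer in one step, by approximation/lookup, without simulating the future.
def act_by_approximation(state, policy_table):
    return policy_table[state]  # the "guess" learned from past data

# WM-style: "test run" possible future developments with a world model
# and pick the action whose simulated future looks best (depth-limited lookahead).
def act_by_simulation(state, actions, transition_model, reward, depth=3):
    def value(s, d):
        if d == 0:
            return 0.0
        return max(reward(s, a) + value(transition_model(s, a), d - 1) for a in actions)
    return max(actions, key=lambda a: reward(state, a) + value(transition_model(state, a), depth - 1))

if __name__ == "__main__":
    # Toy world: states are integers; moving "+1" out of state 2 is the only payoff.
    actions = ["stay", "+1"]
    def step(s, a): return s + 1 if a == "+1" else s
    def pay(s, a): return 1.0 if (s == 2 and a == "+1") else 0.0

    print(act_by_approximation(0, {0: "stay"}))               # the memorised one-step guess
    print(act_by_simulation(0, actions, step, pay, depth=3))  # multi-step lookahead discovers "+1"
```

The single-step agent can only be as good as its stored approximation, whereas the multi-step agent trades computation for foresight, which is one way to frame the WM-vs-LLM question above.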