1 Introduction

It is widely maintained that humanity today is united into one entity whose brain represents the collective knowledge of humanity, with a collective intelligence far greater than that of any individual (see Wechsler, 1971; Muthukrishna & Henrich, 2016; Mulgan, 2018). According to Ray Kurzweil, the continuous development and growth of knowledge follows an exponential curve, culminating in a point known as the technological singularity. Crossing this threshold will substantially change human reality (see Kurzweil, 2005). The concept of singularity, understood as a technological breakthrough that brings unforeseeable changes, is not new and was already present in the middle of the twentieth century in the views of John von Neumann, the father of modern computer science (see Ulam et al., 2013). Technological singularity is frequently linked or even identified with the concept of an intelligence explosion, a hypothetical scenario in which an artificial intelligence system starts to improve its own intelligence in a recursive manner (e.g., Good, 1966). This self-improving cycle yields an artificial superintelligent agent that surpasses humans in intelligence quantitatively as well as qualitatively and thus possesses unimaginable cognitive and knowledge boundaries. The emergence of a general form of intelligence that exceeds human understanding and control will mark the moment of reaching the technological singularity. Thus, since the 1960s, the emergence of artificial general intelligence (AGI) has been closely linked to knowledge growth and technological development.

Kurzweil solidified this idea by tying the exponential curve of technological development to Moore’s law and computational capacity (Kurzweil, 2005). Moore’s law states that the number of transistors on a microprocessor, and thus computational power, doubles approximately every two years (Moore, 1965; Kurzweil, 2005). Since modern processors already consist of billions of transistors, the prospect of an exponential increase in the number of transistors on a microprocessor raised reasonable expectations of an increase in computational power large enough to cross the threshold of technological singularity (Chalmers, 2016; Bostrom, 2014; Müller & Bostrom, 2016). However, the growth of computational power in processors has slowed down recently, as the technological limits of packing transistors onto surfaces as small as microprocessors are being reached (Mack, 2011). Consequently, quantum technologies, which are unburdened by the space limitations of transistors, have gained widespread interest as potential successors capable of sustaining exponential growth in computational capacity (Möller & Vuik, 2017).

Indeed, to date, quantum computers have already shown outstanding improvements over classical machines in a variety of computational problems that demand extensive computing resources, such as integer factorization, combinatorial optimization, differential equation solving, and simulations of many-body systems; they have also shown significant acceleration of various artificial intelligence (AI) algorithms (for a review, see Bharti et al., 2022). In 1998, Nick Bostrom linked superintelligence development with an increase in computational power driven by the development of quantum technology (Bostrom, 1998). Later, the opinion spread that quantum computers will solve problems that no classical computer could solve, a capability called quantum supremacy (Preskill, 2013). Increasing computational power is not the only route proposed toward superintelligence; another is the enhancement of human cognition itself. Examples of cognitive enhancements include pharmacological neuroenhancement, i.e., the use of psychoactive substances by healthy individuals to enhance vigilance, concentration, memory or mood; genetic enhancement, i.e., the use of genetic engineering to change physical appearance or metabolism or to improve physical and mental abilities (Kutt et al., 2015); and the use of cognitive technology that actively affects human cognitive processes (Gunia & Indurkhya, 2017). All processes intended to result in enhancement must be voluntary and must go beyond the natural and nonpathological limitations of human cognition (Hauskeller, 2014). However, these routes seem time-consuming and ineffective when measured against the main assumptions of superintelligence.

Collective superintelligence is a form of group intelligence that emerges from the gradual improvement of networks and organizations that link individual human minds together with various artifacts. Instead of a single individual, an entire system of properly organized and networked individuals might attain a form of superintelligence (Bostrom, 2014). This idea includes a broad vision of networked devices connected to the brain. Ray Kurzweil expressed his belief that nanobots, nanosized robots, will inhabit human bodies, keep them healthy, and continuously connect human brains to a collective cloud (Kurzweil, 2005). Understood in this way, superintelligence seems easily achievable, but such a solution would require computation on an unprecedented and probably unattainable scale.

Another way superintelligence may emerge is mind uploading, or whole-brain emulation. Mind uploading is the process of transferring the mind from a biological brain to another substrate (Koene, 2013). The process involves scanning the physical structure of the brain to create an emulation of a mental state, including long-term memory and human self-identity. The data obtained are then transferred or copied to a digital format (Goertzel & Ikle, 2012). Such an approach requires combining knowledge from various disciplines, including neuroinformatics, neurobiology, nanotechnology, virtual reality, and philosophy of mind, to copy every function of each neuron as well as the more general mind functions that emerge from neuron interactions. The key factor within this idea is the medium – hardware and software – into which the mind will be transferred. It is unknown whether the capabilities of classical computers based on transistor processors are sufficient to emulate the mind, or whether mind uploading requires a different solution, such as quantum computers.

Superintelligence usually refers to artificial superintelligence. Bostrom (2014) noted that artificial superintelligence starts with a computer that is marginally smarter than a human and ends with an AI system a billion times smarter. Good Old-Fashioned Artificial Intelligence (GOFAI) systems, in his opinion, did not focus on learning, uncertainty, or concept formation, perhaps due to technical limitations. In contrast, the ability to learn would be an essential feature of any system that aims to attain artificial superintelligence. It should be emphasized that AI does not have to resemble the human mind and that artificial superintelligence can thus develop on different principles than human intelligence.

Bostrom (2014) highlights different forms of superintelligence: (1) speed superintelligence: a system that can perform all the cognitive tasks that the human intellect can, but much faster. It would be the simplest form of brain emulation, one that exceeds human intelligence quantitatively but not qualitatively; (2) collective superintelligence: a system composed of a large number of smaller intellects in such a way that the overall performance of the system across many general domains is far superior to that of all current cognitive systems. Such a form of superintelligence would bind various intelligent structures, both human and non-human, and could be constantly improved with new components. Nevertheless, as mentioned before, collective superintelligence seems difficult to achieve; (3) quality superintelligence: a system that is both quantitatively and qualitatively superior. It is a form of superintelligence that goes beyond human capacity, combining elements of animal intelligence with other cognitive abilities that are currently incomprehensible. As in the previous case, the creation of such a system would require enormous computing power, which is not available right now. Although Bostrom (2014) did not focus on the technical aspects, he expressed his conviction that superintelligence will be realized digitally, through both the hardware and software advantages of forms capable of running on computers (see Table 1). In conclusion, artificial superintelligence is supposed to be fast and flexible, and it is assumed to give rise to a new quality of cognition.

Table 1 Hardware and software advantages of non-biological intelligence according to Bostrom, 2014

Superintelligence raises many doubts. It is unclear whether its actions will be sufficiently explainable to humans or whether they will become a source of existential risk (Bostrom, 2013). It remains an open question which medium will enable the implementation of superintelligence, assuming its most probable source is AGI. Both classical computers and the computing power of the brain have limitations, and the combination of these two mediums is difficult to implement. Even if the functionality of the biological and digital mind could be similar, the physical structure and underlying phenomena are different. With recent advances in quantum physics, and thus in quantum computing, the question arises as to whether quantum computers can be used to implement the properties attributed to superintelligence. In the following sections, we investigate this issue.

3 Quantum computers

Despite the name, quantum computers have little in common with the computers used every day. Their foundations were established in the 1980s by Paul Benioff, Richard Feynman, and David Deutsch (Benioff, 1982; Deutsch, 1985; Feynman, 1982), but recent technological advances have brought an intense acceleration in the development of quantum computing technologies (for a review, see Ladd et al., 2010).

3.1 Principles of quantum computing: qubit, superposition, and entanglement

The fundamental unit of information in quantum information processing is the qubit, a quantum analog of the bit in classical computers. A bit is a logical unit that represents the state of a system; this logical unit can take one of two values, usually referred to as 0 and 1. These values can be physically realized in various ways, e.g., through a voltage change. Just as the principles of classical physics underlie the state of a bit, quantum physics governs the state of a qubit. In contrast to classical physics, in quantum physics the state of a system is undetermined until it is measured; thus, the state of a system before measurement is expressed in terms of the probability of finding it in each possible state. As a result, the qubit has no definite value of 0 or 1, but is a linear combination, that is, a superposition, of the basis states \(|0\rangle\) and \(|1\rangle\), where the states \(|0\rangle\) and \(|1\rangle\) are quantum equivalents of the classical states 0 and 1. This superposition is expressed as \(|\psi\rangle= \alpha |0\rangle+ \beta |1\rangle\), where \(|\psi\rangle\) is the wave function describing the qubit, and the coefficients \(\alpha\) and \(\beta\) are complex probability amplitudes whose squared magnitudes give the probabilities of finding the particle in \(|0\rangle\) and \(|1\rangle\), respectively, upon measurement. Such an intuitive representation of a complex phenomenon was proposed by Erwin Schrödinger in 1926 and later refined by Max Born (Born & Wiener, 1926). Since \({\left|\alpha \right|}^{2}\) and \({\left|\beta \right|}^{2}\) denote probabilities, they are subject to the law of total probability and must fulfill the requirement \({\left|\alpha \right|}^{2}+ {\left|\beta \right|}^{2}= 1\) (see quantum law of total probability; Yang, 2022). Physically, a qubit can be realized by any quantum system with two distinguishable basis states, e.g., two energy levels of an atom.
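To make the normalization condition concrete, the following minimal numerical sketch (assuming the NumPy library; the amplitude values are illustrative and not taken from the text) represents a qubit as two complex amplitudes, normalizes them so that \({\left|\alpha \right|}^{2}+ {\left|\beta \right|}^{2}= 1\), and simulates repeated measurements in the computational basis.

```python
import numpy as np

# A minimal numerical sketch of a single-qubit state |psi> = alpha|0> + beta|1>.
# The amplitudes below are arbitrary illustrative values, not taken from the text.
alpha, beta = 1 + 1j, 2 - 0.5j

# Normalize so that |alpha|^2 + |beta|^2 = 1 (the law of total probability).
state = np.array([alpha, beta], dtype=complex)
state /= np.linalg.norm(state)

p0, p1 = np.abs(state) ** 2          # measurement probabilities for |0> and |1>
print(f"P(0) = {p0:.3f}, P(1) = {p1:.3f}, sum = {p0 + p1:.3f}")

# Simulate repeated measurements in the computational basis.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=[p0, p1])
print("empirical P(1) ≈", outcomes.mean())
```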

The design of a system’s state in the form of superposition underlies one of the most often mentioned advantages of quantum over classical computing (Swami & Gill, 2011; Avaliani, 2004). In 1946 Felix Bloch showed that when \(\alpha\) and \(\beta\) fulfill the requirements of the quantum law of total probability, the qubit can be represented as a point on a three-dimensional unit sphere called the Bloch sphere (Bloch, 1946). Since there is an infinite number of points on the Bloch sphere (precisely, on any surface), theoretically a single qubit can store an infinite amount of information (see Nielsen & Chuang, 2010; Benenti et al., 2004). Thus, the phenomenon of superposition became the foundation for several propositions of quantum hypercomputation (e.g., Deutsch, 1985; Li et al., 2001). Another important characteristic of quantum states is quantum entanglement. Qubits can be assembled into complex particle systems that exhibit this property (Nielsen & Chuang, 2010). In an entangled system, the state of the system is always expressed as a superposition of the states of all constituent, entangled particles. Thus, the joint quantum state of n entangled qubits, each of which is individually described by two complex numbers, requires \({2}^{n}\) complex numbers to describe (Grover, 1996). While adding bits to a classical register increases its capacity only linearly, each added qubit doubles the dimension of the quantum state space, yielding exponential growth. Thus, quantum entanglement, combined with superposition, is a property expected to provide an exponential increase in computing power in quantum computers, ultimately leading to quantum supremacy (Jozsa & Linden, 2003).
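The exponential growth of the state description can be illustrated numerically. The sketch below (again assuming NumPy; the states are illustrative) builds the joint state of n qubits as a tensor product, shows that its length is \({2}^{n}\), and constructs an entangled Bell state, which cannot be factored into single-qubit states and must therefore be stored as a full four-amplitude vector.

```python
import numpy as np

# A small sketch of how the state-vector description grows with the number of qubits.
zero = np.array([1, 0], dtype=complex)   # |0>
one  = np.array([0, 1], dtype=complex)   # |1>

def kron_all(states):
    """Tensor product of single-qubit states: the joint state of the register."""
    out = np.array([1], dtype=complex)
    for s in states:
        out = np.kron(out, s)
    return out

for n in (1, 2, 4, 8, 16):
    psi = kron_all([zero] * n)
    print(f"{n} qubits -> state vector of length {psi.size} (= 2^{n})")

# An entangled Bell state (|00> + |11>)/sqrt(2) cannot be written as a tensor
# product of two single-qubit states; it must be stored as a full 4-amplitude vector.
bell = (kron_all([zero, zero]) + kron_all([one, one])) / np.sqrt(2)
print("Bell state amplitudes:", np.round(bell, 3))
```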

3.2 What quantum technologies have solved so far

When in 1994 Peter Shor devised a quantum algorithm for integer factorization, it was considered a long-awaited breakthrough in the field of computational complexity (Shor, 1997). Up to that point, the integer factorization problem had no efficient algorithm, which made factoring large integers practically intractable on any machine (Briggs, 1998). Shor’s proposition, which took advantage of quantum effects to reduce the complexity of integer factorization, was the first practical proof that a quantum algorithm can significantly surpass the capabilities of algorithms running on classical machines. Today, quantum algorithms are widely held to bring powerful improvements to combinatorial optimization, differential equation solving, and simulations of many-body systems (Rønnow et al., 2014). The commercial industries mentioned most frequently include materials and pharmaceuticals, banking and finance, advanced manufacturing, and cybersecurity (Bova et al., 2021). Existing quantum algorithms for the aforementioned applications have been collected and presented by Bharti et al. (2022).
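For readers interested in how the reduction behind Shor’s algorithm works, the sketch below shows its classical number-theoretic part: factoring N is reduced to finding the period r of \(a^{x} \bmod N\). In Shor’s algorithm the period is found with the quantum Fourier transform; here it is found by brute force, which is exponentially slower and serves only as an illustration.

```python
import math
import random

# A classical sketch of the number-theoretic reduction behind Shor's algorithm:
# factoring N is reduced to finding the period r of f(x) = a^x mod N.
# On a quantum computer the period is found with the quantum Fourier transform;
# here it is found by brute force, which is exponentially slower.

def find_period(a, N):
    x, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        x += 1
    return x

def shor_reduction(N, seed=1):
    rng = random.Random(seed)
    while True:
        a = rng.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:                        # lucky guess: a already shares a factor with N
            return g, N // g
        r = find_period(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            for candidate in (pow(a, r // 2) - 1, pow(a, r // 2) + 1):
                f = math.gcd(candidate, N)
                if 1 < f < N:
                    return f, N // f

print(shor_reduction(15))   # prints a nontrivial factor pair of 15
print(shor_reduction(21))   # prints a nontrivial factor pair of 21
```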

To be practically relevant, quantum algorithms must run on some sort of quantum machine. Among the largest quantum computers to date are the 72-qubit Bristlecone processor created by Google and the 65-qubit Hummingbird by IBM. However, both have a small number of entangled qubits and therefore low efficiency. Beyond the number of qubits, the efficiency of a quantum computer is affected by the quality and functionality of the quantum system at the single-qubit level. Key properties used to assess the quality of quantum systems include connectivity (also called system topology), which determines possible entanglement and directly affects the exponential increase in computing power (Hu et al., 2022); fidelity metrics, which assess qubit error rates and are calculated as the ratio of the obtained value to the expected value (see Nielsen, 2002); relaxation time (T1), i.e., the characteristic time over which the system loses energy; and dephasing time (T2), i.e., the characteristic time over which the qubit’s phase remains intact. Several of these metrics feed into the supposedly universal, single-number metric proposed by IBM in 2019 and known as quantum volume (Cross et al., 2019). In terms of quantum volume, the most powerful quantum computers are believed to be the 32-qubit Quantinuum System Model H2, which achieved a quantum volume of 65,536 (Moses et al., 2023), and the 20-qubit Quantinuum System Model H1-1, which achieved a quantum volume of 524,288 (Morrison, 2023). When the IONQ company announced in 2020 that it had built a 32-qubit computer with an expected quantum volume of 4,000,000, it seemed like a breakthrough in quantum technologies. At that time, Honeywell’s 10-qubit System Model H1 was considered the most advanced quantum computer, boasting a quantum volume of 128 (Moore, 2020). However, the stated expected quantum volume of 4,000,000 is purely speculative and has not yet been confirmed by any scientific publication.
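As a point of reference, quantum volume is reported as \({2}^{k}\), where k is the size of the largest ‘square’ random circuit (k qubits, k layers) that the machine executes while passing the heavy-output benchmark (Cross et al., 2019). The small sketch below simply converts the values cited above back into this effective circuit size k; the machine labels are taken from the text, and the IONQ figure is the unconfirmed claim mentioned there.

```python
import math

# Quantum volume is reported as 2**k, where k is the largest "square" circuit
# (k qubits, k layers) that passes the heavy-output benchmark. This sketch
# converts the values cited in the text back to that effective circuit size k.
reported = {
    "Honeywell System Model H1 (2020)": 128,
    "Quantinuum System Model H2": 65_536,
    "Quantinuum System Model H1-1": 524_288,
    "IONQ 32-qubit machine (unconfirmed claim)": 4_000_000,
}

for machine, qv in reported.items():
    k = math.log2(qv)
    note = "" if k.is_integer() else "  (not a power of two; not a standard QV value)"
    print(f"{machine}: QV = {qv:,} -> effective square circuit size k ≈ {k:.1f}{note}")
```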

Having discussed the historical significance of Peter Shor’s quantum algorithm and the current landscape of quantum computing technology, it becomes evident that the realization of quantum supremacy and the intelligence explosion depends on two factors. The first factor is quantum algorithms that leverage the phenomena of superposition and entanglement to enhance computational capacities beyond the hypercomputational barrier, which is crucial for achieving technological singularity. An example of such a quantum algorithm, vastly superior to known classical solutions, is Shor’s integer factorization algorithm. The second factor is quantum machines that execute quantum algorithms. Without a well-functioning machine, even the most powerful algorithm will not have a chance to work, and quantum supremacy cannot be achieved. Nevertheless, the practical feasibility of large-scale quantum computers remains a significant challenge. The next section explores the issues that limit the power of quantum computers.

3.3 Practical limitations of quantum computers

One of the most important practical issues limiting the computational capabilities of quantum computers is decoherence, i.e., the loss of the quantum properties of a system due to interactions with the surrounding environment. Quantum coherence is a vital component of scalable quantum computation (Ladd et al., 2010). To date, decoherence remains a significant problem even in the best modern quantum computers: the highest coherence times reported for quantum computers were T1 = 113 µs and T2 = 122 µs, achieved by the IBM Quantum Falcon (Jurcevic et al., 2021).

Very recent and pioneering studies have suggested the possibility of improving qubit coherence times even to one hour (for an overview, see Saki et al., 2019); nevertheless, as Ryan-Anderson and colleagues noted, it is highly unlikely that a physical qubit will ever reach the precision demanded by large-scale computations. Rather than by extending coherence times, reliable large-scale quantum computation might be achieved by efficiently suppressing errors and decoherence with quantum error correction (QEC; see Ryan-Anderson et al., 2021 for an example). QEC usually involves encoding a bit of quantum information into an ensemble of qubits that act together as a single logical qubit. However, to truly solve the decoherence problem, QEC algorithms must achieve high-level error reduction in a reasonable time using reasonable resources. To date, Google has shown that by increasing the number of qubits by 32 it was able to reduce the error rate by 3.76% (0.114 percentage points; Google Quantum, 2023). Most forecasts indicate that applying QEC to existing quantum algorithms will require an ensemble of at least 1,000 physical qubits to produce a single logical qubit (Krupansky, 2023).
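The core idea of encoding one logical unit of information into an ensemble of physical carriers can be illustrated with a toy model. The sketch below uses a classical three- and seven-fold repetition code with majority-vote decoding; real QEC relies on stabilizer codes such as the surface code, so this is only an intuition pump for how redundancy suppresses the logical error rate when the physical error rate is low.

```python
import random

# A toy illustration of the idea behind QEC: encode one logical bit into an
# ensemble of physical bits and correct errors by majority vote. Real QEC uses
# stabilizer codes (e.g., the surface code) on qubits, not this classical
# repetition code, but the scaling intuition is similar.

def logical_error_rate(p_physical, n_copies, trials=100_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        # Each physical copy of the bit flips independently with probability p.
        flips = sum(rng.random() < p_physical for _ in range(n_copies))
        if flips > n_copies // 2:        # majority vote decodes incorrectly
            failures += 1
    return failures / trials

for p in (0.1, 0.01, 0.001):
    print(f"p_physical = {p}: "
          f"3 copies -> {logical_error_rate(p, 3):.2e}, "
          f"7 copies -> {logical_error_rate(p, 7):.2e}")
```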

Because the construction of a quantum computer with a sufficient number of physical qubits for QEC requires a significant breakthrough in quantum technologies, the physical realizability and practical applicability of QEC to reliable large-scale quantum computations are debatable (Krupansky, 2023). To date, there is no promising proposal for a quantum computer design capable of large-scale computations, which is necessary to achieve quantum supremacy and, consequently, superintelligence.

4 Limitations of computability

While there are currently no quantum computers capable of reliably performing large-scale computations, quantum algorithms undoubtedly play a central role in the discussion of quantum supremacy. Achieving quantum supremacy involves quantum algorithms providing effective solutions to problems that were previously deemed unsolvable (Copeland, 2002; Preskill, 2012). The limits of solvability have been established by mathematics; therefore, the discussion of superintelligence and quantum supremacy must be situated within the framework of computability theory.

The history of computability, and therefore of formalized algorithms and computers, began in the first half of the twentieth century. Kurt Gödel (Gödel & Feferman, 1986), Alonzo Church (1936), and Alan Turing (1937) almost simultaneously attempted to define precisely the concept of a computable expression. These attempts to define what precisely automatically computable means resulted in the world-famous Church-Turing Thesis (CTT), which defined the computability of a function in terms of a so-called Turing machine: a function on the natural numbers can be computed by an effective method if and only if it is computable by a Turing machine (Copeland, 2008). The Turing machine is the most intuitive concept of computability that has been invented; it is a hypothetical state machine that performs a finite number of operations on an infinite one-dimensional tape that acts like the memory of a typical computer. The Turing machine performs operations according to a provided instruction table consisting of a list of possible states of the machine and possible transitions to other states. At each stage of computation, the Turing machine can read a symbol from a tape cell, change its state based on the read symbol and the instruction table, write a symbol to the cell, or move along the tape (for a broader description of the Turing machine, see De Mol, 2018). Since Turing machines are easy to analyze and capture our intuitions about computation as a finite set of operations, it is believed that if a problem can be described in terms of an algorithm, then there must also exist a Turing machine that can execute this algorithm (Davis, 2013). To date, this belief, which reduces the computability of a problem to the existence of a Turing machine for that problem, has neither been confirmed nor disproved.
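To make the notion concrete, the following sketch simulates a deterministic Turing machine as a state, a sparse tape, and an instruction table; the example machine, which increments a binary number, is illustrative and not drawn from the cited sources.

```python
# A minimal sketch of a deterministic Turing machine: a finite state set, a tape,
# and an instruction table mapping (state, symbol) -> (new symbol, move, new state).
# The example machine below increments a binary number written on the tape.

def run_turing_machine(tape, table, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = max(tape) if tape else 0       # start at the rightmost (least significant) bit
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol
        head += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Instruction table for binary increment, scanning right-to-left and carrying 1s.
increment = {
    ("start", "1"): ("0", "L", "start"),   # 1 + carry -> 0, keep carrying
    ("start", "0"): ("1", "N", "halt"),    # 0 + carry -> 1, done
    ("start", "_"): ("1", "N", "halt"),    # ran off the left edge: prepend a 1
}

print(run_turing_machine("1011", increment))   # 1011 (11) -> 1100 (12)
print(run_turing_machine("111", increment))    # 111 (7)  -> 1000 (8)
```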

With this formalization of automatically computable functions came the first evidence of mathematical problems that cannot be solved by any classical Turing machine (Davis, 1965). While introducing the idea of the Turing machine, Alan Turing provided the first formal proof of the unsolvability of a specific problem: he showed that it is impossible to create a general algorithm that decides whether a given program will stop or run forever (the halting problem; Sipser, 1996). It is now known that there is an infinite number of such undecidable problems. To date, it is impossible to determine whether the Turing machine incorrectly defines computability, or whether there are indeed some problems in our world that can never be solved automatically.
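Turing’s diagonal argument can be sketched in a few hypothetical lines of code. The function `halts` below is assumed rather than implemented; the point of the sketch is precisely that no such total function can exist.

```python
# A sketch of Turing's diagonal argument for the undecidability of the halting
# problem, written as hypothetical Python. `halts` is assumed, not implementable.

def halts(program, argument):
    """Hypothetical oracle: return True iff program(argument) eventually stops."""
    raise NotImplementedError("No algorithm can implement this for all inputs.")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running `program` on itself.
    if halts(program, program):
        while True:      # loop forever
            pass
    return "done"        # halt immediately

# Now ask: does paradox(paradox) halt?
#  - If halts(paradox, paradox) is True, then paradox(paradox) loops forever.
#  - If it is False, then paradox(paradox) halts.
# Either answer contradicts the oracle, so `halts` cannot exist as a total algorithm.
```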

The question about the possibility of an automatic solution to any problem raises yet another important question: are there any natural, physical limits of computation related to the finite nature of resources in the real world, such as space and time? The rationale for this question is simple: if computers were infinitely fast and their memory infinitely large, any method would suffice to solve a problem. However, the speed of computers is finite, their memory is finite, and the cost of electricity increases sharply with the computing power consumed. Therefore, different solutions to the same problem can be compared in terms of the resources they use. This question led to the development of the field known as computational complexity, which focuses on classifying problems according to the resources that Turing machines need to solve them (Smale, 1997). The amount of resources required depends on the size of the input data; thus, the computational complexity of an algorithm, i.e., the effort necessary to execute it, is usually expressed as a function \(f\left(n\right)\), where \(f\) describes the number of operations to be performed when solving a problem and \(n\) is the size of the input data. Consider the following example to get a good understanding of the concept of computational complexity. Sorting an array of digits of length \(n\) using the bubble sort algorithm, in the worst possible case when elements of the array are arranged in decreasing order, requires on the order of \({n}^{2}\) operations (Astrachan, 2003). Thus, the so-called worst-case time complexity of bubble sort is \(f\left(n\right) = {n}^{2}\); this also means that there exists a Turing machine that can execute the bubble sort algorithm and return the sorted array in time proportional to \({n}^{2}\). The conventional boundary of what is called an effective solution to a problem is a polynomial function. However, many existing problems can only be described by a Turing machine with much worse than polynomial computational complexity, e.g., \({2}^{n}\); this implies an exponential increase in time or storage with the increase in the size of the input data (Hartmanis & Stearns, 1965). Importantly, modern classical computers are almost equivalent to a Turing machine in terms of what they can compute and how fast. Thus, problems for which only superpolynomial algorithms are known, such as the aforementioned integer factorization, cannot be solved efficiently on classical computers (Brent, 2000).
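The bubble sort example can be made concrete by counting comparisons on worst-case (reversed) inputs, as in the short sketch below (the inputs are illustrative).

```python
# A small sketch of worst-case complexity: bubble sort on a reversed array,
# counting comparisons to show the roughly n^2 growth mentioned in the text.

def bubble_sort(arr):
    arr = list(arr)
    comparisons = 0
    for i in range(len(arr) - 1):
        for j in range(len(arr) - 1 - i):
            comparisons += 1
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr, comparisons

for n in (10, 100, 1000):
    _, ops = bubble_sort(range(n, 0, -1))    # worst case: strictly decreasing input
    print(f"n = {n:5d}: {ops:8d} comparisons  (n^2 = {n * n})")
```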

In 1963 Patrick Fisher introduced the abstract concept of a non-deterministic Turing machine (Fisher, 1963). Unlike the classical deterministic Turing machine, the non-deterministic Turing machine cannot be realized by any computer because its transition function returns a set of all possible transitions to other states instead of one deterministic state; a non-deterministic Turing machine is thus described as a tree of all possible operations and transitions, rather than as the linear sequence of operations that a classical computer can perform. In this tree of possible solution paths, all branches are explored simultaneously; this enables non-deterministic Turing machines to achieve much better computational complexity than classical, deterministic Turing machines, which must perform operations linearly. The introduction of non-deterministic machines does not change the set of Turing-computable, i.e., solvable, functions, because the unsolvability of a problem is not related to the amount of resources. However, computational complexity does differ between deterministic and non-deterministic Turing machines (Garey & Johnson, 1979).

The field of computational complexity classifies algorithms according to the model of computation (deterministic or non-deterministic Turing machines) and resource demands (e.g., polynomial, exponential). The fundamental complexity class is P (for polynomial time). Problems within this class can be solved by a deterministic Turing machine in polynomial time. Likewise, the NP (for non-deterministic polynomial time) class contains problems that can be solved by a non-deterministic Turing machine in polynomial time. Since finding a solution with a non-deterministic Turing machine has been proved equivalent to verifying a solution with a deterministic Turing machine (see Savitch, 1970), NP problems are usually defined as problems for which classical computers can efficiently check whether a proposed solution is correct. One of the most famous and still unsolved problems is the question of whether the classes of problems solvable with polynomial effort coincide for deterministic and non-deterministic Turing machines (see Baker et al., 1975; Feinstein, 2006). If they did, this would imply that simultaneously searching all possible solution paths takes the same amount of effort as searching them one after another, and hence that any efficiently verifiable solution can also be found efficiently. A proof of the equivalence of the P and NP classes would signify a fundamental departure from our current understanding of the world: it would imply a world with no distinction between parallel and linear operations, and no distinction between solving a problem and merely verifying the correctness of its solution. As Scott Aaronson points out, the absence of a distinction between finding a solution and verifying its correctness has profound physical and philosophical consequences, as it equates creation with the simple recognition of creation (Aaronson, 2013b; see also Granade, 2009). Until it is proven whether P = NP or P ≠ NP, technological development will remain at a crossroads, awaiting the delineation of further theoretical development paths and the establishment of irrefutable boundaries in computational capabilities.
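The asymmetry between finding and verifying a solution can be illustrated with subset sum, an NP-complete problem: checking a proposed certificate takes linear time, while the general search method sketched below explores an exponential space of subsets (the instance is illustrative).

```python
from itertools import combinations

# A sketch of the P vs NP intuition using subset sum (an NP-complete problem):
# verifying a proposed certificate is cheap, while the only known general way of
# finding one is, in the worst case, to search an exponential space of subsets.

def verify(numbers, target, certificate):
    """Polynomial-time check: does the proposed index set really sum to the target?"""
    indices = set(certificate)
    return (len(indices) == len(certificate)
            and indices <= set(range(len(numbers)))
            and sum(numbers[i] for i in certificate) == target)

def search(numbers, target):
    """Brute-force search over all 2^n subsets (exponential in len(numbers))."""
    for size in range(len(numbers) + 1):
        for subset in combinations(range(len(numbers)), size):
            if sum(numbers[i] for i in subset) == target:
                return subset
    return None

numbers, target = [3, 34, 4, 12, 5, 2], 9
certificate = search(numbers, target)           # exponential effort in the worst case
print("found:", certificate, "verified:", verify(numbers, target, certificate))
```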

4.1 Can quantum computers contribute to the emergence of superintelligence?

To synthesize the above discussion, mathematics suggests two distinct computational barriers. The first limits the solvability of problems and indicates that some problems can never be solved: these are problems for which no Turing machine can be created. The second barrier is the limit of computational complexity. The distinction between the P and NP complexity classes suggests that some problems have only ineffective solutions in the physical world; problems that can be solved efficiently only by non-deterministic Turing machines are in principle solvable, but their efficient solution is not physically realizable. Achieving technological singularity through quantum supremacy seems to require crossing both barriers (Copeland, 2002; Goertzel, 2013). However, do these theoretical barriers delineate the actual physical computational barriers?

There are several important open issues regarding the relationship between the CTT, physics, and the possibility of hypercomputation, i.e., the computation of unsolvable problems (see Syropoulos, 2008). The CTT itself does not state anything about the world. It does not refer to any epistemological, logical, or physical principles. To extend its influence to the empirical world, many attempts have been made to formulate a physical version of the CTT. One of the first formulations stated that any physical process can be simulated by some Turing machine (see e.g. Deutsch, 1985). This definition was quickly disproved with numerous examples of random and infinite processes in the physical world for which a Turing machine certainly cannot be created. Piccinini (2011) distinguished between a strong formulation, which he called the bold physical CTT, and the modest physical CTT. The bold physical CTT refers to Deutsch’s formulation and states that any physical process is computable by a Turing machine. The latter states that any physically computable function is Turing-computable. Since the modest physical CTT, like the bold CTT, is falsifiable and open to empirical refutation, but is less stringent, it qualifies as a valid scientific theory and appears to be the most effective tool proposed so far for evaluating the possibility of hypercomputation. If true, the modest physical CTT makes strong claims about the nature of computation, its physical boundaries, and the whole universe, because it refutes the possibility of hypercomputation; given the modest physical CTT, undecidable problems and exponential complexity are inevitably embedded in the world.

The previous decades have brought numerous attempts to refute the modest physical CTT; many researchers have proposed various hypothetical physical worlds, techniques, and phenomena that would enable hypercomputation and thus falsify the modest physical CTT (see Németi & Dávid, 2006; Pitowsky, 1990; Shagrir & Pitowsky, 2003). These techniques involve closed timelike curves (Brun, 2003), general relativistic spacetimes (Hogarth, 1992; Welch, 2006), infinite-precision real numbers (Blum et al., 1989), stuffing an infinite number of steps into a finite time (Hamkins, 2004), and quantum effects (Kieu, 2002; Ord & Kieu, 2005). However, as Scott Aaronson, a renowned computer scientist and expert in computability problems, shows, all these ideas introduce speculative changes to the laws of physics, or rely on merely theoretically possible phenomena for which it cannot be decided whether they belong to our universe (for a review, see Aaronson, 2005); realizing these phenomena in our world is therefore highly unlikely. Aaronson (and other physicists) points out that there are some basic but non-negligible physical constraints in our universe (Aaronson, 2005; Cotogno, 2003; Sandberg, 1999). One of the most important constraints is the Bekenstein bound, which asserts an upper limit on the amount of information that can be contained within a finite region of space (Bekenstein, 1981). This bound implies a limit on the maximum possible computation rate of finite-sized systems, called Bremermann’s limit (Bremermann, 1967). Bremermann’s limit excludes the possibility of infinite precision or unbounded memory in devices with finite physical dimensions anywhere in the material universe, even near black holes. In the essay entitled ‘The Myth of Hypercomputation’, Martin Davis pointed out that hypercomputability claims are usually based, mostly implicitly, on assumptions that involve infinite inputs or resources, e.g., infinite memory, infinitely many steps, infinitely small parts, infinitely fast information transfer, or infinitely precise measurement of quantum states (Davis, 2004). These infinities are physically implausible given modern physics and Bremermann’s limit (Müller, 2011; Barrow, 2005). While quantum information theory theoretically allows a single qubit to store an infinite amount of information, as discussed in the third Section, these theoretical possibilities are not realizable in the physical world (Ziegler, 2005). Therefore, there is no premise for the claim that superposition is a phenomenon that can be used to achieve quantum supremacy and an intelligence explosion.

Contrary to the popular belief that unsolvability is a kind of anomaly in the universe for which humanity should seek a remedy, some scientists consider unsolvability a kind of elemental randomness in mathematics (for more details, see Chaitin, 2002a, 2002b), and thus an inherent element of reality. In his paper on physics simulations, Feynman (1982) argued that the barrier of uncomputability is insurmountable and should be widely accepted as such in the scientific community (Bernstein & Vazirani, 1997). Physics suggests that since crossing the Turing barrier involves infinity, this barrier may indeed exist (Barrow, 2005). If the Turing computability barrier were seen as a principle of physics, then the world would be as we expect it to be: with the limit of light speed (Cockshott et al., 2008), the laws of thermodynamics (Sandberg, 1999), Heisenberg’s uncertainty principle (Cockshott et al., 2008), finite measurement precision (Deutsch, 1997), and no backward time travel (Aaronson & Watrous, 2009); in other words, without infinity. This physical limitation must be obeyed equally by classical and quantum computational machines, at least until a fundamental breakthrough in physics. Until then, neither classical nor quantum computers can perform hypercomputation. Quantum computers do not fundamentally change the discussion about the emergence of superintelligence through technological singularity.

Quantum computers carried yet another promise: they would speed up computing enough to efficiently solve problems that have no efficient classical solutions. As mentioned in the third Section, by means of superposition and entanglement, a register of n qubits was thought to represent \({2}^{n}\) pieces of information. This property of qubits was thought to allow humanity to solve NP problems in polynomial time (see Aaronson, 2007). However, the crucial issue of computational techniques is not the amount of information that could be stored at a given moment of computation; it is rather the amount of information that could be received, decoded, and measured to establish the result. In 1973, Alexander Holevo published a theorem that established an upper bound on the amount of information that can be known about a quantum state. Holevo proved that at most one bit of information can be extracted from one qubit; thus, although purely theoretically a quantum state of n qubits is described by \({2}^{n}\) complex amplitudes, measuring such a state can return at most n bits of decodable information. Thus, to date, there is no premise that quantum algorithms will effectively solve all NP problems. Why, then, have quantum algorithms brought such big computational improvements for many computationally demanding problems? Taking advantage of quantum effects, quantum algorithms support the simulation of systems in which quantum phenomena play an important role (Rønnow et al., 2014); for quantum problems, they can provide more efficient (quantum) solutions compared to classical algorithms. To account for the possibilities of quantum algorithms, which differ from the classical ones, computational complexity theory has distinguished a class of problems that can be efficiently solved by quantum computers, called BQP (for bounded-error, quantum, polynomial time; Nielsen & Chuang, 2010). The relationship of BQP to NP is not known, though it is conjectured that NP ⊄ BQP and that quantum algorithms cannot solve at least the hardest and thus most important problems of the NP class, i.e., NP-complete problems, in polynomial time (Aaronson et al., 2022; Watrous, 2008; Knill & Nielsen, 2000; Aaronson, 2005). Therefore, quantum algorithms have the potential to significantly enhance solutions for quantum-related problems. However, there is no evidence to suggest that focusing solely on solving quantum-related problems would necessarily lead to technological singularity.
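The gap between what an n-qubit state carries and what a measurement returns can be illustrated numerically. The sketch below (assuming NumPy; the state is random and illustrative) describes a 10-qubit state by \({2}^{10}\) complex amplitudes, yet a single computational-basis measurement yields only a 10-bit outcome, consistent with the Holevo bound.

```python
import numpy as np

# A numerical illustration of the gap the text describes: the state of n qubits is
# described by 2**n complex amplitudes, yet a single measurement returns only one
# basis index, i.e., n classical bits (consistent with the Holevo bound).

rng = np.random.default_rng(42)
n = 10                                        # number of qubits (illustrative)
dim = 2 ** n

# A random normalized n-qubit state: 2**n complex amplitudes.
amplitudes = rng.normal(size=dim) + 1j * rng.normal(size=dim)
amplitudes /= np.linalg.norm(amplitudes)

# Measuring in the computational basis collapses the state to one basis index.
probabilities = np.abs(amplitudes) ** 2
outcome = rng.choice(dim, p=probabilities)

print(f"amplitudes needed to describe the state: {dim}")
print(f"bits obtained from one measurement:      {n}  (outcome {int(outcome):0{n}b})")
```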

In conclusion, complexity classes are influenced by quantum effects and differ from those considered only within classical physics; the computability limits, however, remain the same for both quantum and classical cases. If a quantum computer can perform some operation, a classical Turing machine can simulate a quantum computer performing this operation. Quantum computers do not exponentially accelerate computing power in general and do not lead to technological singularity, as they might bring exponential speed-up only in special quantum cases. Given the presented discussion on hypercomputation and computational complexity theory, there is insufficient evidence to support the claim that quantum technologies could alter the computational barrier and contribute to the emergence of superintelligence through technological singularity.

5 The dream of an artificial mind

As outlined in the previous sections, quantum supremacy, the theoretical point at which a quantum computer can perform tasks beyond the capabilities of classical computers, remains a topic of debate and exploration. While there have been significant advancements in quantum computing, there is still vast uncertainty as to whether quantum supremacy is achievable, particularly given the complex challenges involved in building and scaling quantum systems and the limits of computability. Current discoveries in modern physics do not provide a solid foundation for asserting the feasibility of quantum supremacy. Thus, achieving quality or speed superintelligence by means of a quantum technological singularity and intelligence explosion (e.g., Bostrom, 1998; Goertzel, 2013; Zheng & Akhmad, 2017; Miller, 2019) does not seem possible. However, because quantum computers can bring significant acceleration to quantum-defined problems, some researchers continue to link the development of quantum technology to the development of superintelligence (e.g. Zheng & Akhmad, 2017; Yampolskiy, 2018). In the following sections, we discuss two alternative theses that connect quantum technologies with the emergence of superintelligence but do not involve a technological singularity. The first thesis claims that modern AI algorithms will benefit significantly from the development of quantum technologies. The implicit assumption of this approach is that general intelligence is an algorithmically definable problem, that it can be solved by merely intensifying an already known computing strategy, and that it can be better defined in a quantum way than in a classical way. The second thesis denies that intelligence can be described by any algorithm; however, it claims that the human mind is based on quantum effects. Therefore, quantum computers may be better suited to simulate the mind than classical computers.

5.1 The quantum implementation of the mind

Current AI models are near-perfectly functioning specialized systems that support people and expand their skills in many areas. It seems that there is hardly a specialized task in which some AI system does not exceed human abilities, and quantum computing could provide significant speed-ups, expanding the range of possibilities of artificial systems (Liu et al., 2021). In fact, with a sufficiently large number of engineers, time, and computing resources, it is possible to create millions of specialized software programs for millions of specific situations and problems. These millions of programs would be functionally equivalent, in some respects, to the human mind. This approach to general intelligence is reflected in Marvin Minsky’s The Society of Mind (Minsky, 1988), which describes human intelligence as a set of simple specialized processes. Piero Scaruffi calls this solution brute-force AI (Scaruffi, 2018). Brute-force AI, although achievable, does not arouse much enthusiasm among researchers today. Despite the impressive results, it remains only a complex sequence of mathematical operations performed on a powerful computing machine, which is difficult to call intelligence under any adopted definition. There is ample evidence that intelligence is more than a sum of specific task-solving subcomponents (Landgrebe & Smith, 2022). Although this approach could solve particularly important problems, it lacks the ability of general reasoning in an uncertain environment, which is usually referred to as the most important aspect of any generally intelligent agent (see Bostrom, 2014). Thus, even a significant acceleration of existing specialized AI algorithms may yield only limited progress toward superintelligence. The algorithms that might lead to AGI and superintelligence should computationally define the ability of general reasoning; for quantum computers to be essential in implementing such algorithms, they must offer advantages over classical computing in defining and executing them. The following sections focus on the two primary approaches to computationally modeled reasoning and assess the potential enhancements offered by quantum technologies.

Today, one of the most promising approaches to the implementation of artificial general reasoning is reinforcement learning (RL), i.e., active learning from interactions with a complex and uncertain environment (Sutton & Barto, 2018). RL consists of learning via trial and error through a process designed to maximize the reward returned by the environment. Among various approaches to the implementation of artificial general reasoning, RL most closely resembles the way humans acquire skills and knowledge and thus seems the most plausible route to speed or quality superintelligence. With the possibility of building any precise and specified environment, RL is recognized as one of the most universal and powerful approaches to the implementation of active learning in artificial agents (Nian et al., 2020). If RL algorithms could be effectively represented in quantum form, this would be an important premise in considering the potential contribution of quantum algorithms to the emergence of superintelligence. Indeed, it has been shown that, under certain assumptions, quantum effects can significantly improve the RL process (for a review, see Dunjko et al., 2017). Various studies have shown a quadratic speed-up in the agent’s decision-making process (Agunbiade, 2022; Albarran-Arriagada et al., 2020; Sriarunothai et al., 2018; Saggio et al., 2021). Although these achievements are certainly significant, Dunjko and colleagues pointed out that they do not appear to have contributed to a breakthrough in RL. The foundation of RL lies in the agent’s interactions with the environment; the quality of the model and learning efficiency are directly related to the number of interactions. This creates a quantization bottleneck that prevents quantum RL from achieving a ground-breaking improvement and is often considered an inherent limitation of the entire RL paradigm (Dunjko et al., 2017). Thus, as mentioned at the beginning of this section, the expectation of breakthrough advances in RL brought by quantum computers should rest on reinforcement learning being a problem that is better described by a quantum system than by a classical one, rather than on quantum acceleration of computation within the RL paradigm. Recently, Li et al. (2020) suggested that RL might indeed be better described by a quantum system. They demonstrated higher performance of a decision-making simulation in a quantum than in a classical RL environment, which they interpreted as evidence of a link between neuroscience data and quantum mechanics. However, this superiority of quantum models over classical ones does not necessarily indicate that the human brain is based on quantum effects. Some classical processes have been shown to generate quantum properties under certain conditions; demonstrating the better performance of a quantum algorithm over a classical one under specific conditions is not enough to prove that a process is quantum in nature (Ivakhnenko et al., 2018). To date, there is no sufficient evidence that RL is inherently quantum-based, and consequently there is limited evidence suggesting that quantum technologies could enhance RL to the point of achieving general intelligence.
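For concreteness, the basic RL loop of trial-and-error reward maximization can be sketched with tabular Q-learning in a toy corridor environment; the environment, hyperparameters, and reward scheme below are illustrative and unrelated to the quantum RL studies cited above.

```python
import random

# A minimal tabular Q-learning sketch of the RL loop described above: an agent
# learns by trial and error to walk right along a small corridor to a rewarding
# goal state. The environment and hyperparameters are illustrative only.

N_STATES, GOAL = 6, 5                     # states 0..5, reward only at state 5
ACTIONS = (-1, +1)                        # step left / step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

def choose_action(state):
    """Epsilon-greedy selection with random tie-breaking."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    state = 0
    for step in range(1_000):             # cap episode length for safety
        if state == GOAL:
            break
        action = choose_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print("greedy action per state:", policy)   # states 0-4 should learn +1 (move right)
```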

Another frequently mentioned approach to the computational modeling of general intelligence involves probabilistic graphical models, especially Bayesian models (van de Schoot et al., 2017). Bayesian models determine the probability of a certain event based on prior knowledge and beliefs about events and conditions with which it might be associated (Bayes, 1763). Since quantum computing is inherently probabilistic, this property could be used to significantly increase computational capacity in modeling Bayesian solutions (Benedetti et al., 2021). It is believed that the probability distribution of classical data could be directly and naturally represented using a pure quantum state instead of complex analytical equations (e.g. Cheng et al., 2018). Nonetheless, despite the superiority of the quantum data representation, inference in Bayesian networks is an NP-hard problem (due to their graphical representation), and even heuristic algorithms involve complexity that is exponential in the number of nodes (Kwisthout & Rooij, 2013). The foundation of Bayesian inference is belief propagation, which requires converting a network into a structure called a ‘junction tree’, regardless of whether the network is classical or quantum. Finding an optimal junction tree is an NP-hard problem, and the complexity of the algorithms scales exponentially with the width of the tree. As noted in the fourth Section, the BQP class of problems efficiently solvable by quantum algorithms is conjectured not to contain the whole NP class, and quantum algorithms most likely cannot solve the hardest problems in NP (Aaronson et al., 2022). Besides problems with computational complexity, researchers usually mention yet another general problem with the Bayesian approach to modeling general reasoning (for an overview, see Bowers & Davis, 2012; but see also Griffiths et al., 2012 for a reply). Classical Bayesian models try to estimate what people rationally should choose, rather than what they actually choose; biological, evolutionary, and psychological findings suggest that the human mind was designed to find satisfactory, rather than optimal, solutions; these solutions are adaptive to many conditions, but never optimal (see Gigerenzer & Brighton, 2009; Kahneman & Tversky, 1996). Recently, Trueblood et al. (2017) and Wichert et al. (2020) attempted to address the problem of paradoxical and irrational human inference using quantum probability theory. Wichert et al. showed a relationship between quantum probability waves and empirical findings from famous probability problems (e.g. the Prisoner’s Dilemma and the Two-Stage Gambling Game). However exciting, these results should be interpreted with caution, as they have not been tested on real-world complex decision problems and their relevance to general reasoning and intelligence is modest.
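The Bayesian updating these models rely on can be shown with a minimal worked example; the prior and likelihood values below are illustrative only.

```python
# A worked sketch of Bayesian updating: posterior ∝ likelihood × prior.
# The numbers (prior prevalence, test sensitivity, false-positive rate) are
# illustrative, not taken from the text.

prior_h = 0.01                      # P(hypothesis), e.g., prevalence of a condition
sensitivity = 0.95                  # P(evidence | hypothesis)
false_positive_rate = 0.05          # P(evidence | not hypothesis)

# Total probability of observing the evidence.
p_evidence = sensitivity * prior_h + false_positive_rate * (1 - prior_h)

# Bayes' rule: P(hypothesis | evidence).
posterior_h = sensitivity * prior_h / p_evidence
print(f"P(H) = {prior_h:.2f}  ->  P(H | E) = {posterior_h:.3f}")
```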

The computational approaches to general reasoning shown above hold an important implicit assumption: general reasoning and intelligence are Turing-computable, i.e., solvable, problems; further, to be physically modeled, this problem must be solvable effectively. As mentioned in the second Section, AGI is expected to replicate human abilities other than general reasoning, such as emotional, moral, or social intelligence. Contrary to analytical skills, these abilities are largely not subject to learning as it is classically understood, being more innate than acquired in nature. Thus, the possibility of creating an artificial agent at least as good as a human in any field, with the help of AI algorithms, requires adopting one of the strongest physicalist theories of the mind, i.e., the computational theory of mind, in order to make emotional, moral, or social abilities not only physical processes but also Turing-computable processes (McCulloch & Pitts, 1943; Putnam, 1967; Fodor, 1975). According to these theories, phenomena at both the macro and micro levels, such as sociological and psychological phenomena, respectively, are subject to certain rules and laws and are reducible to physics. For example, in the view of reductive physicalism (Kim, 1998), it is plausible to argue not only that such laws exist, but also that appropriate bridge laws will certainly be constructed to link them with the laws of physics, and that this is only a matter of time and technological development. Similarly, the adoption of eliminative materialism (Churchland, 1981) allows for the assumption that an artificial agent may manifest emotional and social abilities. In this approach, there is nothing that is not a brain process. As the brain is a physical object and the processes governing it are subject to the laws of physics, it is acceptable to say that emotional and social processes are simulable in the universe and that this, too, is only a matter of time.

Although intuitive, assumptions of computability have been widely criticized by many researchers (e.g., Roitblat, 2020; Landgrebe & Smith, 2022; for a discussion, see Tallis and Alexander, 2008). Back in the 1960s, Gödel’s incompleteness theorem was used to demonstrate strict limitations on the mechanization of thought, understanding, consciousness, and awareness (Lucas, 1961; Wang, 2016). This issue was discussed most thoroughly by Roger Penrose, who developed a Gödelian argument to show that no Turing machine or algorithm can in principle simulate the self-awareness aspect of thought essential to the process of conscious understanding as experienced by human minds (Penrose, 1989). Thus, although there is some consensus on the physicality of mental processes, their Turing-computability is not yet proven; according to the modest physical CTT, not every physical process that exists in the world must be solvable. The problem of finding an effective solution for AGI is usually not mentioned at all (Šekrst & Skansi, 2021).

In the 1990s, David Wolpert developed the famous ‘No Free Lunch’ (NFL) theorems for search, machine learning, and optimization problems (Wolpert & Macready, 1995, 1997). The NFL theorems demonstrate that, averaged over all classes of problems, no learning algorithm performs better than any other; in other words, no algorithm can excel at everything. There is a heated debate in the scientific community about the impact of the NFL theorems on the possibility of creating general intelligence or general reasoning systems. The NFL theorems highlight the obstacles to the emergence of AGI by either classical or quantum approaches. There are no free lunches; however, the problem of whether superintelligence itself is a free lunch remains unsolved, and quantum technologies do not seem to bring a new quality to this discussion. Quantum technologies neither guarantee the computability of general reasoning, nor are there sufficient premises to claim that, if general reasoning is computable, it is better expressed in a quantum way such that quantum technologies could be effectively used for its development (Aaronson, 2013a).

5.2 The quantum emergence of the mind

The second proposal that links quantum technologies to the development of superintelligence presents a point of view opposite to the computational approach to reasoning discussed earlier. It denies that intelligence can be described by any algorithm; instead, it appeals to the nature of the mind, claiming that the mind arises from quantum effects in the brain.

Over the past few decades, a new field within AI called artificial life has developed. It is centered on the idea of mimicking biological behaviors in non-living systems under controlled laboratory conditions (Bedau, 2007). In his paper, Chaitin (2010) asked the question that has haunted mankind for decades: is it possible to mathematically prove evolution? Following Chaitin, another question may be cautiously asked: is the quantum world or the classical world the better environment for evolution? That is, does evolution better fit the phenomena of classical or of quantum physics? Martin-Delgado (2012) tried to answer this question by creating a model that takes into account the impact of quantum effects on evolution. Based on its results, he observed that a quantum environment may be a more natural driver of evolution, since it is inherently burdened with error, much like Nature. If so, quantum technologies could be considered essential for the emergence of superintelligence.

These preliminary results draw attention to the concept of quantum biomimetics (Alvarez-Rodriguez et al., 2014). Biomimetics studies the functioning of living things, which serve as models in various applications through reverse engineering. The first successful attempts to simulate quantum artificial life, i.e., to model basic features of living systems such as reproduction, mutation, and interaction, have already taken place (Alvarez-Rodriguez et al., 2016). In 2018, to support simulation with empirical results, Alvarez-Rodriguez and colleagues demonstrated the first experimental realization of quantum artificial life on an IBM quantum computer (Alvarez-Rodriguez et al., 2018). These results may support the whole-brain emulation hypothesis that the realization of AGI could be based on the natural recreation of artificial individuals in quantum environments, provided that a quantum environment, rather than a classical one, favors mind processes.

Such assumptions guided Roger Penrose and Stuart Hameroff’s hypothesis about the quantum origin of consciousness, called Orchestrated Objective Reduction (Orch OR; Hameroff & Penrose, 1996, 2014). Penrose and Hameroff claimed that consciousness is based on non-computable quantum processes inside neurons, specifically within microtubules, which would arise from a yet-to-be-discovered quantum theory of gravity. However, many scientists argue that the Orch OR hypothesis lacks a solid theoretical basis and that there is little or even no evidence that quantum effects play any significant role in the emergence of consciousness and the human mind. Critics of Orch OR note that the warm and wet environment of the brain does not seem conducive to sustaining quantum phenomena; decoherence would occur so rapidly that quantum phenomena could not have any real impact on brain function (see Tegmark, 2007).

The second argument against the quantum emergence of the mind is based on empirical observations: humans do not seem to work in the same way as quantum computers – they do not cope better with tasks on which quantum computers yield better results than classical ones, such as factorization (Aaronson, 2013a). Problems in BQP do not seem crucial for the evolution and survival of humans. Until there is empirical or theoretical evidence that mental processes are based on quantum effects, claims that quantum computers can be used to create an artificial mind, and thus superintelligence, lack justification.

6 Conclusions

In the presented work, we discussed how quantum computers could contribute to the emergence of superintelligence. The first and most popular scenario involves the emergence of superintelligence through quantum supremacy, which is caused by the acceleration of computation beyond the Turing barrier using quantum computers. We framed this discussion within the context of the possibility of hypercomputation and we showed that quantum computers cannot accelerate many key problems; the computational barriers are similar for classical and quantum computers. Thus, according to the perspective of modern physics, achieving quantum supremacy does not appear feasible. The second scenario involves the creation of AGI through the implementation of quantum versions of existing AI algorithms, which would significantly expand the capabilities of these algorithms. We demonstrated that there is no evidence supporting the superiority of the quantum-based definition of these algorithms. Furthermore, we showed that this scenario implicitly assumes the Turing-computability of the mind, which is also a subject of heated debate. The third scenario involves the creation of AGI through the quantum emergence of the mind. We demonstrated that, thus far, there is no evidence supporting the idea that mental processes are based on quantum effects. Therefore, there is no premise that quantum technologies can contribute to the emergence of superintelligence in this scenario. As it stands, quantum technologies do not seem to bring the expected breakthrough to the problem of building an artificial, truly intelligent agent.

Although progress is being made in many areas and human natural abilities are daily supported by artificial systems to such an extent that the term ’cyborg’ can be applied to any human being, superintelligence far exceeds the current scope of technology, scientific knowledge, definitions, and comprehension. An endlessly long discussion is required to define what the phrase ‘all cognitive tasks’ included in Bostrom’s definition means and to determine how to achieve a system that will reach the state humans have attained after thousands of years of evolution, without evolution. It does not appear that quantum technologies can contribute significant value to this discussion. This does not mean that superintelligence objectively cannot exist. According to the concept of cognitive closure popularized by Colin McGinn in ‘The Problem of Consciousness’ (McGinn, 1991), every cognitive system has its own boundaries of cognition and cannot know beyond these boundaries. Within this view, it is possible to imagine cognitive systems possessing cognitive boundaries broader than humans. On the other hand, David Deutsch, in ‘The Beginning of Infinity’ (Deutsch, 2011), presented an interesting physicalist point of view. He noticed that, if the universe is driven by universal laws and the theory of everything is possible, then there is nothing in the universe that humanity cannot understand. Universal laws of nature consist of physicalist formulas and rules that might therefore be discovered and understood, meaning that human beings are the highest form of intelligence that could exist. While it is reasonable to expect quantum computers to bring artificial systems closer to human cognitive limits and enhance natural human abilities, they do not seem to open the way towards transgressing those limits to achieve quality superintelligence, at least not with our current understanding of the laws governing the universe.