Singularity & Superintelligence

The technological singularity – also, simply, the singularity – is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.

According to the most popular version of the singularity hypothesis, called the intelligence explosion, a self-upgradable intelligent agent (such as a computer running software-based artificial general intelligence) will eventually enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

https://en.wikipedia.org/wiki/Technological_singularity




Intelligence Explosion
- I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. He speculated on the effects of superhuman machines, should they ever be invented.

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (even more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in...

...the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in (unfathomable) changes to human civilization... 

 ...the new superintelligence would continue to upgrade itself and would advance technologically at an (incomprehensible) rate...


 ...such change may happen suddenly, and ...it is difficult to predict how the resulting new world would operate. It is (unclear) whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat, as the issue has not been dealt with...


 ...We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as (impenetrable) as the knotted space-time at the center of a black hole, and the world will pass (far beyond) our understanding...


https://en.wikipedia.org/wiki/Intelligence_explosion


Intelligent Agent Analyzes Processes that Produce its Intelligence, Improves Them

An “Intelligence Explosion” is a theoretical scenario in which an intelligent agent analyzes the processes that produce its intelligence, improves upon them, and creates a successor which does the same.

This process repeats in a positive feedback loop – each successive agent more intelligent than the last and thus more able to increase the intelligence of its successor – until some limit is reached. This limit is conjectured to be much, much higher than human intelligence.

A strong version of this idea suggests that once the positive feedback starts to play a role, it will lead to a very dramatic leap in capability very quickly. This is known as a “hard takeoff.” In this scenario, technological progress drops into the characteristic timescale of transistors rather than human neurons, and the ascent rapidly surges upward and creates superintelligence (a mind orders of magnitude more powerful than a human's) before it hits physical limits. A hard takeoff is distinguished from a "soft takeoff" only by the speed with which said limits are reached...


Hypothetical Path: ...The following is a common example of a possible path for an AI to bring about an intelligence explosion. First, the AI is smart enough to conclude that inventing molecular nanotechnology will be of greatest benefit to it. Its first act of recursive self-improvement is to gain access to other computers over the internet. This extra computational ability increases the depth and breadth of its search processes. It then uses its gained knowledge of materials physics and a distributed computing program to invent the first general assembler nanomachine. Then it uses some manufacturing technology, accessible from the internet, to build and deploy the nanotech. It programs the nanotech to turn a large section of bedrock into a supercomputer. This is its second act of recursive self-improvement, only possible because of the first. It could then use this enormous computing power to consider hundreds of alternative decision algorithms, better computing structures, and so on. After this, the AI would go from near-human-level intelligence to a superintelligence, providing a dramatic and abrupt increase in capability....


https://wiki.lesswrong.com/wiki/Intelligence_explosion
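
As a rough illustration of the recursive self-improvement loop and the "hard takeoff" shape described above, here is a minimal Python sketch. It is a toy model rather than anything from the cited sources; the gain per generation, the speed-up per generation, and the assumed physical ceiling are arbitrary numbers chosen only to show the qualitative behavior.

# Toy model of recursive self-improvement (illustrative only; all parameters are arbitrary assumptions).
def takeoff(capability=1.0, gen_time=1.0, gain=2.0, speedup=2.0, ceiling=1e9):
    """Yield (generation, elapsed_time, capability) until an assumed physical ceiling is reached."""
    elapsed, generation = 0.0, 0
    while capability < ceiling:
        generation += 1
        elapsed += gen_time      # this generation's design work takes gen_time units
        capability *= gain       # the successor is 'gain' times more capable
        gen_time /= speedup      # and it designs its own successor faster
        yield generation, elapsed, capability

for gen, t, cap in takeoff():
    print(f"gen {gen:2d}  elapsed {t:6.3f}  capability {cap:.2e}")

With these made-up parameters the elapsed time converges toward a finite bound even as capability keeps multiplying, which is the qualitative point of the "hard takeoff" claim; a "soft takeoff" corresponds to gen_time shrinking slowly or not at all.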


Sudden "Intelligence Explosion" Takes Unprepared Human Race by Surprise

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be (left far behind).

Thus the first ultraintelligent machine is the (last invention) that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence


What Is a Superintelligence?

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." 

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence.

The first sentient machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities.

David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence: AI can achieve equivalence to human intelligence, can be extended to surpass it, and can be further amplified to completely dominate humans across arbitrary tasks. Concerning human-level equivalence, ...the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. ...Human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention.

https://en.wikipedia.org/wiki/Superintelligence


Intelligence Explosion FAQ



  1. Basics
  2. How Likely is an Intelligence Explosion?
  3. Consequences of an Intelligence Explosion
  4. Friendly AI

1. Basics

1.1. What is the intelligence explosion?


The intelligence explosion idea was expressed by statistician I.J. Good in 1965[13]:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

The argument is this: Every year, computers surpass human abilities in new ways. A program written in 1956 was able to prove mathematical theorems, and found a more elegant proof for one of them than Russell and Whitehead had given in Principia Mathematica[14]. By the late 1990s, ‘expert systems’ had surpassed human skill for a wide range of tasks.[15] In 1997, IBM’s Deep Blue computer beat the world chess champion[16], and in 2011, IBM’s Watson computer beat the best human players at a much more complicated game: Jeopardy![17]. Recently, a robot named Adam was programmed with our scientific knowledge about yeast, then posed its own hypotheses, tested them, and assessed the results.[18][19]

Computers remain far short of human intelligence, but the resources that aid AI design are accumulating (including hardware, large datasets, neuroscience knowledge, and AI theory). We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. This could continue in a positive feedback loop such that the machine quickly becomes vastly more intelligent than the smartest human being on Earth: an ‘intelligence explosion’ resulting in a machine superintelligence.

This is what is meant by the ‘intelligence explosion’ in this FAQ.


2. How Likely is an Intelligence Explosion?

2.1. How is ‘intelligence’ defined?... 

More here:
https://intelligence.org/ie-faq/




Intelligence Explosion: If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself – a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything apes cannot. 


https://en.wikipedia.org/wiki/Intelligence_explosion







 ...a sudden and unexpected "intelligence explosion" might take an (unprepared) human race by surprise... 


To illustrate, if the first generation of a computer program able to broadly match the effectiveness of an AI researcher is able to rewrite its algorithms and double its speed or capabilities in six months, then the second-generation program is expected to take three calendar months to perform a similar chunk of work. In this scenario the time for each generation continues to shrink, and the system undergoes an unprecedentedly large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas. Empirically, examples like AlphaZero in the domain of Go show that AI systems can sometimes progress from narrow human-level ability to narrow superhuman ability extremely rapidly.


https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
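
The arithmetic behind that illustration is a geometric series: if the first self-improvement takes six months and each later generation needs half the time of the one before, the total elapsed time stays bounded no matter how many generations occur. A quick check in Python, using only the numbers from the paragraph above:

# Generation times halve: 6, 3, 1.5, ... months; the running total is a geometric series bounded by 12.
months, total = 6.0, 0.0
for generation in range(1, 11):
    total += months
    print(f"generation {generation:2d}: {months:6.3f} months this step, {total:7.3f} months in total")
    months /= 2.0
# the total approaches 6 / (1 - 1/2) = 12 months, however many further generations follow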


 ...Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones... -Donald Rumsfeld


https://wjrider.wordpress.com/2014/10/12/the-unknown-knowns/


People worry that computers will get too smart and take over the world, but the real problem is that they’ve already taken over the world. Recent years have brought extraordinary advances in the technical domains of AI. Alongside such efforts, designers and researchers from a range of disciplines need to conduct what we call social-systems analyses of AI. They need to assess the impact of technologies on their social, cultural and political settings.


A social-systems approach could investigate, for instance, how the app AiCure — which tracks patients’ adherence to taking prescribed medication and transmits records to physicians — is changing the doctor–patient relationship. Such an approach could also explore whether the use of historical data to predict where crimes will happen is driving overpolicing of marginalized communities. Or it could investigate why high-rolling investors are given the right to understand the financial decisions made on their behalf by humans and algorithms, whereas low-income loan seekers are often left to wonder why their requests have been rejected.


http://www.nature.com/news/there-is-a-blind-spot-in-ai-research-1.20805


Self-Replicating Machines

https://youtu.be/kzd1OiP27s0


I mean, IMO, shouldn’t Musk be all excited about self-replicating machines and the threat of the grey goo hypothesis? To dismiss nanotech out of hand would seem to ignore the obvious final outcomes of this area of physics, if researchers are successful.


A self-replicating machine would need to have the capacity to gather energy and raw materials, process the raw materials into finished components, and then assemble them into a copy of itself. It is unlikely that this would all be contained within a single monolithic structure; rather, it would be a group of cooperating machines or an automated factory that is capable of manufacturing all of the machines that make it up. The factory could produce mining robots to collect raw materials, construction robots to put new machines together, and repair robots to maintain itself against wear and tear, all without human intervention or direction. The advantage of such a system lies in its ability to expand its own capacity rapidly and without additional human effort; in essence, the initial investment required to construct the first clanking replicator would have an arbitrarily large payoff with no additional labor cost, with future returns discounted to their present value. 


A clanking replicator can be considered to be a form of artificial life. Depending on its design, it might be subject to evolution over long time periods. However, with robust error correction, and the possibility of external intervention, the common science fiction theme of robotic life run amok is unlikely in the near term.


https://en.wikipedia.org/wiki/Clanking_replicator
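
To put a number on "expand its own capacity rapidly", here is a toy sketch of my own (the figures are arbitrary assumptions, not from the article): if every machine in the system assembles one copy of itself per replication cycle, the population doubles each cycle, so even a single seed unit reaches large scale within a few dozen cycles.

# Toy replicator growth: the population doubles each cycle until an assumed resource cap is reached.
population, cycles, cap = 1, 0, 1_000_000
while population < cap:
    population *= 2   # each existing machine assembles one copy of itself this cycle
    cycles += 1
print(f"{cycles} cycles to exceed {cap:,} machines from a single seed unit")   # 20 cycles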


The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an autotrophic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. Another model of self-replicating machine would copy itself through the galaxy and universe, sending information back. 


https://en.wikipedia.org/wiki/Self-replication#Space_exploration_and_manufacturing
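
A similar back-of-the-envelope for the "cover a moon with solar cells" idea (again my own illustration, with assumed figures): if a seed factory starts by tiling 100 square metres and the tiled area doubles every replication cycle, the number of doublings needed to reach the Moon's roughly 3.8 x 10^13 square metres of surface is just the base-2 logarithm of the ratio.

import math

# Assumed figures: 100 m^2 seed array, lunar surface area about 3.8e13 m^2, tiled area doubles per cycle.
seed_area = 100.0
target_area = 3.8e13
doublings = math.ceil(math.log2(target_area / seed_area))
print(f"about {doublings} doublings of the tiled area")   # about 39 doublings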


Historians of machine tools, even before the numerical control era, sometimes figuratively said that machine tools were a unique class of machines because they have the ability to "reproduce themselves" by copying all of their parts. Implicit in these discussions is that a human would direct the cutting processes (later planning and programming the machines), and would then be assembling the parts. The same is true for RepRaps, which are another class of machines sometimes mentioned in reference to such non-autonomous "self-replication".  


https://en.wikipedia.org/wiki/Self-replicating_machine  


AUTOPOIESIS: the process whereby an organization produces itself. An autopoietic organization is an autonomous and self-maintaining unity which contains component-producing processes. 


The components, through their interaction, generate recursively the same network of processes which produced them. 


An autopoietic system is operationally closed and structurally state determined with no apparent inputs and outputs. A cell, an organism, and perhaps a corporation are examples of autopoietic systems. See allopoiesis. (F. Varela) 


Literally, self-production. The property of systems whose components (1) participate recursively in the same network of productions that produced them, and (2) realize the network of productions as a unity in the space in which the components exist (after Varela) (see recursion). Autopoiesis is a process whereby a system produces its own organization and maintains and constitutes itself in a space, e.g., a biological cell, a living organism and, to some extent, a corporation and a society as a whole. (Krippendorff) 


http://pespmc1.vub.ac.be/ASC/indexASC.html


 ...Grey goo is a hypothetical end-of-the-world scenario involving molecular nanotechnology in which out-of-control self-replicating robots consume all matter on Earth while building more of themselves... 


https://en.wikipedia.org/wiki/Grey_goo


http://www.issuesmagazine.com.au/article/issue-march-2012/artilect-war.html


................



The hypothesis that some variation of human-engineered information processing can never match or surpass humans in all ways, even solving Chalmers' hard problem of subjective experience, has too much evidence against it: such systems are already matching human abilities, and replacing humans altogether, in some areas. Discounting this evidence of progress requires justification before you can use it to conclude that superintelligence is impossible or unlikely. 


Beating humans at chess or at the game of Go are just a couple of examples that stand out in a sea of evidence supporting the hypothesis that information processing mechanisms will match and even surpass apes. It is not like claiming, before the flip and with no information or evidence to go on, that a coin will land on either the heads side or the tails side. 


The evidence of all this "information creep" supports some hypotheses with more statistical significance than the skeptical hypotheses on the matter. Your proposal that general artificial intelligence is impossible or unlikely needs to refute the null hypothesis, the way the pro-AGI hypothesis has already refuted it with testable significance, based on the ways information technology is matching and surpassing humans in more and more areas. The null hypothesis in this case is that there is no correlation between information technology progress and hypotheses about what might happen next. The null hypothesis you are promoting by implication, that any evidence on the matter is random noise any system might produce at any time, does not rule out the connection between technological progress and successful predictions based on that progress, which has already correctly anticipated known unknowns and even uncovered unknown unknowns in the past. 


https://en.wikipedia.org/wiki/Creeping_normality
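
To make the statistical language above concrete, here is a toy null-hypothesis check with entirely hypothetical numbers: suppose the null hypothesis is that each tracked AI milestone is as likely to be missed as to be reached (a coin flip), and suppose 9 of 10 tracked milestones were in fact reached. The one-sided binomial p-value measures how surprising that tally would be if the null were true.

from math import comb

# Hypothetical tally: 9 of 10 tracked milestones reached; null hypothesis: a 50/50 coin flip per milestone.
n, k, p_null = 10, 9, 0.5
p_value = sum(comb(n, i) * p_null**i * (1 - p_null)**(n - i) for i in range(k, n + 1))
print(f"one-sided p-value = {p_value:.4f}")   # about 0.0107, small enough to doubt the coin-flip null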
