8/15/2023

AI actions don't work with effects

Once we have that level of generative AI, the major bottleneck for society will be the labor needed to build the things that get generated. And since AGI-level tools would make it much easier both to design a general-purpose robot and to implement the AI in one, such robots would be accelerated to market. Over time, alignment work will let objective functions become broader and require fewer confirmation questions, as the system grows more independent in pursuing a goal, until it is superhuman and well aligned in a superhuman manner. At that point its performance would resemble that of a highly experienced and skilled consultant or employee, and all you would need to do is manage it.

That isn't far away, four years at most. I suspect that by 2029 we will have AGI-level systems: you could give one a fairly broad objective function, and it would ask clarifying questions and propose multiple candidate solutions or deliverables for you to pick from. At that point it would completely change society, and adoption would likely follow a much steeper curve, since for companies it would be adopt or die out, and adoption itself would be straightforward and intuitive thanks to the confirmation questions and the multiple proposed solutions.

It took roughly 100 years from the invention of the internal combustion engine in 1826, to the first automobile in 1886, to the mass-produced Ford Model T in 1908, before the technology transformed society in the 1920s and beyond (one could argue the transformation really took until the 1940s and 50s). By comparison, a lot of generative AI is only now getting to the point where it can do truly useful work, and in my opinion it will take another generation or two before it is useful in most workplaces. Compared to the past, though, we are moving rather fast.
Yeah, it just takes time for society to adopt massively transformative technology.

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable. The first use of the term "singularity" in this context was by mathematician John von Neumann. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil predicts the singularity will occur around 2045, whereas Vinge predicts some time before 2030. Proponents of the singularity typically postulate an "intelligence explosion", in which superintelligences design successive generations of increasingly powerful minds; this might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.

A subreddit committed to intelligent understanding of the hypothetical moment in time when artificial intelligence progresses to the point of greater-than-human intelligence, radically changing civilization. This community studies the creation of superintelligence, predicts it will happen in the near future, and holds that, ultimately, deliberate action ought to be taken to ensure that the Singularity benefits humanity.