THE DANGERS OF SUPER AI

HOW FAR AWAY ARE WE FROM SUPER INTELLIGENT AI

    A question that’s been on the mind of anyone interested in AI is how far away we are from the singularity, the point marked by the arrival of superintelligent machines. People also ponder related questions: What would these machines look like? What form will they take? And, more importantly, what will humanity’s fate ultimately be? These are questions with no easy answers as of yet.

    However, what we do know is that this point is likely coming within the next century, and humanity will have to be ready for it. We must have the problem of safety completely figured out in advance, because if we wait until the takeoff begins it will be too late. Most people assume this technology will evolve slowly: that once we can create human-level machine general intelligence, it will take a long time to reach the point where machine intelligence far surpasses that of any human. Unfortunately for us, it will likely not be that easy.

THE IMPENDING TECHNOLOGICAL TAKEOFF

    Before we can figure out exactly what dangers this kind of technology poses, we must first understand how much time we have to implement the necessary safety measures. We need an idea of how long this technological explosion will take. The Agricultural Revolution played out over thousands of years, the Industrial Revolution over hundreds of years, and the revolution we are currently experiencing around the Internet and similar technologies is playing out over mere decades. Technology is improving at an exponential rate, which is both good and bad.
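
    To make that compression concrete, here is a minimal Python sketch; the durations are rough, order-of-magnitude figures chosen for illustration, not precise historical data:

```python
# Toy illustration of how the timescales of major technological
# revolutions have compressed. The durations are rough, illustrative
# figures, not precise historical data.
revolutions = [
    ("Agricultural Revolution", 5000),   # roughly thousands of years
    ("Industrial Revolution", 150),      # roughly hundreds of years
    ("Internet Revolution", 30),         # mere decades
]

for (name, years), (nxt, nxt_years) in zip(revolutions, revolutions[1:]):
    print(f"{name}: {years} years -> {nxt}: {nxt_years} years "
          f"(~{years / nxt_years:.0f}x faster)")
```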

    It’s good because the faster we can create the technology needed to solve some of humanity’s biggest problems, the greater our chance of succeeding as a species in the long run. The bad side is that humans may not be able to adapt fast enough to truly survive and thrive: technology that evolves too quickly solves some problems while creating new ones, and we must be able to adapt to and overcome whatever troubles we face. But that line of thinking is beyond the scope of this post, so let’s dive into how fast AI is likely to take off and what that might look like.

A FAST TAKEOFF IS MUCH MORE LIKELY

    The most likely scenario, given the facts we have now and what we can deduce from them, is that once we figure out how to create general machine intelligence, whether through a Master Algorithm or any number of other routes, we will be very close to being able to create a superintelligent program. This is mainly because, as of right now, we have many learning algorithms that perform extremely well in one or two narrow domains, and we are likely still three or four decades away from creating a universal learner with general intelligence equal to that of a human. However, once we get there, it will only be a matter of making incremental performance improvements to the learner we already have.

    I discussed how, if we managed to create something resembling a master algorithm, we could likely use it to make extremely fast strides in a very short amount of time in this post here (link will be inserted here). That said, a slower takeoff is still possible. We may face more problems than can be predicted at this time, and the result would be a slower evolution of AI technology playing out over a century or longer. This would be the best possible outcome, because it would give us the time we need to plan and to think about exactly what we are doing.

    When you consider that it will be much harder to jump from where we are now to human-level AI than to jump from that point to extremely intelligent AI, it becomes clear that a faster takeoff is much more likely. The main reason is that once a universal learner is invented and we create a program with general intelligence equal to that of a human, that program will likely be able to improve upon itself far faster than we were able to create it.

    A program like this might also be able to create other universal learners better than the ones humans created, which would spark an intelligence explosion in the AI community. Once a master algorithm, or something with similar capabilities, arrives on the scene, the intelligence explosion in question could take place over days, hours, or even minutes. This is only one possible scenario, but it is certainly worth considering, because in a situation like that we won’t have time to react; we would have to rely on safety measures put in place beforehand.
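
    The intuition behind that explosive jump can be sketched with a toy model. Suppose progress toward human-level capability is driven by human researchers at a roughly constant rate, while beyond that point the system improves itself and each gain compounds. Every number below is invented purely for illustration; only the shape of the curve matters:

```python
# Toy model of takeoff dynamics. Every parameter here is invented for
# illustration; this is a sketch of the argument, not a forecast.

HUMAN_RATE = 0.05         # capability added per step by human researchers
SELF_IMPROVE = 0.10       # fractional self-improvement per step after takeoff
HUMAN_LEVEL = 1.0         # define "human level" as capability 1.0
STEPS = 120

capability = 0.1          # the system starts well below human level
for step in range(STEPS):
    if capability < HUMAN_LEVEL:
        # Pre-takeoff: progress comes from human effort, roughly linear.
        capability += HUMAN_RATE
    else:
        # Post-takeoff: the system improves itself, and each improvement
        # makes the next one easier, so growth compounds exponentially.
        capability *= 1 + SELF_IMPROVE
    if step % 20 == 0:
        print(f"step {step:3d}: capability = {capability:10.2f}")
```

    In this sketch, reaching human level consumes most of the run, while the leap from human level to vastly superhuman takes comparatively few steps, which is exactly the shape of the argument above.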

HOW WOULD WE CONTROL INTELLIGENT AI?

    There’s another nasty problem that plagues the future of this technology: the issue of control. The biggest reason humans are the dominant species on this planet is that we are the most intelligent animals on Earth; intelligence confers control, even though plenty of animals have other biological advantages over us. A lot of people assume that because we would create the AI, we would also be able to control it easily. We have to be careful with assumptions like that, because they blind us to the potential dangers.

    Once we manage to create machines that far surpass human-level intelligence, we will have no way of knowing what they are capable of, because the methods we use to measure human intelligence would be completely useless. For example, we have a pretty good grasp of the difference between an IQ of 80 and an IQ of 140, but all of that goes out the window if you ask what a machine intelligence with an IQ of 5,000 would look like. We would have no way of even attempting to measure that, and no way of knowing what such a machine would be capable of or what its motives would be.

    It stands to reason that if we could flawlessly program motives in advance that matched our own, and kept this extremely intelligent machine locked up in a controlled environment, we would be safe, right? Not necessarily; unfortunately, nothing is ever that simple. Such a machine would have ways to manipulate us.

POTENTIAL SOLUTIONS TO THESE PROBLEMS

    This is a subject I will cover in more depth in part 2 of this series. This post was mainly written to give you some food for thought: there are many potential dangers and many questions without easy answers at this time. In the next post, I will go deeper into potential solutions to these problems, along with safety precautions that could be put in place. And it is still very likely that we will be able to solve most, if not all, of these issues and that our future will be bright.
