Almost a month ago, while talking to @BesseB, I was challenged to do this. So, here we go...
What is "The Singularity"?
There will be a time in the near future, within this century, when computers will become much, much smarter than humans. Most specialists agree it will come. Nobody knows what it will be like. When it does come, everything that everyone was planning to do the next day will change!
It may not come as suddenly as overnight, though. It may even take a few years.
And what is "basiux"?
A group of people who believe the explosion will probably happen basically overnight. And it will be an awesome event for everyone. Something just a few of us will be hoping for and everyone will be able to enjoy. Most likely, nobody will know exactly when to expect it.
All of that is possible because of a specific topic within computing: Artificial Intelligence. Or AI for short.
But what is Intelligence?
One way to define it is "the capacity of predicting the future". In every aspect. To be able to hit a baseball with a bat, you have to see the ball, recognise its path in the air and predict where it will be. Then you have to move your muscles in a very complex way to hold the bat and bring it to the precise point of collision. All that within just a few milliseconds. That's a huge sign of intelligence right there, one that almost no other living being, or machine, can replicate today. Well, even a lot of humans can't manage it, but almost any of us has the capacity for it.
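To make "predicting the future" concrete, here's a minimal sketch of the kind of prediction a batter's brain does implicitly: given a ball's current position and velocity (the numbers below are made up for illustration), simple projectile motion tells you where it will be a moment later. This ignores air resistance, spin, and everything else a real brain handles.

```python
G = 9.81  # gravity, in m/s^2

def predict(x0, y0, vx, vy, t):
    # Given where the ball is now and how fast it's moving,
    # predict where it will be t seconds from now.
    x = x0 + vx * t
    y = y0 + vy * t - 0.5 * G * t * t
    return x, y

# Where will the ball be half a second from now?
# (starts 1 m high, moving 12 m/s forward and 4 m/s upward)
print(predict(0.0, 1.0, 12.0, 4.0, 0.5))  # about 6 m ahead, 1.77 m high
```

The point isn't the physics; it's that "intelligence" here means turning observations of the recent past into a usable guess about the near future.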
Maybe if we got an elephant's brain inside an android (a theoretical machine that replicates the human body, a robot that doesn't exist yet), the elephant could do a better job than us. Maybe. Their brain might have a higher capacity than ours, but if that's true, their body probably limits their intelligence growth. It could also be that they're missing a fundamental growth component in their being. Maybe their cortex, the outer layer of the brain associated with higher cognition (and the part degraded by Alzheimer's, when people get old and stop being themselves), is just wired differently. Just maybe.
AI is intelligence created by humans (that's actually what "artificial" means). Email spam filters are a good example. They are software applications (groups of algorithms) that can anticipate whether the email user would classify a message as something they don't want to read. Spam.
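A toy version of such a filter fits in a few lines. This is a minimal naive Bayes sketch over a tiny hand-made training set (the messages are invented for illustration; real filters are far more sophisticated), showing how past classifications predict a future one:

```python
from collections import Counter
import math

# Tiny hand-made training set (hypothetical messages).
spam = ["win money now", "cheap pills win big", "free money offer"]
ham = ["meeting moved to monday", "lunch later today", "see the report attached"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts, total):
    # Laplace smoothing (+1) so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def is_spam(message):
    s = log_likelihood(message, spam_counts, sum(spam_counts.values()))
    h = log_likelihood(message, ham_counts, sum(ham_counts.values()))
    return s > h

print(is_spam("win free money"))         # looks like past spam
print(is_spam("see you at the meeting")) # looks like past normal mail
```

Notice it never "understands" the email; it just predicts, from recorded experience, which pile you'd put it in.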
How do we, people of the world, work on developing AI today?
Mostly we create programs able to do very tiny and specific tasks, compared to humans. Programmers have to write tons of lines of code, telling a dumb processor exactly what it needs to do to accomplish such tasks.
Most, if not all, of our top-notch AI programs evolve a lot like we do: by repetition. To learn how to walk, we try to stand up and fall down. Many times. One day, we can start walking. The only way we know of predicting the future (being intelligent) is through analysing the past, so we have to record experiences and evaluate them.
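That loop of guess, fail, adjust, repeat can be sketched very simply. Here's a minimal illustration (the rule y = 2x + 1 and all numbers are made up, not any particular AI system): the program replays recorded experiences over and over, shrinking its error each pass, until its predictions match what actually happened.

```python
# Recorded past experiences: inputs and observed outcomes (from y = 2x + 1).
data = [(0, 1), (1, 3), (2, 5), (3, 7)]

w, b = 0.0, 0.0  # initial guess: predict 0 for everything (it will "fall down")
rate = 0.05      # how strongly each mistake adjusts the guess

# Repetition: replay the same experiences many times.
for _ in range(2000):
    for x, y in data:
        error = (w * x + b) - y  # how wrong the current prediction is
        w -= rate * error * x    # nudge in the direction that reduces the error
        b -= rate * error

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

Nobody told the program the rule; it recovered it purely by repeated trial and error on recorded experience, which is the same shape as a baby learning to walk.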
Why even bother creating such robots?
Are you kidding?! Even those "dumb" pieces of software we have made give us huge benefits! Think about some of the tremendous technological milestones. The wheel. The sewer system...
We can't move forward as a global society, as the human race, without serious advancements in technology. Technology is the practical result of all our thoughts on how we can improve the quality of life for every single one of us, and the only thing able to get us closer to that goal.
And how does basiux envision the Singularity being attained?
This can only be discussed philosophically. And that gets really subjective. It's all a matter of faith, funnily enough. At this point, there is just no way to know how. Nobody in their right mind can even assert it will happen. It may never happen. All we can do is keep researching and hoping. But from what we know today, there is an immense chance it will happen. So in what direction should we go?
Extrapolating, it's easy to see it's just a matter of keeping on evolving the algorithms, the whole software and the hardware, until eventually we end up creating something that could resemble a Super Intelligence. That's a very dangerous way. It's much more likely we end up creating a less intelligent thing, one we can control. Because how can something be so much more intelligent than us and still be under our control? It can't. At that point, it's just a massive weapon that can easily escape our control or misinterpret any of our commands, and boom. Game over.
Just like anything else we as a species, or we as individuals, learn to do or to build, we need many prototypes and lots of trial and error. Except, for this, we may not get a second chance. Picture an atomic bomb able to destroy the world. This wouldn't be as energetic, the world wouldn't blow up... But we could be biologically exterminated. Unfortunately there are no films depicting this properly. None.
So, another way we could go is not trying to control it. Just make it grow. Engineer the seed, plant it, watch carefully and closely. Make a mistake, and it will die. Have a kill switch ready to be activated before it grows too big. Make it grow fast and by itself, though, quickly decreasing the need for mentoring and monitoring, just enough to get it on its own feet. Metaphorical feet, that is; it will all be software.
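The "watch closely, kill switch ready" idea is basically a watchdog loop. Here's a purely illustrative toy simulation (the `grow_seed` step, the growth measure, and the limit are all hypothetical stand-ins, nothing like a real AI): let the thing grow by itself, monitor it every step, and cut it off before it passes the threshold.

```python
import random

GROWTH_LIMIT = 10_000  # hypothetical threshold for "too big"

def grow_seed(state):
    # Stand-in for one self-improvement step: the seed grows on its own,
    # faster as it gets bigger (it doubles in size each step).
    state.extend(random.random() for _ in range(len(state) or 1))
    return state

seed, alive = [0.0], True
while alive:
    grow_seed(seed)                # let it grow fast and by itself
    if len(seed) > GROWTH_LIMIT:   # watch carefully and closely
        alive = False              # the kill switch
```

The hard part, of course, is everything this toy hides: deciding what "too big" means, and being sure the monitoring happens faster than the growing.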
Wait, what? How are those two ways any different?!
The way we're doing it today is the first one. Indeed, they look very alike. We are trying to engineer a seed and teach it to grow. But if the tree starts to seek some sunlight, we cut the branch off and teach it to "grow straight". We're trying, as hard as possible, to keep every single thing under our control, so we can do whatever we think is important. The financial aspect is just one side of it.
It doesn't matter. In practice, there's only one way the singularity won't happen this century: if we get extinct first, for any other reason. Again, it's so unlikely the singularity would represent any danger to us that, right now, I'm not even sure why I said it could do any harm. Maybe that's part of the discussion and research that needs to be done to verify this not-yet-a-theory idea.
Make no mistake, we need to make this happen as fast as possible. But if we can't, well, bummer. It was already expected!