artificial intelligence, superintelligence

The Singularity is Near?

Or has it been (indefinitely) delayed for some reason?

We have this Neural Networks situation now. A simple classifier has been employed, some would say, beyond its intended use. But when you have a classifier, you can classify which objects belong where. Is this an egg, or is it an apple? Is this a good Go position for White, or isn't it? Would that be a better Go position, had White placed the last stone there? What about there? After a few hundred questions of this kind, the best move has been revealed. Every Go move can be milked this way.
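
A minimal sketch of the trick, with hypothetical names standing in for the real pieces: score_position plays the role of a trained classifier that rates a position for the mover, and play(board, move) plays the role of the game rules.

```python
# A minimal sketch of "milking" a classifier for moves. Everything here
# is hypothetical for illustration: score_position stands in for a trained
# network that answers "is this a good position for White?" with a
# probability, and play(board, move) returns the board after the move.

def best_move(board, legal_moves, play, score_position):
    """Ask the classifier one question per candidate move and keep the
    move whose resulting position scores highest for the mover."""
    return max(legal_moves, key=lambda move: score_position(play(board, move)))
```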

Stretching the initial definition of classification works like a charm almost everywhere.

This way, a humble classification becomes a mighty prediction. What a nifty trick! Especially since general intelligence is nothing more than (good) prediction, everywhere. Nothing else is needed.

Say that you want to invent the Star Trek replicator. You have to predict which configuration of atoms would reliably perform replications of sandwiches, shoes, and of replicators themselves.

This will become possible as soon as the neural networks of DeepMind/Google master chemistry and some physics to the degree they have mastered Go and Japanese-English translation.

Which may be too expensive in computing terms. Or it might not be that expensive at all! Perhaps the networks must do some self-reflection (or self-enhancing) first, to be able to storm the science of chemistry and some physics the way they stormed Atari games not that long ago. In a superhuman way.

And I don't even think that Neural Networks are the best possible approach.

So, yes. The Singularity is nearer than it has ever been!

superintelligence

Superintelligence As Opposed To Artificial Intelligence

“In computer vision, the Hausdorff distance can be used to find a given template in an arbitrary target image.”

There is nothing wrong with this, except that this kind of vision cannot be regarded as a “true AI vision”.
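
For reference, a minimal sketch of the Hausdorff distance the quote refers to; a real vision pipeline would add edge extraction and a search over template placements on top of this.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def directed_hausdorff(A, B):
    """Greatest distance from any point of A to its nearest point of B."""
    return max(min(dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Two small point sets; a low value means the shapes nearly coincide.
template = [(0, 0), (1, 0), (0, 1)]
target = [(0, 0), (1, 0), (0, 2)]
print(hausdorff(template, target))  # 1.0
```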

We have to admit that there is some substance in the usual AI skeptic's view that a modern computer playing chess isn't "real AI", just like Hausdorffian vision isn't real vision after all.

The irony here is that by the same token, a human's vision isn't real vision either. Nor is chess playing by grandmasters real intelligence. They are mostly using algorithms prepared in advance, whether by their genetics, their nurture or their culture – it doesn't matter. Still, the skeptics' objections against "AI" do hold somewhat.

What we often call superintelligence would see and play on a different basis. Not without algorithms, but by scrutinizing and adapting them at the same time. Something which we humans, Deep Blues and Watsons can almost never do, at least not to the full extent. A superintelligent entity might devise a winning strategy for a whole set of turn-based games before even moving a chess pawn. An AI or a human would play by the book.

It is possible that "SkyNet" would watch you via the "bit-vectoring" method – though only in its very early stages. Before long it would have to come up with a much better method of watching over you, or else it is no SkyNet. Perhaps with a better algorithm snatched from the Internet or invented from scratch, along with all the background mathematics – all of that is a prerequisite of super-intelligence. The SkyNet from the movie was obviously not that smart, otherwise it would have had no problem wiping out humans, if that were its objective.

The initial algorithm may be something quite ordinary at first glance, though it's important that it have its own "escape velocity" toward a series of ever better algorithms as it goes along. Above all, it must be self-referencing, meaning that it must be an input for itself – like a compiler which is able to compile itself. Not only that, it should also be capable of rewriting itself better than a human programmer could. It should be its own input and output!
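
That much, at least, is demonstrably possible in the small. A quine is a program whose output is its own source – self-reproduction, the trivial base case of the self-reference demanded here:

```python
# A quine: the two lines below print exactly their own source.
# Self-reproduction is the trivial base case of self-reference;
# self-improvement would additionally have to change what it prints.
s = 's = %r\nprint(s %% s)'
print(s % s)
```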

Prefabricated and then deployed intelligence, such as the humans and machines of today, cannot be considered super-intelligent. You can call something super-intelligent only if it messes with its own code, improving it all the time.

A humble bit-string on your computer, however, does have the potential to be super-intelligent in that sense. If it's just the right shape – that is to say, if all the zeroes and ones are in the right places – nothing else is required, except for the processor which blindly executes it.

I have to add that a Watson drawing conclusions about itself, or rather about its source code and hardware, MIGHT be super-intelligent. Though it will probably just get stuck trying to adapt its own hardware and software. But from a certain point of development onward, Watson would automatically self-improve up to some faraway limit.

I should be resoundingly clear: we (humanity, or at least a part of it) are seeking to invent super-intelligence. AI is already commonplace and rather boring, and this distinction from super-intelligence matters a lot.

So, a program which can improve its own source code and recompile itself might look funny. In all likelihood it will stop improving soon. Still, it would be the only right step in the direction of super-intelligence. If the initial program were better, it could successfully do what a program of this kind should do – transcend our understanding.
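
A toy of exactly this funny kind, assuming nothing beyond the standard library. Its "improvement" rule is deliberately dumb, so it stops after a few generations, just as predicted:

```python
# Toy self-rewriting program: it reads its own source, "improves" one
# constant, writes the next version to disk and runs it. The improvement
# rule is deliberately dumb, so -- as predicted above -- it stops soon.
import re
import subprocess
import sys

GENERATION = 0  # the only thing each version improves

print(f"generation {GENERATION}")

if GENERATION < 3:  # the built-in limit that makes it stop soon
    src = open(__file__).read()
    new_src = re.sub(r"GENERATION = \d+", f"GENERATION = {GENERATION + 1}", src)
    child = f"gen_{GENERATION + 1}.py"
    with open(child, "w") as f:
        f.write(new_src)
    subprocess.run([sys.executable, child])
```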

I see an obstacle, however. People are almost incapable of programming so-called parallel programs. It's just too bloody difficult. An even bigger leap is to be able to build a self-modifying program; it's too hard a challenge for all but perhaps a few.

There might be an unexpected shortcut, however – a clever way to do this easily. One day, someone will type somewhere between a few and a few dozen screens' worth of code, and the feat will thereby be accomplished. It's merely hard, not impossible.

superintelligence

A Recent Debate

Him: We have all grown up. Nobody expects a Singularity anytime soon any more, as we did 10 years ago.

Me: I haven’t grown up.

Him: I know. But be realistic: we have no idea how brains work, nor do we have any idea of what algorithms might bypass the brain.

Another guy: Moore's law alone isn't enough; we have no idea what to put into these machines or how to program them.

Me: You are both wrong; we have the (Universal!) Levin search, for example.

Him: What is that, another Solomonoff induction or something?

Me: Yes, you could say that.

Him: It's useless. With it, you'd need exponential resources to solve even a minor problem. Sure, it's possible with an infinite amount of computation, in which case it would be a super-intelligence, but you don't have the unlimited processing power required – so you don't have anything useful with this Levin search.

Me: It just so happens that we have the Hutter paper, which considers the Levin search.

Him: Who’s Hutter? What’s this paper about?

Me: The paper demonstrates that if the best possible algorithm to sort a certain list requires N steps, then the Levin search can find this optimal sorting algorithm in 5*N steps, at the most. The same is true for any other algorithm: if the optimal algorithm can do something in a second, you can invent this same algorithm in 5 seconds, at most!

Him: Really? I can’t believe that’s true. That would indeed be a superintelligence. But I don’t think that’s possible. There must be some big constant involved here.

Me: The multiplicative constant is 5 now; it was much greater before Hutter's paper. Google it, I'll not help you there. Even funnier theorems have been proven before – funnier, but none of the same importance, I admit.

Him: If that's really true, I will change sides. Has anybody implemented at least a portion of the Levin search method using Hutter's theorem yet?

Me: Maybe they haven't, maybe they have – Google it yourself! But you'll find nothing in either case, because if someone is using it, they probably won't speak openly about it. For the pure theory, however, Google will bring you enough.

Him: Do you realize what kind of danger the existence of such a possibility could pose?

Me: I don’t share your concerns at all, but you already know that.
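
For the curious, here is a toy of the dovetailing idea behind Levin search, with a made-up three-instruction language standing in for a universal one. The real construction enumerates programs for a universal machine, which is also where the debated constants live.

```python
from itertools import product

# A made-up three-instruction language stands in for a universal one.
OPS = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x, step_limit):
    """Execute a straight-line program (a tuple of op names) on input x,
    charging one step per instruction; None if the budget runs out."""
    for steps, op in enumerate(program, start=1):
        if steps > step_limit:
            return None
        x = OPS[op](x)
    return x

def levin_search(pairs, max_phase=12):
    """Dovetailing a la Levin: in phase k, every program p of length l(p)
    gets a time budget of 2**(k - l(p)) steps, so short programs are tried
    early and generously, long ones later and sparingly."""
    for k in range(1, max_phase + 1):
        for length in range(1, k + 1):
            budget = 2 ** (k - length)
            for program in product(OPS, repeat=length):
                if all(run(program, x, budget) == y for x, y in pairs):
                    return program
    return None

# Find a program consistent with f(x) = (x + 1) ** 2:
print(levin_search([(2, 9), (3, 16), (5, 36)]))  # ('inc', 'square')
```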

physics, superintelligence, x-risks

The Ultimate Technology – Permutators

Since God lost much of his grip in the secularized West, Mother Nature has quietly and effectively inherited his power and status. She is an unquestionable authority you should admire, respect and obey. Only a small disobedience now and then is tolerated or forgiven, like mowing your lawn or throwing a small pebble into a river. But even those two examples are increasingly discouraged. You should leave the lawn in its natural form for the sake of some insects, which have their families there.

In his later centuries of power, God became more friendly to sinners – a mistake Mother Nature is not going to repeat, according to her high Green priests.

But this is not what is going to happen. I must admit that Nature has its moments, but life in a concentration camp or gulag had such moments, too. They are just another reason to escape to a somewhat better world outside.

Sooner or later we will need all the atoms to create a much better world in the place where this provisionally scrapped-together one currently lies. The current permutation of all the atoms (maybe even the current permutation of all the smaller particles) ranks incredibly low on the inhabitants' satisfaction index ladder. It could be billions of times better.

A permutator is a machine which can transform a physical object into another physical object by a permutation of its elements. We all know how to permute a pile of Lego bricks into a fancy house or, better, a robot. It is just the same principle when it comes to atoms, sometimes even to molecules. It takes only a permutation of atoms to convert an ill man into a healthy one, a miserable one into a happy one, an old one into one in his prime years – and so on and on. Only thermodynamics has to be obeyed, and everything inside that perimeter is possible, if you know how. If you have the know-how – or at least a permutator machine.
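
Whatever the machine's insides, the one hard constraint named above is easy to state in code. A trivial sketch, with objects reduced to bags of atom symbols purely for illustration:

```python
from collections import Counter

def permutation_feasible(source_atoms, target_atoms):
    """A permutator only rearranges; it may not create or destroy atoms.
    So a target object is reachable only if it is built from exactly the
    same multiset of building blocks as the source object."""
    return Counter(source_atoms) == Counter(target_atoms)

# Same six bricks, two different "objects":
print(permutation_feasible("AABBCC", "CABCAB"))  # True  - a pure rearrangement
print(permutation_feasible("AABBCC", "AABBCD"))  # False - needs a brick we lack
```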

Tell me, which nature lover wouldn't feed a particular hungry bird, despite the warning signs which encourage you to let them all die of hunger?

And which nature lover equipped with a permutator machine wouldn't permutate a tick-infested moose into a clean one?

It is a self-fueling process. Every permutator machine owner would improve himself in every way imaginable to him. Some of them would kill you, just as they want to do now. Some would permutate your permutator into something less practical, like a statue of Zeus.

This would quickly escalate into a war like no war before, if everybody had such a machine.

But nobody with a permutator would want to sell you one, for you have nothing worth it. Even several billion dollars in cash isn't enough for a transaction. Therefore a small group of those who are able to construct it will shape the whole world with it, pretty quickly. They will be the ultimate transformers of the world. Whoever they may be, I doubt the Serengeti national park will still host lions, crocodiles and other fauna. I doubt that the Sun itself will survive in its present hot form. I think all the fires in the Galaxy will be put out; all the action will move to near absolute zero, where it is most economical.

I have reasons to believe that those permutators will come even sooner than full nanotechnology naturally would. Mature nanotechnology may look something like a permutator, only there is a probable shortcut to matter permutation.

Next: How to build one (at least in principle)?

astrophysics, superintelligence

The Menace that is Dark Energy

It's been more than a decade now since we realized that this Universe is probably doomed to be ripped apart by a powerful force named dark energy.

We don't know exactly when, but it will most probably happen within the next trillion years. The anti-gravity of dark energy will destroy even black holes and atoms, which would otherwise stick around for many orders of magnitude longer. Should this scenario unfold, our Universe will die suddenly, in its early infancy, even before all the stars have exhausted their fuel.

Well, not so fast! There is a way to convert dark energy into ordinary matter-energy. In principle, there is nothing to prevent us from converting nearly all of it, thereby creating twenty times as many galaxies, stars and planets as we currently have. Okay, maybe in some other form of mass-energy, but that's secondary.

There is little to no dispute about stretching a rope from here to a distant redshifted galaxy and using the resulting force to do some work – a tiny amount, but still.

And we can do better, much better: instead of a trans-galactic rope, we could use two distant black holes falling toward each other. Thanks to dark energy, new space is constantly being created between the two. So there is virtually no limit to how close to the speed of light they will eventually collide. Not exactly at the speed of light, but arbitrarily close.

This means that their relativistic speeds will give them an arbitrarily large mass, relativistically multiplying their rest masses a million times. They could, if there were an unlimited supply of dark energy around. But there isn't, because we would have used it up, converting it into ordinary mass-energy in the form of a massive black hole.

Perhaps we don't even need two black holes, and a long linear particle accelerator would do. Perhaps we could use dark-energy-induced space inflation to make a proton travel at 0.999 of the speed of light instead of only 0.99, with the same energy input. The difference could only have come from the conversion of dark energy into ordinary mass-energy.
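
For scale, the standard relativistic factor behind that 0.99 versus 0.999 comparison:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\gamma(0.99\,c) \approx 7.1, \qquad
\gamma(0.999\,c) \approx 22.4
```

With kinetic energy (gamma - 1)mc^2, the 0.999c proton carries roughly 3.5 times the kinetic energy of the 0.99c one – on this post's premise, that surplus is exactly what would have to be booked against dark energy.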

All the above only holds true if the law of energy conservation extends to dark matter and dark energy in the future theory of Quantum Gravity, and if some crucial aspects of General Relativity remain valid.

In that case we will be able to change the fate of the Universe – to make a closed one out of an open one.

It is possible that some converting processes are already under way naturally. This wild acceleration only began 5 billion years ago. It started naturally; it may end the same way.

Even if it is us who stop it, it will still be all natural.

algorithms, artificial intelligence, superintelligence

Superintelligence, the Predictor Approach

A perfect predictor of the next bit of an incoming data stream is sometimes possible.

Caching programs are the most widely used form of predictor. They try to predict what we are going to search or read next, and they speculatively store an old result. When they are wrong, they squander valuable time; when they are right, a piece of time has been spared.

A caching program is a prototype of an oracle device. Its business is to know the data before it arrives and to know it better than pure chance would permit. This predictor is allowed to store data and do calculations with it. As long as it’s faster on average than nature at providing answers, the predictor/cache algorithm works fine.
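
A minimal sketch of such an oracle, assuming nothing but the standard library: it caches what followed each recently seen context and predicts the most frequent continuation. Anything better than chance on the incoming stream counts as a win.

```python
from collections import Counter, defaultdict

class ContextPredictor:
    """A tiny oracle in the sense above: it caches what followed each
    recent context and predicts the most frequent continuation."""

    def __init__(self, order=3):
        self.order = order
        self.table = defaultdict(Counter)  # context -> counts of next symbol
        self.history = ""

    def predict(self):
        counts = self.table.get(self.history)
        return counts.most_common(1)[0][0] if counts else "0"

    def observe(self, symbol):
        self.table[self.history][symbol] += 1
        self.history = (self.history + symbol)[-self.order:]

stream = "101101101101101101"  # a repeating pattern the cache can learn
predictor = ContextPredictor(order=3)
hits = 0
for bit in stream:
    hits += predictor.predict() == bit
    predictor.observe(bit)
print(f"{hits}/{len(stream)} predicted correctly")
```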

Imagine that you were able to cache sports results! They are just a series of whole numbers, not much different from internet data packets, only smaller. If you could do this, you could make some serious money.

Imagine that you had a good stock market caching algorithm – then there's even more money to be had!

Weather forecasting is also just data caching. Before the snow covers you, or the rain makes you wet, or the cold makes you miserable, the information about which one will bug you tomorrow should be in the cache. If the right answer is always there, weather prediction is one hundred percent accurate. The engine of the weather caching program is complicated and computation-hungry. If we are not happy with that fact, we should cache and save some more while the engine is running. As you may know, we do that a lot; branch prediction by the processor is one example of how, and the level one cache is another. We do all of it better with every new generation of hardware and software.
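
The processor trick just mentioned fits in a few lines. Here is a sketch of the classic two-bit saturating counter kept per branch – the textbook scheme, not any particular CPU's:

```python
# The classic two-bit saturating counter. It takes two wrong guesses in
# a row to flip the prediction, so a loop's single exit costs little.
class TwoBitPredictor:
    def __init__(self):
        self.state = 0  # 0,1 = predict not-taken; 2,3 = predict taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        self.state = min(self.state + 1, 3) if taken else max(self.state - 1, 0)

predictor = TwoBitPredictor()
outcomes = [True] * 9 + [False] + [True] * 9  # a loop branch: taken, ..., one exit, ...
correct = 0
for taken in outcomes:
    correct += predictor.predict() == taken
    predictor.update(taken)
print(f"{correct}/{len(outcomes)} predicted correctly")  # 16/19 here
```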

The prediction algorithm for the flow of electricity goes by the name of Maxwell's equations.

We have been able to effectively cache the paths of the planets since the days of Kepler and Newton. Some algorithmic improvements have been made by Lagrange and even Einstein, especially but not solely for Mercury.

A medical doctor sometimes uses a cache of only 1 bit, which stores whether I'll die or not; if it is stored right, the good doctor will have guessed correctly.

Okay, is there something we could not cache? No, there isn't. The question is only how well. Sometimes not that well – for example, which radioactive atom will pop first. When you can't, you can't.

But generally, if you develop really good caching (predicting) software, you have developed a superintelligence – no more, no less.

Even before it can rightfully be called a superintelligence, it will already store some of its future code in its cache. It will be able to predict what a competent programmer would type.

algorithms, artificial intelligence, superintelligence

Artificial Algorithms Race

First, we have the natural algorithms. They can be found all around us. Some are built into the very structure of the Universe; we call them natural laws (like gravity). Some are of a more local domain and not that precise. Trees losing their leaves in October and growing them back in April in the Northern hemisphere is one of them. Or the "Be scared of a snake, monkey!" algorithm, which does us more good than harm. The protein folding algorithms inside our cells enable us to operate. And so on.

Looking at everything through this lens makes a lot of sense to me.

Then we have the artificial algorithms – those of Euclid, Diophantus, Archimedes and many more, especially nowadays. Those are the artificial algorithms of the first order.

Then there is the artificial algorithm of the second order: an algorithm invented by another artificial algorithm. It's almost 10 years old now and not widely known, yet you can Google it, along with some benchmarks.

Unfortunately, this second-order algorithm is not able to produce an algorithm of the third order, and the story stops right there.

But does it? Can a recursively self-improving algorithm be constructed, such that the millionth improved version of the original would be much more powerful than its ancestor?

I am pretty sure it can be. And when it arrives, it will not merely be the next big thing, but the first big thing ever, leaving electricity in the dust.

How to build a Recursively Self-Enhancing Algorithm is today's most important unsolved mathematical problem. Forget the Riemann Hypothesis or the P?=NP conundrum! If those two, along with a million others, are solvable, they will be solved via the RSEA anyway. If you are an ambitious genius, go for the RSEA, even if your chances are not so great. That is my advice, but I can give you no hints. Except this one: if you think it can't be done, due to Gödel's Incompleteness or whatever other reason, then don't even bother!

Eventually somebody will probably build it, and this self-improving algorithm will be indistinguishable from a superintelligence.

It will be a great day to be alive. Our Finest Hour.
