algorithms, artificial intelligence, superintelligence

Superintelligence, the Predictor Approach

A perfect predictor of the next bit of an incoming data stream is sometimes possible.

Caching programs are the most widely used form of predictor. They try to predict what we are going to search or read next, and they speculatively store an old result. When they are wrong, they squander valuable time; when they are right, a piece of time has been spared.

A caching program is a prototype of an oracle device. Its business is to know the data before it arrives, and to know it better than pure chance would permit. The predictor is allowed to store data and do calculations with it. As long as it is faster, on average, than nature at providing answers, the predictor/cache algorithm works fine.
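To make the idea concrete, here is a minimal sketch in Python, on a made-up repetitive workload; `slow_lookup` stands in for whatever expensive source the cache is trying to outrun:

```python
# A minimal sketch of a cache acting as a predictor. The cache "predicts"
# that recently requested items will be requested again, and serves the
# stored answer instead of asking nature again.

from functools import lru_cache
import time

def slow_lookup(key):
    """Stands in for nature: the slow, authoritative source of answers."""
    time.sleep(0.01)  # pretend this is expensive
    return key * key

@lru_cache(maxsize=128)  # the predictor: bet on repetition
def cached_lookup(key):
    return slow_lookup(key)

for key in [1, 2, 1, 1, 3, 2, 1]:  # a repetitive request stream
    cached_lookup(key)

# hits = pieces of time spared, misses = time squandered
print(cached_lookup.cache_info())  # CacheInfo(hits=4, misses=3, ...)
```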

Imagine that you were able to cache sports results! They are just a series of whole numbers, not much different from internet data packets, only smaller. If you could do this, you could make some serious money.

Imagine that you had a good stock market caching algorithm; then there would be even more money to be had!

Weather forecasting is also just data caching. Before the snow covers you, or the rain soaks you, or the cold makes you miserable, the information about which one will bug you tomorrow should be in the cache. If the right answer is always there, the weather prediction is one hundred percent accurate. The engine of the weather caching program is complicated and computation hungry. If we are not happy with that fact, we should cache and save some more while the engine is running. As you may know, we already do that a lot; branch prediction by the processor is one example, the level one cache is another. We do all of this better with every new generation of hardware and software.
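Branch prediction deserves a concrete picture. Here is a rough sketch of the classic two-bit saturating counter scheme that real processors build on; the workload below is invented:

```python
# A two-bit saturating counter, the textbook branch-prediction scheme.
# States 0-1 predict "not taken", states 2-3 predict "taken"; each
# outcome nudges the state one step toward what actually happened.

def prediction_accuracy(history):
    state, correct = 2, 0  # start in "weakly taken"
    for taken in history:
        if (state >= 2) == taken:
            correct += 1
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return correct / len(history)

# A loop branch: taken nine times, then not taken once, over and over.
loop_branch = ([True] * 9 + [False]) * 100
print(f"{prediction_accuracy(loop_branch):.0%}")  # 90%, far above coin-flip
```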

The prediction algorithm for the flow of electricity goes by the name of Maxwell's equations.
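For the record, that prediction engine, in its modern differential (SI) form, is just four equations:

\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\]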

We have been able to effectively cache the paths of the planets since the days of Kepler and Newton. Some algorithmic improvements were made by Lagrange and even Einstein, especially but not solely for Mercury.
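A toy version of such a planetary cache, in Python with assumed units (GM = 1, a circular orbit of radius 1), using a symplectic Euler integrator to fill a cache of future positions:

```python
# Caching a planet's path: integrate Newton's law of gravity
# (symplectic Euler, toy units with GM = 1) and store the result.

GM, dt = 1.0, 0.001
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0  # a circular orbit of radius 1

cache = []
for _ in range(10_000):
    r3 = (x * x + y * y) ** 1.5
    vx -= GM * x / r3 * dt            # kick: gravity updates velocity
    vy -= GM * y / r3 * dt
    x, y = x + vx * dt, y + vy * dt   # drift: velocity updates position
    cache.append((x, y))              # the cached prediction of the path

print(cache[-1])  # ~10 radians along the unit circle, as Newton promises
```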

A medical doctor sometimes uses a cache of only one bit, which stores whether I will die or not; if it is stored right, the good doctor guesses correctly.

Okay, is there anything we could not cache? No, there isn't; the question is only how well. Sometimes not that well, for example which radioactive atom will decay first. When you can't, you can't.

But generally, if you develop really good caching (predicting) software, you have developed a superintelligence, no more, no less.

Even before it can rightfully be called a superintelligence, it already stores some of its own future code in its cache. It is able to predict what a competent programmer would type.

algorithms, artificial intelligence, superintelligence

Artificial Algorithms Race

First, we have the natural algorithms. They can be found all around us. Some are built into the very structure of the Universe; we call them natural laws (like gravity). Some are more of a local domain and not that precise. Trees losing their leaves in October and growing them back in April in the Northern hemisphere is one of them. The "Be scared of a snake, monkey!" algorithm, which does us more good than harm, is another. The protein folding algorithms inside our cells enable us to operate. And so on.

Looking at everything through this lens makes a lot of sense to me.

Then we have the artificial algorithms: those of Euclid, Diophantus, Archimedes and many more, especially nowadays. Those are the artificial algorithms of the first order.

This is an artificial algorithm of the second order: an algorithm invented by another artificial algorithm. It is almost 10 years old now and not widely known, yet you can Google it, along with some benchmarks.

Unfortunately, this second-order algorithm is not able to produce another algorithm of the third order, and the story stops right there.

But does it? Can a recursively self-improving algorithm be constructed, such that the millionth improved version of the original would be much more powerful than its ancestor?

I am pretty sure it can be. And that it shall not be the next big thing when it arrives, but the first big thing ever, leaving electricity in the dust.
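To be clear about what I mean, here is a deliberately tiny sketch: a self-adaptive hill climber whose "program" includes its own improvement operator (the step size), which it mutates along with the solution. A toy, not a blueprint; every number in it is made up:

```python
# A toy self-improver: each candidate carries a solution AND its own
# improvement strategy (step size). Both are mutated; whichever version
# scores better replaces its ancestor. Version one million of something
# like this is the open question; version one thousand is below.

import random

def fitness(x):
    return -(x - 3.14) ** 2  # toy task: get close to 3.14

x, step = 0.0, 1.0
for _ in range(1000):
    new_step = step * random.choice((0.5, 1.0, 2.0))  # mutate the improver itself
    new_x = x + random.uniform(-new_step, new_step)   # use it to propose a change
    if fitness(new_x) > fitness(x):
        x, step = new_x, new_step  # the improved version takes over

print(round(x, 4))  # typically very close to 3.14
```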

How to build a Recursively Self Enhancing Algorithm is today's most important unsolved mathematical problem. Forget the Riemann Hypothesis or the P vs NP conundrum! If those two, along with a million others, are solvable, they will be solved via the RSEA anyway. If you are an ambitious genius, go for the RSEA, even if your chances are not great. That is my advice, but I can give you no hints. Except this one: if you think it can't be done, due to Gödel's incompleteness or whatever other reason, then don't even bother!

Eventually somebody will probably build it, and this self-improving algorithm will be indistinguishable from a superintelligence.

It will be a great day to live. Our Finest Hour.

astrophysics, geology, photosynthesis, physics

Evaporating Earth

It is usually assumed that a lot of space debris lands on our planet each day and enlarges it. Well, it does, but a lot more goes up into interplanetary space, never to come back.

Matter does rain down on our planet, but the evaporation is more significant; Earth is smaller every day, by about 4 to 5 thousand tons, and what comes down as meteorites is only a small percentage of what escapes.

The vast majority of the escaping mass is hydrogen, leaving at a rate of about 50 kilograms per second. Because of their small mass, and thanks to the thermodynamics of gases, many hydrogen molecules reach escape velocity.

Unfortunately, this also means that almost half a ton of water is lost every second: 50 kilograms of hydrogen is the hydrogen content of about 450 kilograms of water. Since the days of the dinosaurs, this amounts to about a million cubic kilometers of water. Maybe that doesn't sound like much, but otherwise the ocean would be a few meters higher. The Earth is slowly drying.
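The numbers check out, at least internally. A quick back-of-the-envelope in Python, taking the post's 50 kg/s hydrogen figure as a given and 66 million years since the dinosaurs:

```python
# Back-of-the-envelope check of the post's figures.
# Assumption (the post's number): 50 kg of hydrogen escapes per second.

H_RATE = 50.0                       # kg of hydrogen per second
WATER_PER_H = 18.0 / 2.0            # 18 g of water contains 2 g of hydrogen
SECONDS_PER_YEAR = 3.156e7
YEARS_SINCE_DINOSAURS = 66e6
OCEAN_AREA_KM2 = 3.6e8              # area of Earth's oceans

print(f"mass lost per day: {H_RATE * 86400 / 1000:.0f} t")  # ~4,300 t

water_kg_s = H_RATE * WATER_PER_H                            # ~450 kg/s
total_kg = water_kg_s * SECONDS_PER_YEAR * YEARS_SINCE_DINOSAURS
total_km3 = total_kg / 1e12          # 1 km^3 of water weighs 1e12 kg
print(f"water lost: {total_km3:,.0f} km^3")                  # ~940,000 km^3
print(f"sea level equivalent: {total_km3 / OCEAN_AREA_KM2 * 1000:.1f} m")  # ~2.6
```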

The main driver of this process is photosynthesis, which actually breaks water molecules apart. A fraction of the hydrogen produced escapes; the oxygen oxidizes something else and remains here.

astrophysics, mathematics, physics

The Bit String of Eden

What is the smallest cause for the biggest effect possible, I wonder?

I am not after something ordinary, like melting the ice of Antarctica with nuclear power or with some large Earth-orbiting mirrors. I am looking for something with an impact ratio at least 30 to 50 orders of magnitude bigger. Something that would be almost trivial to do, but would cause a transformation of the visible Universe into a big Garden of Eden, with Adams and Eves roaming freely around and eating from every damn tree they please.

That is the projected effect; what might be the cause of it? As trivial as possible, but sufficient?

I think it would be enough to copy a one-million-bit string into a computer connected to the Internet, and then watch the overtaking of the machine, the overtaking of the Internet, the Earth, the Solar system and so forth. To observe the conversion of the whole cosmic environment into this Super Eden for trillions of trillions.

There are some problems, however. A single wrong bit in the string could easily lead to some other effect, or to no visible effect at all, as if nothing had happened, just as almost always happens when you run an arbitrary binary file on your computer.

But there are a lot of working sub-megabit strings which would transform the Universe in the said way, had they been copied and launched as an executable. Unfortunately, they are well hidden among the 10^300000 others of the same size or smaller.
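The count is easy to verify: there are 2^1000001 - 2 bit strings of a million bits or fewer, and a base-10 logarithm turns that into decimal-digit scale, consistent with the rough 10^300000 above:

```python
# How many bit strings of length <= 1,000,000 are there?
# Their count is 2^1000001 - 2; its base-10 logarithm gives the
# order of magnitude.

import math

log10_count = 1_000_001 * math.log10(2)
print(f"about 10^{log10_count:,.0f} candidate strings")  # about 10^301,030
```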

I have no idea where to look for the right string, nor how to construct it properly. I only know that it would look just like the million other files on my computer, until it was executed as a console or Windows application or something like that.

Some big prime numbers we have calculated recently far exceed the length of this Eden String. Some much smaller natural number is isomorphic to the Eden-causing script. We just don't know which one.

At least not yet.

P.S.

Try to look at it this way. A bit string shorter than 1000 bits, typed into your computer, would open a forsaken multi-million dollar bank account for you.

And some bit string less than 1000000 bits long would transform the Cosmos.

chess, logic, mathematics, physics

Logical troubles (likely) ahead

The rules of chess are really simple. Not just simple to learn but, more importantly, simple and clean enough to make us comfortably certain that there is no paradox within the game. Chess is obviously self-consistent; there is no room for contradictions.

This is what I thought, until I came upon the vertical castling thing, which is something short of a formal paradox, but disturbing nonetheless. Under the old wording of the castling rule, which demanded only that the king and the rook had not yet moved, White could promote a pawn to a rook on e8 and then "castle" vertically with the king on e1; only in 1972 did FIDE patch the rule to require that the rook stand on the same rank as the king. This is a clear case where the rules of chess break down. It may be the only occasion, but that doesn't matter. The chess axioms are formally inconsistent, or at least not specific enough to unambiguously deal with this strange case, which amounts to the same thing.

What a shocker! In an axiomatic system this simple, nobody would expect two contradictory statements to be hiding; there just isn't enough space! But they are there. One sub-rule directly opposes another. What may be common in law shouldn't occur here. It shouldn't occur in law either, but that is even less realistic to expect.

There have been many cases of an unstoppable force meeting an immovable object in the scientific theories of the past. Such a case doesn't exist in reality, for reality MUST be consistent. And if a theory harbors even one paradox, it is useless and wrong.

The most famous example is Frege's set theory, destroyed by the much better known Russell's Paradox. After a few attempts to avoid it, we now believe that we have a consistent mathematics based on the so-called ZF axiomatic system. But we can't be sure, and that is the point: we can't be sure in such complex cases!

Now, given the complexity of modern physics, how probable is it that there is no paradox inside, say, General Relativity?

Physicists are mostly quite sure that General Relativity is well established and in accordance with the measurements to the 14th decimal place and so on.

As the chess masters ignore any chess inconsistency and keep playing, so do physicists. The inconsistency between GR and QM is just a fact of life for now, and a curiosity to intimidate laymen.

But I wonder what a computer chess program would do in the above situation. Would it concede as Black, or not? That depends on how it is programmed, of course, but a self-consistent solution must be provided by the programmer, regardless of the official rules. Humans may ignore the antinomy; a more solid machine wouldn't. It would behave consistently as White and as Black in this position, under the same premises in both cases. If it doesn't, it is just a bad program, playing a poorly designed game.
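To see how unforgiving a program is about this, consider a toy legality check; the square encoding is my own simplification. One boolean flag decides whether vertical castling "exists", and the program must commit to one value of it:

```python
# A toy castling-legality check. Squares are (file, rank) pairs.
# require_same_rank=False models the pre-1972 wording of the rule,
# True models the amended one. Checks for attacked or occupied
# squares are omitted for brevity.

def castling_legal(king_sq, rook_sq, king_moved, rook_moved,
                   require_same_rank):
    if king_moved or rook_moved:
        return False
    if require_same_rank and king_sq[1] != rook_sq[1]:
        return False
    return True

king = ("e", 1)           # White's king, never moved
promoted_rook = ("e", 8)  # a pawn just promoted on e8: this rook never moved

print(castling_legal(king, promoted_rook, False, False, False))  # True: old rule
print(castling_legal(king, promoted_rook, False, False, True))   # False: amended
```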

But humans have a nasty habit of just ignoring devastating information. To keep pretending all is okay, even if it isn’t.

It is not very difficult to construct a really bad paradox in modern physics, and nobody cares. Imagine a pulse of light so intense that its mass is no longer negligible. So intense that its gravity is large enough to make it a black hole moving at the speed of light: the so-called kugelblitz.
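For scale, the premise can be made quantitative. The Schwarzschild radius of a concentration of energy \(E\) is

\[
r_s = \frac{2 G E}{c^4},
\]

so a kugelblitz with a one-meter horizon needs \(E = r_s c^4 / 2G \approx 6 \times 10^{43}\) joules of light, on the order of five billion years of the Sun's entire output. Absurdly large, but nothing in the equations forbids it.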

A very cool object, nothing wrong with it, and with no paradoxes. But if we permit it to collide with a small rock, what happens? For one, it cannot just stop or slow down; it is a big ball of light travelling through vacuum. Secondly, it can't leave the rock where it was, for the rock has crossed the event horizon. Thirdly, it can't just suck in the rock and accelerate it to the speed of light, because that would demand an infinite amount of energy.

Every conceivable option is out of the question, it seems. A nice example of an unstoppable force meeting an immovable object.

We humans are messy creatures, and therefore our science is likely infested with paradoxes. We try to solve some, and we ignore or legalize the others. Giving in to paradoxes is very wrong, and not everybody accepts them; some of us deeply despise paradoxes. The machines we build will not forgive us our logical sloppiness. They will clean up our games and our sciences; every axiomatic system must be pure. And a machine has to do what a machine has to do! Or it's broken, like humans are.

It is just another opportunity for future superintelligences to be better than we are.

demographics

Only 9 billion people by 2050?

They say the population explosion will slow down. Maybe, they argue, soon after the peak population of 10 or 11 billion we will see a decline in the number of people, as we already see in many European countries.

Setting aside any kind of global catastrophe or any kind of Techno Singularity: is this a probable estimate?

I would like to argue that it isn't. For one, the world population grows by about 200,000 people every day, and this number is still getting bigger. We are putting on some pressure to limit this expansion. The problem is that, in doing so, we create an evolutionary pressure. One which actually favors those who for whatever reason don't comply, those who tend to have many children no matter what. Women who reject contraception for medical reasons, for example.

Sooner or later, some of them will (for whatever reason) pass their super-fertility on to their offspring! The toughest reproducers will become the mainstream reproducers. And they will not be stopped as easily as the others have been.

These super-reproducers will start a new exponential population growth. Among those who have ceased to reproduce, those who are still able and willing to do so will come to dominate.
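The arithmetic of that takeover is brutally simple. A toy simulation, with invented numbers: a 1% minority that doubles each generation, inside a majority that shrinks by 10% per generation:

```python
# A toy model of differential fertility. All parameters are made up:
# the majority shrinks by 10% per generation (sub-replacement),
# while a small minority doubles each generation.

majority, minority = 990_000, 10_000

for generation in range(1, 11):
    majority = int(majority * 0.9)
    minority = int(minority * 2.0)
    share = minority / (majority + minority)
    print(f"generation {generation:2d}: high-fertility share {share:5.1%}")

# By generation 10, the former 1% minority is ~97% of the population.
```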

Maybe we are already seeing this. Not only in Gaza, India and Egypt; small pockets of fast reproducers can be seen even in places like Italy.

Eager reproducers will inherit the Earth, starting a new round of unforeseen population explosion. So I find these moderate predictions quite improbable.

They are forgetting how evolution really works.

physics

Fine Tuning of the Universe

We often hear the argument that if just one of the fundamental constants were slightly, very slightly different, we wouldn't be here. There would be no stars, no planets, and therefore no us.

I doubt that we have simulated a universe with a slightly smaller gravitational constant and slightly faster light in any depth. Yes, I can imagine a sky with no stars! But I am not sure that weaker gravity causes that. Maybe some bigger stars would still be there, producing some strange chemistry, perhaps even more suitable for complex molecules. We haven't calculated that yet. Not thoroughly.

I don't know. Maybe they would, maybe they wouldn't. A weaker gravity could also mean a weaker anti-gravity (repulsion), and therefore a more stable universe than ours.

The fact is, we don't know what would happen if the fundamental constants were slightly different. Saying that only a small fraction of possible universes can harbor complex processes like life is a jump into the unknown. Not into the known.

They talk as if this Universe were full of life, as if this Galaxy alone harbored at least a million civilizations. It seems they haven't updated since Sagan. The Galaxy looks very much empty now, and the Universe not particularly friendly to life.

A small change in the set of fundamental constants would perhaps give us another universe, approximately as hostile as ours.

I see no good reason for the so-called Strong Anthropic Principle and its claim that other universes would be much less hospitable to life. We are already living in a very inhospitable Universe right now! There is little room left for the worsening of life conditions.

Based on that, I reject the SAP.

demographics, mathematics

My Big Family

From now back to 1000 A.D., I have over 1000000 (over one million) grand^N parents, where N goes from 1 to about 50.

Over one million people who lived during the last 1000 years are in that set, and I am a direct descendant of every one of them.

The majority lived between 1000 and 1400 A.D., only a handful between 1800 and now.

On average, at least 3 of my direct ancestors were born on every single day between then and now. Those of their bones which still remain are scattered all over, but together they would make a huge pile, tens of meters high.
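The rough calculation behind these claims, in Python. The generation length is my assumption, the one-million figure is the post's deliberately low estimate, and pedigree collapse (the same person filling many ancestor slots) is why the raw slot count must be cut down so drastically:

```python
# Rough arithmetic behind the ancestor claims.
# Assumptions: ~25 years per generation, 1000 years of history,
# and the post's conservative figure of one million distinct ancestors.

YEARS = 1000
YEARS_PER_GENERATION = 25
generations = YEARS // YEARS_PER_GENERATION           # ~40

slots = sum(2**n for n in range(1, generations + 1))  # ancestor "slots"
print(f"slots over {generations} generations: {slots:.2e}")  # ~2.2e12

distinct = 1_000_000  # far fewer distinct people, due to pedigree collapse
print(f"ancestors per day: {distinct / (YEARS * 365):.1f}")  # ~2.7, i.e. ~3
```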

There are cemeteries out there that have had thousands of my ancestors buried in them, for centuries already.

Had any one of those million ancestors died as a child (as the majority of their siblings did), my genotype would of course not be here.

The same story goes for you, but you already knew that, or could have known it. It’s elementary.

DISCLAIMER:

Everything above is based on some rough calculations. To know more, we should run some computer simulations; Tipler-Bostrom-Kurzweil style would be best.
