algorithms, artificial intelligence

AI Skeptics

I am talking about the goalpost-moving crowd that says, “You will never be able to make computers play chess!” When computers master chess, they simply move the goal to something else they deem unachievable, like Go, pretending meanwhile that chess was trivial all along. It’s just an algorithm, they say.

It’s the well-known hypocrisy of this particular sect, nothing new here. But can we somehow put this pile of dishonest intellectual garbage to some interesting and informative use?

Let me try! As long as we have no algorithm, we have an open and “impossible” AI problem. Then we have at least a lousy algorithm. Then a better algorithm. Then a superhuman-level algorithm. Therefore, one day we will have the so-called AGI: a billion or more algorithms stacked so niftily that the most promising one among them is triggered to solve whatever problem appears. A new algorithm will be devised when needed. In their free time, all those algorithms will be optimized and re-stacked often. Every aspect of this algorithm-hive will be the subject of a constant effort to improve.
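
In miniature, and with every name below hypothetical, the hive boils down to something like this: a pool of solvers, each asked how promising it finds the problem, and the most promising one triggered.

# A minimal, hypothetical sketch of the algorithm-hive: keep a pool of
# solvers, ask each how promising it finds the problem, trigger the best.
def hive_solve(problem, solvers):
    # solvers: a list of (estimate_promise, solve) pairs of callables
    _, solve = max(solvers, key=lambda s: s[0](problem))
    return solve(problem)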

And this we will call AGI. The above-mentioned skeptic club will call it “an increasingly large pile of self-improving algorithms for various tasks, nothing new”.

Fine.

algorithms, artificial intelligence

Doctors and nurses and such …

… are notoriously difficult to schedule. The right number of them at every moment of the day or night, each working at an acceptable pace, roughly the right number of hours per month, with various absences, holidays and much more: this is a hard planning problem par excellence.

Whether it is chess-complicated or Go-complicated depends on the particular workplace in question, but it is almost always complicated! Human schedulers are surprisingly good, just as human chess players are surprisingly good, but only up to the point when the machine with the algorithm arrives. A human is no match for the top engines.
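
To give a feel for the shape of the problem, here is a toy sketch of the usual machine approach (nothing like a real engine, and every name here is made up): encode the roster as an assignment, score its violations, and let random improvement grind the score down.

import random

# Toy roster: one person per shift. Penalize back-to-back shifts and
# uneven workloads; real constraints are far richer than these two.
def penalty(roster, staff):
    p = sum(10 for a, b in zip(roster, roster[1:]) if a == b)
    loads = [roster.count(s) for s in staff]
    return p + (max(loads) - min(loads))

def schedule(staff, n_shifts, iters=100_000):
    roster = [random.choice(staff) for _ in range(n_shifts)]
    for _ in range(iters):
        trial = roster.copy()
        trial[random.randrange(n_shifts)] = random.choice(staff)
        if penalty(trial, staff) <= penalty(roster, staff):
            roster = trial
    return roster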

There is a superhuman-level scheduler called WoShi now, partly responsible for the hiatus we had on this blog. We are field-testing it right now.

artificial intelligence, mathematics

Intermezzo Problem Solution(s)

Below is a problem I invented a month or so ago. I published a link on LessWrong and the solving began.

The first solution, four hexagons, came from LW user 9eB1. He later improved it, quote:

 right triangle that’s half the area of the equilateral triangle. You can fit 4 in the square and two in the triangle, and the score is point six

Oscar Cunningham came up with this, quote:

why not get a really good score by taking a completely gigantic shape that doesn’t cover anything

This is a clever but also somewhat trivial solution, we agreed.

So he came up with this:

[Image: Oscar Cunningham’s solution]

The thin red line is the uncovered area of the square, while the triangle can be tiled perfectly. The score is 0.57756.

Then I made a little promise, already broken, that I would publish my solution on Monday, which is about two weeks ago by now.

The whole time, I was digitally evolving solutions on a computer. Just as I was closing in on Oscar’s solution, he struck again, with a much better one, more than twice as good as the previous. This time he perfectly tiled the square and left some of the triangle uncovered. The score is 0.249…

This morning I had almost decided to quit, when I saw something unusual on the screen. At first, I thought something wasn’t right. Then I realized what the damn computer was telling me.

The computer was evolving the following picture (visually rephrased by me for clarity).

[Image: the evolved “house” solution]

A perfect score of 0, albeit perhaps a somewhat trivial solution. Of course, it is not necessary to join the triangle and the square together. You can split the covering shape, which the computer did.

EDIT: 

The problem is with Oscar’s last solution. When I put his algorithm into the machine, it showed an error in the form of a negative result.

Well, the algorithm is okay, but 1/7 and 1/4 can’t be. 1/7 and 1/12 can be, but then the result isn’t that good.

artificial intelligence, superintelligence

The Singularity is Near?

Or has it been (indefinitely) delayed for some reason?

We have this neural-networks situation now. A simple classifier has been employed, some would say, beyond its intended use. But when you have a classifier, you can classify which objects belong where. Is this an egg, or is it an apple? Is this a good Go position for White, or isn’t it? Would it be a better Go position, had White put the last stone there? What about there? After a few hundred questions of this kind, the best move is revealed. Every Go move can be milked this way.
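
In sketch form, the trick is just this; the classifier, the move generator and the play function are all assumed given:

# Turn a position classifier into a move chooser by asking it a few
# hundred what-if questions. classifier(pos) is assumed to return the
# probability that pos is good for the player who has just moved.
def best_move(position, legal_moves, play, classifier):
    return max(legal_moves(position),
               key=lambda move: classifier(play(position, move)))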

Stretching the initial definition of classification works like a charm almost everywhere.

This way, a humble classification becomes a mighty prediction. What a nifty trick! Especially because general intelligence is nothing more than (good) prediction everywhere. Nothing else is needed.

Say you want to invent the Star Trek replicator. You have to predict which configuration of atoms would reliably replicate sandwiches, shoes, and the replicators themselves.

This will be possible as soon as those neural networks of DeepMind/Google master chemistry and some physics to the degree they’ve mastered Go and Japanese-English translation.

Which may be too expensive in computing terms. And which might also not be that expensive at all! Perhaps NNs must do some self-reflection (or self-enhancement) first, to be able to storm the science of chemistry and some physics the way they stormed Atari games not that long ago. In a superhuman way.

And I don’t even think that neural networks are the best possible approach.

So, yes. The Singularity is nearer than it has ever been!

artificial intelligence

Friday the Thirteenth

Here it comes again. Many years ago I learned that if it’s the 13th day of the month, it is a little more likely to be a Friday than any other day of the week.

The Gregorian calendar is a bit biased here; there is nothing you can do about it.
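
You can check the bias in a few lines. The Gregorian calendar repeats exactly every 400 years, so counting one full cycle settles it:

from collections import Counter
from datetime import date

# Count the weekday of every 13th over one full 400-year Gregorian cycle.
counts = Counter(date(y, m, 13).strftime("%A")
                 for y in range(2001, 2401) for m in range(1, 13))
print(counts.most_common())  # Friday leads, with 688 of the 4800 thirteenths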

Later I wondered whether some data-mining algorithm would uncover this oddity. It should have.

It did. But it also showed some even more worrisome biases: the 31st, for example, is even more biased toward Wednesdays.

This data-mining story happened in more innocent times, of course. Perhaps I’ll write more about it on some Wednesday the 31st.

artificial intelligence

Instrumental AI Subgoals

It’s a consensus in certain circles that if you give a task to an advanced AI, such as calculating as many primes as possible, then you die.

Because the AI (if advanced enough) will develop so-called instrumental subgoals to achieve its main (single) goal, like converting the entire Solar System and beyond into co-processors for prime number generation.

This magic is real and it would work in reality, given an AI powerful enough.

Fortunately, however, this goes both ways. You can ask for as many primes as possible while demanding that everything outside the AI’s box remain as unspoiled as possible.

There will be far fewer primes at the 100-hour mark, when the box has been recycled into a hard disk with a lot of primes written on it – but that was the goal given to our advanced AI.

Do your job inside an arbitrary container and then just die gracefully, AI!
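
In toy form, the spirit of such a bounded goal is that the budget, not the primes, has the last word:

import time

def primes_within_budget(seconds):
    # Collect primes until the time budget runs out, then stop gracefully.
    deadline = time.monotonic() + seconds
    primes, candidate = [], 2
    while time.monotonic() < deadline:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes  # job done; no Solar System required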

 

algorithms, artificial intelligence, Uncategorized

Beyond AlphaGo

I have tremendous respect for what they did. A machine learned a game and beat the human world champion in one of the most complicated games, perhaps the most complicated game of them all, although I doubt that.

It is not nearly enough, however.

I want an algorithm, the simplest one of them all, that tells me what the next optimal move is. It always exists, in every possible situation. Perhaps many possible moves exist, each optimal, but at least one is always guaranteed. We have known this since John von Neumann.

It doesn’t matter what your opponent might do after that, or what strategy he might follow. Your move depends ONLY on the position you see, and nothing else. No game history matters either. Be it Go, chess, or any other finite game, there is always an optimal move you have to play to maximize your expectations.
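
For small games you can write that position-to-move function down directly. A sketch for a toy subtraction game (take 1, 2 or 3 objects from a pile; whoever takes the last one wins), where the optimal move really is a pure function of the position:

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile):
    # True if the player to move wins with perfect play.
    return any(not wins(pile - t) for t in (1, 2, 3) if t <= pile)

def optimal_move(pile):
    for t in (1, 2, 3):
        if t <= pile and not wins(pile - t):
            return t  # leave the opponent a losing position
    return 1          # every move loses; play anything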

Therefore, an algorithm must calculate which one to play next. Nothing less. An explanation of why this particular one is just an interesting extra. Your opponent may also see those reasons, but that will not help him much!

This is very weird and counterintuitive; it goes against all our gut feelings.

That’s why I wrote it here, anyway.

This kind of algorithm, which calculates the best move(s) solely from any given position, is the holy grail. A human may or may not understand the procedure. A human may or may not be able to follow its steps with pencil and paper. A machine as big as the Solar System may or may not be able to execute it for the game of Go or even chess. We don’t know that.

I tend to think that a modern smartphone should suffice for an optimal game of Go, but it’s just a guess of mine.

What is certain is that at least every finite turn-based game can be “hacked” this way. A simple to-do list covering every possible position always exists.

The upper bound for the winning algorithm’s size in Go is 3^361 IF clauses. It takes only about 500 steps to execute when those IF clauses are sorted by position number, since a binary search over 3^361 entries needs log2(3^361) ≈ 572 comparisons.

An example line of this algorithm would be:

IF (PositionNumber == 100000000400000000000007) MoveTo C9

The algorithm’s size in bytes is prohibitively large for the observable Universe to store. So it’s useless, even if there are fewer than 1000 steps to execute it.

We can improve it, so that a typical line would be

IF (F_function(PositionNumber) == 100006007) MoveTo C9

The F_function is itself, say, a million lines long, which is not too bad, and then you can store all the IFs inside … well, a much smaller space.
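
In toy form, the shape of the improved algorithm is a compressed lookup; the F_function here and the single book entry are hypothetical stand-ins:

# F_function stands in for the million-line compressor: any deterministic
# mapping from raw position numbers to much smaller keys.
def F_function(position_number):
    return position_number % 1_000_000_007

BOOK = {F_function(100000000400000000000007): "C9"}

def next_move(position_number):
    return BOOK[F_function(position_number)]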

How far this optimization can go, nobody knows. My wild guess is: fewer than one million instructions in a Python program smaller than one gigabyte.

Maybe much less. Several kilobytes of code, and an algorithm of never more than a thousand steps to always win (or at least not lose) a Go game – that would really impress me. A very opaque neural-network machine is just a very good step in the right direction.

BTW ..

IF (Game==Chess) & (Position==OpeningPosition) Move your right knight to f3!

I can’t say I can prove this to you, but I came close. We currently have about 120 million such pieces of advice. Not enough on one hand, and not reduced enough on the other. But good enough to cheat successfully on http://www.chess.com.

algorithms, artificial intelligence, evolution

Is P ~= NP?

At least sometimes it is. For example, when we succeed in constructing an evolutionary algorithm which approximately solves a particular NP-hard problem and stumbles upon an optimal solution, we have built a royal shortcut between those two worlds, the NP and the P. However fragile and sporadic the bridge was, it lasted long enough for us to get the solution we wanted, even if the solution wasn’t perfect. We were after an approximation and we found a better one than we could have hoped for using the brute-force approach.

Imagine that we were able to tackle every NP problem with this strategy: first translating it into some evolutionary process yielding ever better results, then waiting until an acceptably good solution occurs “naturally” in that process.
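
The skeleton of such a process is almost embarrassingly simple. Here is a minimal sketch for spreading n points in the unit square, the standard equivalent of packing n equal circles:

import math
import random

# Mutate one point at a time; keep the change only if the minimum pairwise
# distance (and hence the achievable circle size) improves.
def fitness(pts):
    return min(math.dist(a, b) for i, a in enumerate(pts) for b in pts[i + 1:])

def evolve(n=10, generations=50_000, step=0.03):
    pts = [(random.random(), random.random()) for _ in range(n)]
    best = fitness(pts)
    for _ in range(generations):
        trial = pts.copy()
        i = random.randrange(n)
        x, y = trial[i]
        trial[i] = (min(1.0, max(0.0, x + random.uniform(-step, step))),
                    min(1.0, max(0.0, y + random.uniform(-step, step))))
        f = fitness(trial)
        if f > best:
            pts, best = trial, f
    return pts, best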

Packing problems are famous and most probably NP-hard. We have managed to install some of them inside an emulation of evolution and got a lot of results. Some are world records, some have since been surpassed by others, many have just been fine-tuned by humans, many are waiting to be published. Unfortunately, we are so CPU-hungry that this low-priority process is all but dead at the moment. Fortunately, we use all the computing we get for other, more practical evolutions, like scheduling – another NP-hard problem that we routinely leave to evolution.

The greatest achievement of this stupid evolutionary algorithm is that it turns out to be very innovative. Thus not stupid at all, but actually very intelligent by any sensible definition! The biggest irony here is that some humans grab the original, unexpected, evolved idea and then fine-tune it, when it should be the other way around! The innovative, brilliant human should invent a new solution; the computer should just polish it.

The traditional roles have changed here.

Here is an original computer creation. Almost all circles are violet, which means that they don’t even touch each other. What utter sloppiness! Still, it’s the best known (as of July 17, 2013) packing of 249 circles inside a square.

Sooner or later a human will fine-tune and publish it under his or her name. It’s okay; the Internet will preserve the whole history.

algorithms, artificial intelligence, superintelligence

Superintelligence, the Predictor Approach

A perfect predictor of the next bit of an incoming data stream is sometimes possible.

Caching programs are the most widely used form of predictor. They try to predict what we are going to search or read next, and they speculatively store an old result. When they are wrong, they squander valuable time; when they are right, a piece of time has been spared.

A caching program is a prototype of an oracle device. Its business is to know the data before it arrives, and to know it better than pure chance would permit. This predictor is allowed to store data and do calculations with it. As long as it is faster on average than nature at providing answers, the predictor/cache algorithm works fine.
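
A minimal sketch of such a predictor, assuming the stream is just a string of bits: remember what followed each recent context and predict the majority answer.

from collections import Counter, defaultdict

# Order-k context predictor: count which bit followed each k-bit context
# and predict the majority next time that context recurs.
class BitPredictor:
    def __init__(self, k=3):
        self.k = k
        self.counts = defaultdict(Counter)
        self.context = ""

    def predict(self):
        seen = self.counts[self.context]
        return seen.most_common(1)[0][0] if seen else "0"

    def observe(self, bit):
        self.counts[self.context][bit] += 1
        self.context = (self.context + bit)[-self.k:]

On any stream with repeating structure it beats coin-flipping; on a stream of radioactive decays it falls back to pure chance, exactly as it should.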

Imagine that you were able to cache sports results! They are just a series of whole numbers, not much different from internet data packets, only smaller. If you could do this, you could make some serious money.

Imagine that you had a good stock-market caching algorithm; then there’s even more money to be had!

Weather forecasting is also just data caching. Before the snow covers you, or the rain makes you wet, or the cold makes you miserable, the info about which one will bug you tomorrow should be in the cache. If the right one is always there, the weather prediction is one hundred percent accurate. The engine of the weather-caching program is complicated and computation-hungry. If we are not happy with this fact, we should cache and save some more while the engine is running. As you may know, we do that a lot; branch prediction by the processor is one example, and the level-one cache is another. We do all that better with every new generation of hardware and software.

The prediction algorithm for electricity goes by the name of Maxwell’s equations.

We have been able to effectively cache the paths of planets since the days of Kepler and Newton. Some algorithmic improvements have been made by Lagrange and even Einstein, especially but not solely for Mercury.

A medical doctor sometimes uses a cache of only one bit, which stores whether I’ll die or not; if it is stored right, the good doctor has guessed correctly.

Okay, is there something we could not cache? No, there isn’t; the question is only how well. Sometimes not that well, for example which radioactive atom will pop first. When you can’t, you can’t.

But generally, if you develop really good caching (predicting) software, you have developed a superintelligence, no more, no less.

Even before it can rightfully be called a superintelligence, it already stores some of its future code in its cache. It is able to predict what a competent programmer would type.
