Mind the Overlap – JobShop Problem

Usually I am all about how to solve any problem using AI. This time it’s different.

A classical NP-hard problem, Job Shop Scheduling, is here to see and here to download and to play by hand. What can I say? It’s extremely hard for computers and for people alike.

Still, every child can try it on her/his Android and possibly be as good as a top mathematician.
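
For anyone meeting the problem for the first time: a job-shop instance is tiny to state and brutal to optimize. Each job is a fixed sequence of operations, each operation occupies one machine for a given time, and the goal is the shortest possible makespan. Here is a minimal Python sketch (with a made-up toy instance, not one from the game) of how a single candidate schedule is evaluated:

# Toy job-shop instance: each job is an ordered list of (machine, duration)
# operations; every machine can run only one operation at a time.
jobs = [
    [(0, 3), (1, 2), (2, 2)],   # job 0
    [(0, 2), (2, 1), (1, 4)],   # job 1
    [(1, 4), (2, 3)],           # job 2
]

def makespan(dispatch_order, jobs):
    """Place each job's next operation in the given order; return total length."""
    machine_free = {}            # machine -> time it becomes free
    job_free = [0] * len(jobs)   # job -> time its previous operation finished
    next_op = [0] * len(jobs)    # job -> index of its next operation
    for j in dispatch_order:
        machine, duration = jobs[j][next_op[j]]
        start = max(job_free[j], machine_free.get(machine, 0))
        job_free[j] = machine_free[machine] = start + duration
        next_op[j] += 1
    return max(job_free)

# One candidate: each job listed once per operation it has. Try to beat it!
print(makespan([0, 1, 2, 0, 1, 2, 0, 1], jobs))   # 11 for this ordering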

19 thoughts on “Mind the Overlap – JobShop Problem”

  1. alpha007org says:

    You found a niche with your GA software, and a decade later, when computing power has increased more than tenfold, it would be beneficial for you to step out and invent something better. Because you invented *something*, which is more than I can say for most people, and I think you are capable of stepping out of your niche and creating something broader.

    • I’d rather say EA instead of GA. EA is more general.

      EA (Evolutionary Algorithm) is the most efficient way I see. It is possible that even RNNs (Recurrent Neural Networks) are already enough, but they are not as powerful as EA. I doubt that there is something even more powerful, except brute force of course, but we don’t have enough computing power for BF algorithms.

      Intelligence, as usually understood, is just an EA “under cover”.
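
      In its barest form, the loop I mean fits in a dozen lines. A toy Python sketch (nothing from our actual software), here just maximizing the number of ones in a bit string:

      import random

      def evolve(fitness, random_solution, mutate, pop_size=50, generations=500):
          """Minimal EA: keep the better half, refill with mutated copies."""
          pop = [random_solution() for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)            # best first
              survivors = pop[:pop_size // 2]
              pop = survivors + [mutate(random.choice(survivors))
                                 for _ in range(pop_size - len(survivors))]
          return max(pop, key=fitness)

      best = evolve(
          fitness=sum,                                        # count the ones
          random_solution=lambda: [random.randint(0, 1) for _ in range(20)],
          mutate=lambda s: [b ^ (random.random() < 0.05) for b in s],
      )
      print(best, sum(best))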

    • Thank you. This is under consideration and experimentation just now. Higher resolutions are of the utmost importance, because we have some unresolved problems. Computers, better computer algorithms (ours and everybody else’s), have failed to solve them so far. Humans have failed as well. But with this interface, some clever amateur somewhere just might solve one or two. Perhaps.

  2. alpha007org says:

    I know GA doesn’t mean the same thing as EA.

    Sure, I completely agree that Crittical is a piece of art. But even you must see its limitations.

    Now, when we have GPU-accelerated computation available, you could try to implement some kind of hybrid NN+EA. Solving complex problems is not in EA’s domain (designing a whole motor vs. some simpler motor component). I can see how a NN could “help” the EA reach an optimum solution and avoid local maxima. The NN can run on a (multicore) CPU and can spawn millions of EA instances on the GPU. The NN would penalize local-maximum solutions and hold on to the best approaches found by the EA.

    This NN wouldn’t be a Neural Network in the classical sense, where you must apply learning to get anything useful out of it. In your case the NN would “learn” from the EA’s results and work as some kind of controller & scheduler.
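
    Very roughly, this is where such a controller would sit in the loop. A Python sketch with the “NN” faked by a hand-written archive of already-found optima (the real thing would of course be a trained network; all the names here are made up):

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    class Controller:
        """Stand-in for the NN: remembers optima the EA already converged to
        and penalizes candidates that fall back into those neighborhoods."""
        def __init__(self, radius=2):
            self.archive = []
            self.radius = radius

        def penalty(self, candidate):
            return sum(1 for seen in self.archive
                       if hamming(candidate, seen) <= self.radius)

        def record(self, local_optimum):
            self.archive.append(local_optimum)

    controller = Controller()

    def adjusted_fitness(candidate, raw_fitness):
        # The EA instances on the GPU would be driven by this, not by raw_fitness alone.
        return raw_fitness(candidate) - 10 * controller.penalty(candidate)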

    A decade ago I was playing with both NN and GA and always wondered why not take the best of both worlds…

    And you would be the man I would bet on to figure it out.

    • I like your line of reasoning. (Even without the last statement.) Taking the best from both worlds is quite an inescapable conclusion for anybody smart/daring enough not to be immediately short-circuited into “but do we really know that Darwin was even right?” or “there must be something more spiritual behind everything”.

      So thousands of smart people could already be preparing to milk this cow. Or more likely chasing it to milk it as soon as possible.

      The cow, however, is very wild. And even when it is not, cowboys are often clumsy. I cannot believe how long it sometimes takes them to substitute the sigmoid and tanh functions with x=max(0,x), for example. The dropout method was also not used until quite recently.
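
      The substitution itself is a one-liner, and dropout not much more. A toy illustration (nobody’s production code):

      import math, random

      def sigmoid(x):
          return 1.0 / (1.0 + math.exp(-x))

      def relu(x):
          return max(0.0, x)                  # the whole "upgrade": x = max(0, x)

      def dropout(activations, p=0.5):
          """Training-time dropout: zero each unit with probability p,
          scale the survivors so the expected sum stays the same."""
          return [0.0 if random.random() < p else a / (1.0 - p)
                  for a in activations]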

      It took Schmidhuber himself to install some additional switch-off/on neurons.

      I am not saying that the “cowboys” are stupid, but they are a bit slow sometimes.

      But then again, I am slow too!

      Anyway. Combining the best of the two worlds would demand a third world as well. The world of automatic self-scripting. We need a program which will self-improve. Using EA and NN for sure, but that isn’t the whole story yet.

      I’d like to see a one-million-line (or shorter!) program which refactors itself constantly. Just as it has been said many times by many people in the past.

      It is not even easy to say that, let alone to do it. We (humanity) will figure it out anyway, I suspect. Not in the too distant future, I guess.

      Sometimes it’s the cleverness of some people which is amazing. Not only the slowness of the whole group, but also the cunning of many.

      It will be interesting to be alive in a future, which has already begun.

  3. alpha007org says:

    Regarding the last two paragraphs.

    It is *fun* to be alive in this time period, when there are so many innovations and working solutions that were previously considered out of reach for lack of CPU power and data. And still it’s narrow AI. Like picture recognition via NN, for example.

    It gladdens my heart when I see 99% of the pictures fed into a NN correctly described by the “machine.” But this is still a very narrow AI problem. Alas, it uses “Deep Learning” and unimaginably large datasets and computing power. But more than anything it’s just brute force – narrow “AI” picture recognition. To be honest, I myself don’t find this very impressive in the sense of working toward the goal of true AI. Sure, we can learn from these systems, they can be perfected to recognize the reflection of an Airbus A380 in a cat’s eye, but their story ends there.

    Another thing I don’t believe is that we can get to AGI with EA alone. We would need a monstrous amount of computing power and energy, and even then I am skeptical we would get the results we want.

    My view:
    We need a new kind of programming language and specialized hardware for that language.

    If you aren’t familiar with “unum”, I would suggest this YouTube video:

    It’s very informative and explains why we cannot use all of the computing power offered by supercomputers/HPC.

    Unum is just a small step.

    For true AI (I prefer the term AGI), we need to figure out how we can use ALL the computing power available to us in parallel. I am hoping we will make some progress toward that goal, even if it takes us a hundred small steps like the linked “unum” proposition.

    I cannot rule out that I am wrong, but if the last decade has shown us anything, it’s that we need a more optimal route to utilizing HPC/supercomputers, and I can’t see progress without changing/adapting/inventing new, more efficient approaches in software.

    I just realized how funny this is: if you read this post first and then the other one, it would make more sense.

    By the way: I implemented Several Unique Sort for a network-analysis project, and with Intel’s new Haswell instructions (in ASM) the service runs 25% faster / analyzes 25% more data.

    • I’ll answer you in two parts; too much content for just one.

      Unum is what I was looking for, long ago. But I gave up hope that something like that would be available any time soon. Now I am glad that I was apparently wrong about that.

      The behavior of Mr. Kahan is just funny. Almost as funny as some functions’ results in the IEEE standard. Well, those are tragic.

      This Several Unique Sort story was (and still is) an interesting one. SUS is well hidden in plain sight. You are one of the few who spotted it. I saw some logical breakdowns in people when discussing it. Kahan’s type of behavior, I would say. A logical breakdown isn’t painful for the “patient”; he’s just fine. It can be quite annoying for others, though.

    • The other thing is AGI, as you call it. I prefer “superintelligence”, some prefer “Strong AI” and so forth. Which is not a very important part of these debates.

      The question of how difficult it is, however, is very important. I mean, to build one: how hard is it?

      Hard enough, that there still isn’t one around, obviously. But is it as hard as an aeroplane in 1000 A.D. would be? Or rather as an aeroplane in 1900 A.D.?

      For those who weren’t involved deeply in one of those flying-machine projects, it was much the same situation in 1000 A.D. as it was in 1900 A.D. A prohibitively difficult, even impossible task. The “flying is for the birds” conclusion was compulsory.

      But we shall overcome all the difficulties, once again.

      There is a peculiar kind of person who can’t win a chess game against a chess master, but is able to write a computer program which would beat the hell out of every human chess player, if need be. Those people were rare in the 1990s, but at least many thousands of them are walking the Earth today. Possibly many more.

      Among those, an even more peculiar brand of person may already live today. Those who could write a program generally more intelligent than themselves. If need be. On the existing, or nearly existing, computing infrastructure.

      I am unable to solve Goldbach’s conjecture, even if I tried. I am very sure of this.
      But if a machine to do that is even possible, I just might be able to build one, given enough time and any interest whatsoever.

      Automating the math – to which Goedel would be strongly opposed, but who cares about his infinities anyway – is still an underachievement in the sense of building AGI. It just isn’t enough.

      Maybe surprisingly, something easier is enough. A perfect, or nearly perfect, predictor would be almost indistinguishable from superintelligence. NNs boldly go in that direction already. Still a long way to go, but they are marching strong.

      Could a perfect predictor be digitally evolved? Yes, it could.
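
      A toy picture of what “digitally evolved” means here, on a sequence a child could predict (just a sketch, of course; real predictors and real data are something else entirely):

      import random

      series = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]          # the thing to predict

      def error(coeffs):
          """How badly x[t] ≈ a*x[t-1] + b*x[t-2] fits the series."""
          a, b = coeffs
          return sum((a * series[t-1] + b * series[t-2] - series[t]) ** 2
                     for t in range(2, len(series)))

      pop = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(30)]
      for _ in range(300):
          pop.sort(key=error)                               # best predictors first
          pop = pop[:15] + [(a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                            for a, b in pop[:15]]
      print(min(pop, key=error))                            # drifts toward (1, 1)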

      Disclaimer: A perfect predictor isn’t always right. It’s just as optimal as possible in every single context, with all the information it has. In other words, a superintelligence isn’t infinitely intelligent either. It just does the best it can.

  4. alpha007org says:

    Regarding SUS:

    SUS is just a module in a much more complicated software package. I implemented SUS in ASM (and when you’re working in ASM you’d better do your homework), and when most (99%) of the data resides in the L2/L3 cache of a Xeon CPU, the result is ~25% faster than with other sorting algorithms. However, that doesn’t mean the whole “package” gets a 25% increase. Overall there is a 2-3% increase in packets analyzed.
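
    Stripped of the ASM and of everything Haswell-specific, the core idea (at least the part I can show here: counting the few distinct values instead of comparing elements) looks roughly like this in Python – a sketch, not the real module:

    from collections import Counter

    def several_unique_sort(data):
        """Sketch only: when the data has just a handful of distinct values,
        count them and write them back out in order; O(n + k log k) for k values."""
        counts = Counter(data)
        out = []
        for value in sorted(counts):
            out.extend([value] * counts[value])
        return out

    print(several_unique_sort([3, 1, 3, 2, 1, 1, 3, 2]))   # [1, 1, 1, 2, 2, 3, 3, 3]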

    Unum: It would have to be implemented in hardware, but currently it doesn’t look good. (Intel’s reply to implementing Unum was an analogy about boiling oceans.) But it is the best idea I’ve come across while researching floating point on the Haswell architecture (Intel Architectural Software Developer Manual), when I was looking to optimize floating-point operations and results (rounding) without a lot of penalty (CPU cycles / rounding errors) when the result is presented to the user / higher “modules”. Sadly, the current state is that FP operation results on one architecture differ from another architecture’s results (Unum would eliminate such disparities).
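
    The disparity is easy to demonstrate even on a single machine, because float addition is not associative – anything that changes the evaluation order (compiler, vector width, FMA, a parallel reduction) changes the rounding:

    print((0.1 + 0.2) + 0.3)    # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))    # 0.6
    print(1e16 + 1.0 - 1e16)    # 0.0 -- the 1.0 is rounded away entirely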

    That moment inspired me to think outside the box (the Intel Architectural Software Developer Manual) and not long after, I found Unum. (You can play with it using the Mathematica examples.)

    So my mind went further: I realized we need a new architecture and a new programming language for broader AI (SAI, AGI, FAI…). As I said, there isn’t ANY known software which can use ALL available computing power at the same time in some HPC system, let alone a Top500 supercomputer – Linpack being the exception.

    So we may very well have more than enough computing power for AGI, but we cannot capitalize on it. And sadly, when you talk about the big-supercomputer game, there hasn’t been much progress in software. You can keep only parts of the machine busy at any one time. Even weather models, which are constructed to be very parallel, use different clusters for different computations. When we figure that problem out, we will take one giant step forward. I’m personally very skeptical that we can take a hundred smaller steps and make the same progress.

    • Who has enough CPU power already, for some decent AI to operate? The Chinese, who “model the climate”, which is totally useless if you ask me? And some institutions in America, the EU and Japan which are also “tracking climate changes” or something?

      AI is still starved of CPU, I am afraid. Luckily, the algorithms that concern AI can be 10 to the power of 10 or more times better or worse. Large differences here dwarf this handicap in hardware.

      Except if those with the largest supercomputers have the fastest algorithms as well. Which I doubt they have. They are sitting ducks, as the great transatlantic liners once were for the Wrights and other “funny” hunters they didn’t see coming. Even more so today, when a historic precedent of disproportion might occur here.

      A customer thinks that the program combines some groups of employees according to their preferences. It does not. It exercises some primes, semi-primes, smooth numbers, GCD and such. Which, BTW, does the required job much better. All integers, with a lot of accumulated computing; little more than a huge look-up table.
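
      A toy flavor of it (made-up names, and not our actual encoding): give every employee a distinct prime, so a group is just the product of its members’ primes, and one gcd() call tells whether two groups share anybody.

      from math import gcd

      primes = {"Ann": 2, "Bob": 3, "Cid": 5, "Deb": 7}   # employee -> prime

      def group(*names):
          g = 1
          for n in names:
              g *= primes[n]                              # a group is a product
          return g

      morning = group("Ann", "Bob")                       # 6
      afternoon = group("Cid", "Deb")                     # 35
      late = group("Bob", "Deb")                          # 21

      print(gcd(morning, afternoon))   # 1 -> disjoint, may run in parallel
      print(gcd(morning, late))        # 3 -> they share Bob (the prime 3)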

      Stephen Wolfram came up with this “computational equivalence”. It’s possible in principle to throw integers around and play chess. I don’t know exactly how this could be done, only that it’s possible.

      Is an AI in the form of some integer manipulations also possible? Quite certainly it is.

  5. alpha007org says:

    Of course “AI is possible with some integer manipulations also”. Nobody is denying that. You misunderstood where the problem lies.

    You cannot take one supercomputer and use ALL its power at the SAME time.

    I brought up that climate model as an analogy. Mini-cluster c0 (in the fastest supercomputer) is computing timeframe t0; at the same time the next mini-cluster c1 is computing timeframe t1, c2 is computing t2, c3 is computing t3, and so on.

    If your reasoning is that an AI bootstrap is possible on any architecture – say a supercomputer with 10 to the power of 100 CPUs whose combined computing power is the same as one old dusty 386, or the same as some modern HPC – I’d like to hear your arguments, please.

    Because I’m on the opposite side. Nevertheless, I think bootstrapping AI on current (super)computers IS theoretically possible, but not very likely. That’s why my argument goes toward a radically different architecture from what we have today. Supercomputers are like your ships, but 50 of them in a line, each connected only to the one in front and the one behind. (With very limited communication options.)

    • I have, or I can easily get, 10^17 operations in the next 3 years. What can I do with them? Some people estimate that this number of operations per second (not per 3 years, which is 10^8 times slower, and which is what I have) is necessary to achieve something tremendous.
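
      The arithmetic behind that, spelled out:

      seconds_in_3_years = 3 * 365 * 24 * 3600       # about 9.5 * 10^7
      ops_per_second = 1e17 / seconds_in_3_years     # about 10^9
      print(ops_per_second)                          # ~1e9, i.e. 10^8 times short
                                                     # of 10^17 operations per second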

      Of course, great things could be accomplished with 10^17 operations per second. Still, it’s a big number, especially if you use those operations wisely.

      Computing like Unum will come eventually. But for now, we have integers to play with, and imperfect floats as well. Let’s see what we can do with those in the meantime.

  6. alpha007org says:

    Another example.

    Would animals begin to behave differently if we slowed down neuron-to-neuron connections by 10%? Probably.

    How about 10 times slower? I would say most definitely. Would we then need “that much brain tissue” to make up for the lost speed? Would brains even start “working”?

    (I know we have different “highways” in our brains; different speeds for different types of connections.)

    • > Would animals begin to behave differently if we slowed down neuron-to-neuron connections by 10%? Probably.

      Very likely a 10% slower lion would die of starvation. On the other hand, a 10% speed-up would probably give him quite an edge over all the competition.

  7. alpha007org says:

    And you’re still dodging my question.

    You won’t say it explicitly, but your
    >Computing like Unum will come eventually.

    is telling me that in your gut you are leaning toward my way of thinking.

    Good to know we’re on the same side, even if you won’t try to develop this kind of reasoning further.

    • I feel quite powerless in bringing about Unum. It would be great if Intel or somebody else were keener to build this kind of machine. Really great, and they will.

      I don’t feel equally powerless in the field of AI. Here, I can bring some order into the chaos and some light into the darkness, to quote Vanity Fair.

      Which is an odd kind of attitude to have, but I have it.

  8. msjr says:

    Just to check, did you get any interesting result from gamers? Were the solutions (if any) helpful for your scheduling application?

    • It looks like people do play the game. We will submit some unsolved problems, but they are quite big for Android screen sizes. Not too big for a human to see, but too big for those toys to display; a 2K or 4K horizontal resolution with not too fine a pixel density would do it nicely.

      Generally, has a human ever helped our scheduling algorithms to solve a problem? The answer is: yes! It works just the way divine intervention into evolution would work, according to some Pope’s biologist. Some users simply watch an evolving schedule on their own computer, while on holidays or whatever. It’s almost never a single instance of the program, but as many as 100 of them running far away, in their schools which are otherwise closed for the summer. On a 16-core computer, 16 (or fewer) instances are running. They all talk to each other via the Internet, regardless of where they may be. Now, this scheduler at home may stop his instance and manually re-arrange something. Like, “I want this event to be exactly here and nowhere else”. This triggers havoc inside his instance, and then he spreads this havoc to all connected instances by pressing “upload script for automatic promotion”. It’s like an asteroid event delivered by a God who says – I don’t like what I see very much. I want a big crater near Yucatan!

      It is granted that this crater will be there, and nothing else is granted. Evolution will proceed, starting from that event.

      Whether there was any wisdom in the human scheduler’s request, I don’t know. Sometimes there might be. Sometimes it was only a correction of a mistake in the original scripting. As if God had forgotten to make a crater near Yucatan on day 4 or something.

      But it might also be that the scheduler (God) is much wiser and uses the havoc caused by his actions as a tool for a greater good. I suspect some human schedulers do just that. How successful they are, I don’t know.

      A human scheduler might also be more gentle. He won’t insist on his comet, but only suggest some dino killings. The scheduling software will then annul those killings and resurrect those dinos, if nothing better has come from this action on any of those 100 instances worldwide within a matter of hours.

      What does “better” mean? It means accordance with the preset criteria, 32 of them. The principal said: above all, I don’t want a student to wait! So every idle student hour will be punished with one billion negative credits. Or even much more. One trillion perhaps, which means that either no student will ever wait or there will be no schedule. Sometimes this is possible, sometimes it’s not. Well, the best-so-far schedule according to all the criteria is what “better” means.
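
      As a cartoon of how such criteria combine (made-up names and weights, not the real configuration): every violation costs credits, a schedule’s score is the weighted sum, and “best so far” is simply the minimum.

      criteria = {
          "idle_student_hour": 1_000_000_000,   # the principal's "never!" weight
          "teacher_gap_hour": 1_000,
          "room_change": 10,
      }

      def score(stats):
          """Total penalty of a schedule, given counts of each violation."""
          return sum(criteria[name] * count for name, count in stats.items())

      a = {"idle_student_hour": 0, "teacher_gap_hour": 7, "room_change": 40}
      b = {"idle_student_hour": 1, "teacher_gap_hour": 0, "room_change": 0}
      print(score(a), score(b))   # 7400 vs 1000000000 -> schedule "a" wins easily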

      Now it gets complicated behind the scenes. We also see what is going on in all those schools and we can intervene, if we choose to. We contact them. Or they contact us, asking what they are doing wrong that their schedule is so ugly. We say, for example: don’t insist on this particular teacher’s schedule being so nice! Delete this criterion concerning him, for he is a real nuisance with his requests. He teaches small groups and you can’t afford to have the whole schedule revolve around his wishes!

      This is for humans to decide.

      But that was not your question. Your question was whether there was ever a case when a human saw a technical solution the computer didn’t.

      Not yet, at least. N – you know her – performs this stunt quite regularly. She opens a random schedule evolving somewhere, makes some manual technical changes, and voila! Sometimes a distant program instance in some remote corner sees the same trick within a matter of minutes. Or even before her, and only reports it later. Sometimes this manual change becomes a point in the history of a schedule where a “divine” intervention left its mark on the final result.

      There are a few other human schedulers who engage in this game. I want to discourage them from it, but it doesn’t really matter for the final product.

      Or does it?

      Recently we added buses which take children to and from school. This made the whole situation much more complicated. Even N has no chance of intervening successfully in this additional dimension, I think. (It was she who implemented this new option, anyway ^_^).

      Your question can be translated to “to how many CPU instructions and how much computer memory is a human idea equivalent in this scheduling context?” I have no good answer.
