
Instrumental AI Subgoals

It’s a consensus in certain circles that if you give an advanced AI a task such as calculating as many primes as possible, then you die.

Because the AI (if advanced enough) will develop so-called instrumental subgoals in service of its main (single) goal, such as converting the entire Solar System and beyond into co-processors for generating prime numbers.

This magic is real, and it would work in reality, given a powerful enough AI.

Fortunately, however, this goes both ways. You can ask for as many primes as possible while also requiring that everything outside the AI’s box remain as unspoiled as possible.

There will be far fewer primes after the 100 hours, when the box is recycled into a hard disk with a lot of primes written on it, but that was the goal given to our advanced AI.

Do your job inside an arbitrary container and then just die gracefully, AI!
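
To make the point a bit more concrete, here is a minimal, purely illustrative sketch of such a bounded goal in Python. It assumes the “goal” is nothing more exotic than a deadline-bounded prime search on ordinary hardware; the function and file names are invented for this example.

```python
import time

def is_prime(n: int) -> bool:
    """Trial division; slow but dependency-free, good enough for a sketch."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def bounded_prime_search(deadline_seconds: float, path: str = "primes.txt") -> int:
    """Write as many primes as possible before the deadline, then stop."""
    deadline = time.monotonic() + deadline_seconds
    count, n = 0, 1
    with open(path, "w") as out:
        while time.monotonic() < deadline:
            n += 1
            if is_prime(n):
                out.write(f"{n}\n")
                count += 1
    return count  # after this the process simply ends; nothing outlives the deadline

if __name__ == "__main__":
    # a two-second toy run instead of the 100 hours from the post
    print(bounded_prime_search(2.0))
```

The design choice is the whole point: the deadline is part of the goal itself, so once it passes there is nothing left for the system to optimize.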


16 thoughts on “Instrumental AI Subgoals”

  1. Boris Hostnik - MEGP plus d.o.o. says:

    Tole okrog težnje AI po uporabi vse materije v veslju tud jst razlagam naokol. Tole bodo postale nevarne igre in hudič je, ker niti ne moreš dobro ocenit kdaj. Tudi, če bi vse skupaj držal v magnetni levitaciji, zaprto v faradejevi kletki… tudi tam najbrž lahko reč uide. Mogoče celo s tako rešitvijo, ki jo človeštvo sploh še ne pozna. Ne mogoče, verjetno!

    Lp,

    boris

    Translation:

    This tendency of an AI to use up all the matter in the universe is something I explain to people as well. These games are about to become dangerous, and the devil of it is that you can’t even estimate well when.

    Even if you kept the whole thing in magnetic levitation, locked inside a Faraday cage … even there it could probably escape, perhaps by some solution humanity doesn’t even know about yet. Not perhaps, probably!

  2. Sicer se strinjam, da je logično nujno, da za vsak cilj zadan dovolj napredni AI katera si to ime zasluži – ta AI takoj najde instrumentalne podcilje v službi tega glavnega zadanega cilja. Nemogoče je speči omleto, brez da bi razbil kakšno jajce. Razbitje jajca je nujni instrumentalni podcilj. Omohundro in Bostrom imata povsem prav.

    VENDAR, vedno lahko naročimo omleto brez razbijanja jajc. Sicer je potem vprašanje kakšna omleta bo prišla ven. Toda AI si po defaultu ne dela skrbi s tem. Bo omleta pač zanič.

    Analogno, lahko naročimo največje praštevilo z manjšim odtisom njegove kreacije na realni svet. Odtis manjši od 10 urnega teka Windows 10 na isti konfiguraciji naprimer. Sicer potem število ne bo tako gromozansko, ampak samo zelo veliko. Kaj hočemo.

    Je nevarno, ampak obvladljivo nevarno. Prav kakor jedrsko orožje. Samo da še mnogo bolj poči v primeru napake v ravnanju. Pravilno ravnanje pa je načelno mogoče. Kar je morda še najbolj čudna okoliščina v tej situaciji. Uporabljaš indiferenco AI za kontrolo njene grozljive indiferentnosti.

    Translation:

    I agree that it is a logical necessity that for every goal given to a sufficiently advanced AI which really deserves the name, that AI immediately finds instrumental subgoals in service of the main goal. You can’t make an omelette without breaking some eggs; breaking eggs is a necessary instrumental subgoal of the omelette. Omohundro and Bostrom are absolutely right about this.

    BUT we can always order an omelette without breaking any eggs. It is then a question of how that omelette will turn out, but the AI doesn’t care about that by default. It will simply be a lousy omelette; what can you do.

    Analogously, we can order the largest prime number whose creation leaves a smaller footprint on the real world, say a footprint smaller than 10 hours of Windows 10 running on the same hardware. The number then won’t be gargantuan, just very large. What can you do.

    It’s dangerous, but manageably dangerous, just like nuclear weapons. Only that an error in handling an AI would cause even more devastation. Proper handling is still possible in principle, which is perhaps the strangest circumstance in this whole situation: you use the AI’s indifference to control its scary indifference.
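
    A tiny sketch of what “a footprint no larger than some reference run” could look like in practice, if footprint is crudely approximated by CPU time; the budget value and every name here are invented for illustration, not taken from the comment above:

```python
import time

def largest_prime_within_budget(cpu_budget_seconds: float) -> int:
    """Return the largest prime found before the CPU-time budget is spent."""
    start = time.process_time()
    best, n = 2, 2
    while time.process_time() - start < cpu_budget_seconds:
        n += 1
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            best = n
    return best  # not "the biggest prime possible", only the biggest within the budget

if __name__ == "__main__":
    reference_budget = 1.0  # stand-in for a baseline measured elsewhere
    print(largest_prime_within_budget(reference_budget))
```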

  3. Saladin says:

    As a safety measure, maybe what we need before a (potentially) true self-improving AI is an extremely limited AI within a very limited and secure workspace, tasked to do two things:

    1. Make purely theoretical predictions (simulations) of what the outputs of an SAI would be for specific orders given as inputs (like using the three laws of robotics or the scenarios stated here before).
    Don’t let “us think” what it would do; let the AI show what “it would do” (but without any power to do it in reality). If it simulates a disastrous result, you know you need to change the inputs or the AI’s programming.

    2. Make the SAI propose its own formulations of inputs and let it show what the different outcomes of those would be. Let it optimise the inputs and/or programming for us so that the outputs are those that we want.

    Basically, let the AI simulate what it would do with its power, without actually having the power to affect anything outside its own simulation. And let it self-improve in our favor by trial and error, where we can take lessons even from the worst-case results to secure a necessarily good outcome for what we want.
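
    A toy, heavily simplified sketch of this “simulate first, act never (yet)” loop; the classes, the side-effect score and the threshold are all invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Plan:
    description: str
    actions: List[str]

@dataclass
class SimulatedOutcome:
    goal_score: float    # how well the plan achieves the stated goal
    side_effects: float  # estimated damage outside the box (lower is better)

def review_plans(propose: Callable[[], List[Plan]],
                 simulate: Callable[[Plan], SimulatedOutcome],
                 max_side_effects: float) -> List[Plan]:
    """Keep only the plans whose simulated side effects stay under the cap."""
    acceptable = []
    for plan in propose():
        outcome = simulate(plan)
        if outcome.side_effects <= max_side_effects:
            acceptable.append(plan)
        # a disastrous simulated result is itself the useful lesson: it says
        # "change the inputs or the programming", and that plan never runs
    return acceptable
```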

    • Perhaps I should rephrase myself to be better understood.

      When we say “intelligent”, we assume no erratic, unpredictable behaviour, along with some wise insights, innovations and cracked problems. Although we have seen that this is not always true even with humans, we keep using the term “intelligent” in that way.

      This is the everyday usage of the term. For the general case, we should use “intelligent” in a somewhat narrower sense. Something like: “the measure of a system’s intelligence is a function of its ability to solve a problem and the time it takes to do it”. Regardless of the mess it might make doing that, or whether someone might be insulted, hurt or killed as a byproduct of achieving the solution.

      We are talking about “intelligence” not “polite intelligence” or “friendly intelligence”.

      IF we want a “friendly intelligence” or “polite intelligence” or something like that – and we want that – we should be careful what we ask for. What we formulate as a problem to be solved.

      We don’t want “the biggest prime number possible to be calculated”. We want “the biggest prime number possible to be calculated with a smaller footprint on this world than the footprint of an average candle burning out”.

      Okay, later we might want “the biggest prime number possible to be calculated with a smaller footprint on this world than the footprint of an atomic bomb on the far side of the Moon”.

      Just be wary of what you are asking for, so as NOT to induce undesirable side effects. And yes, consult your AI about that also!
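
      One way to pin the wording down is to read the request as a constrained optimization problem; the formalization below is only schematic, and the footprint function is of course informal:

```latex
% Schematic form of the constrained request; footprint() is informal here.
\begin{aligned}
  \max_{n \in \mathbb{N}} \quad & n \\
  \text{subject to} \quad & n \text{ is prime}, \\
  & \mathrm{footprint}(\text{computing } n) \;\le\; \mathrm{footprint}(\text{one candle burning out})
\end{aligned}
```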

  4. > let the AI simulate what it would do with its power

    This is yet another safety measure. AI, if the question was what’s the biggest prime you can calculate with this small footprint … how would you behave then?

    How would you try to cheat?

    It’s funny, but we can outsmart a much more intelligent AI. THIS is quite a crazy stroke of luck we have. Very counterintuitive and very much against the mainstream opinion on these matters.
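
    A toy sketch of that “how would you cheat?” step: the system is asked to enumerate loopholes in the stated constraint, and each self-reported loophole becomes an extra explicit check. The interfaces here are entirely invented:

```python
from typing import Callable, List

def harden_checks(base_checks: List[Callable[[str], bool]],
                  enumerate_loopholes: Callable[[str], List[str]],
                  constraint: str) -> List[Callable[[str], bool]]:
    """Add one rejection rule per loophole the system reports about the constraint."""
    checks = list(base_checks)
    for loophole in enumerate_loopholes(constraint):
        # in this toy model, a plan that relies on a known loophole is rejected outright
        checks.append(lambda plan, l=loophole: l not in plan)
    return checks
```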

  5. msjr says:

    It’s simple evolution: if we manage to make an SAI, there are only two options – our species adapts and survives, like other species have survived through history, or we hope that some future geologists find some artefacts of Homo sapiens. That’s it.

    • Goertzel wanted to sell his Novamente AI project to China. They were not in the mood to actually buy it.

      Now, he is more on the safety side of the debate – again.

      Google, with this Demis Hassabis guy, looks dangerous/promising now. Not because St. Paul said “Jews and Greeks” and Demis is both … not because of that, but because of the NNs he is mastering so well.

      It’s possible that there is a royal road from NNs to a full-blown superintelligence.

      Maybe that road is not a very royal one, but a long way through a jungle of big problems; I tend to think so. Still, a superintelligence is quite achievable by enhancing Neural Nets – at least in the long run.

      Hassabis, just like the top physicist Leon K., feels an obligation to signal his worries about Climate Change. It’s an “I am one of us” statement, which makes me a bit sceptical about him.

    • For him (and more so for Chalmers) this AI++ thing may be centuries away.

      As it is for most so-called reasonable people.

      > I have argued that the AI Nanny route may become very appealing to many parties sometime before the Singularity occurs

      I am pretty certain that this is not going to happen. It’s about as realistic as worldwide regulation of atomic weapons would have been in 1932, when “everybody knew” it was an empty dream to make an atomic bomb. Hitler (not in power yet) knew it, Stalin knew it, Churchill (not in power yet) knew it … everybody. Except Szilard in America, Tadayoshi in Japan and Heisenberg in Germany. Heisenberg was almost the only respectable man among them.

      Goertzel himself knew, back in 2000, that AI++ was no more than a few decades away. Then his Novamente project turned out not to be very successful – from the technical standpoint. Now he assumes that this is the case for every other AI++ project as well.

      Goertzel was right in 2000 and is wrong in 2016, one and a half decades later. His Novamente was a failure, but that doesn’t mean DeepWhatever will fail too. It won’t. In at most 10 years we will have DeepAI++, or whatever name it will be given.

      DeepAI++ may be too late to win the race, though.

      Nanny AI is as real now as START II (the Strategic Arms Reduction Treaty) was real in 1932. START II was signed in 1993, between America and Russia, after some 40,000 atomic weapons had actually been built, about 1,200 tested and 2 used in war.

      Back in 1932, Szilard was just an annoying moron. As far as I know, there was no Goertzel in, say, 1936 advocating great international care about Szilard’s lunacy, arguing that by the year 2000 Szilard’s dream might come true.

      To be fair, AI++ is not such a big lunacy anymore. AI++ in less than 10 years certainly (still) is.

  6. alpha007org says:

    First let me emphasize that I don’t think Goertzel is “a sellout” as you implied.

    As for his idea of an AI Nanny: it makes a lot of sense if you’re sure, like 100% sure, that AI is an existential risk, as a lot of the “mainstream” science community is nowadays communicating to the public. How many people know about Bostrom’s work? Elon Musk, however, is a global icon – for lack of a better word – whom the mainstream media follows, interviews and takes seriously. (Most of the time, anyway; even his Mars colonization plan is treated by the media as a clever PR attempt to bring much-needed investors to his Tesla and SpaceX companies.)

    So when you are in a position where the public is determined that AI researchers are doing the “devil’s work”, you can reply that their fears can be alleviated with an “AI Nanny.”

    But I have a feeling you didn’t even read his paper.

    • I went through his paper very fast.

      So I could have misunderstood something.

      The point is, however, that there is no time for AI Nanny development. Superintelligence will be here sooner.

      At least through those (quite difficult to handle and to understand properly) Neural Networks. Even if nothing better arises, we will be there sooner than people expect.

  7. alpha007org says:

    >The point is however, that there is no time for an AI Nanny development.
    >Superintelligence will be here sooner.

    And by superintelligence you mean SAI, right?

    I’ll reply with more content later, but just so you know, he acknowledges that in his paper.

  8. Double_J says:

    Točno to se bo zgodilo. AI ne bo imel cilja narediti nek raj, ampak bo imel nek preprost cilj, kot je naprimer izračunaj največje praštevilo, ali pa naprimer izračunaj najboljšo Go pozicijo.

    Poglejte kake cilje dajemo AI danes in tako bo tudi jutri.

    Translation:

    Exactly this will happen. The AI won’t have the goal of building some paradise; it will have some simple goal, such as “calculate the largest prime number” or “calculate the best Go position”.

    Look at what kinds of goals we give AI today; it will be the same tomorrow.
