
The Singularity is Near?

Or has it been (indefinitely) delayed for some reason?

We have this Neural Networks situation now. A simple classifier has been employed, some would say, beyond its intended use. But when you have a classifier, you can classify which objects belong where. Is this an egg, or is it an apple? Is this a good Go position for White, or isn’t it? Would the position be better, had White put the last stone there? What about there? After a few hundred questions of this kind, the best move has been revealed. Every Go move can be milked this way.
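(To make the trick concrete, here is a toy sketch in Python. The helpers legal_moves, play and win_probability are all made up for illustration; this is the shape of the idea, not AlphaGo’s actual machinery.)

    def best_move(board, legal_moves, play, win_probability):
        # Ask the classifier "is this a good position?" once per candidate,
        # then keep the candidate with the most confident "yes".
        return max(legal_moves(board),
                   key=lambda move: win_probability(play(board, move)))

    # Dummy stand-ins, just so the sketch runs end to end:
    legal_moves     = lambda board: [1, 2, 3]
    play            = lambda board, move: board + move
    win_probability = lambda position: 1.0 / (1 + abs(position - 2))  # fake net

    print(best_move(0, legal_moves, play, win_probability))  # -> 2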

Stretching the initial definition of classification works like a charm almost everywhere.

This way, humble classification becomes mighty prediction. What a nifty trick! Especially because general intelligence is nothing more than (good) predictions everywhere. Nothing else is needed.

Say that you want to invent the Star Trek replicator. You have to predict which configuration of atoms would reliably replicate sandwiches, shoes, and the replicators themselves.

This will be possible as soon as those Neural Networks of DeepMind/Google master chemistry and some physics to the degree they’ve mastered Go and Japanese-English translation.

Which may be too expensive in computing terms. Or it might not be that expensive at all! Perhaps the networks must do some self-reflection (or self-enhancement) first, to be able to storm the science of chemistry and some physics the way they stormed Atari games not so long ago. In a superhuman way.
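(For the flavor of “self-enhancement” in a dozen lines of Python: a predictor that proposes random changes to itself and keeps only those that improve its predictions. A caricature, of course; the data and the update rule below are invented, and real self-reflection would rewrite the learning algorithm itself, not just two numbers.)

    import random

    data = [(x, 3 * x + 1) for x in range(10)]     # the hidden law to discover

    def loss(w, b):
        # How bad are the current predictions, summed over the data?
        return sum((w * x + b - y) ** 2 for x, y in data)

    w, b = 0.0, 0.0
    for _ in range(20000):
        dw, db = random.gauss(0, 0.1), random.gauss(0, 0.1)
        if loss(w + dw, b + db) < loss(w, b):      # keep only improvements
            w, b = w + dw, b + db

    print(round(w, 2), round(b, 2))                # lands near 3 and 1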

And I don’t even think that Neural Networks are the best possible approach.

So, yes. The Singularity is nearer than it has ever been!


4 thoughts on “The Singularity is Near?”

  1. alpha007org says:

    >>>Especially because general intelligence is nothing more than (good) predictions
    >>>everywhere. Nothing else is needed.

    I think you are taking too narrow and too simplistic a view on the matter. Say we do have an optimal predictor. It can predict the optimal atom-by-atom configuration. How did we get there? I think an “optimal predictor” is just a byproduct of a fully functional ASI.

    Before we invent “the thing” which will invent ASI, we must pass a certain threshold. If intelligence is just “some” information processing, then by its definition there must be a spectrum of different kinds of intelligence. If we frame the “path to xAI” this way, we can see our path much more clearly. We must first “learn how to learn”, and when we achieve that tedious task, we will see where the most important stepping stones are.

    An “optimal predictor” doesn’t solve anything for us, if by some chance we make this kind of huge discovery, except maybe the most efficient way to kill us all.

    • Intelligence IS totally reducible to prediction. Instead of me replying to you, there could be a predictor of my next keystroke. After hundreds of correct predictions, voilà, there’s “my reply to you”.

      Maybe that is nothing much. But a better predictor could consider and predict your reactions as well: what (my) next keystroke should be, in order to convince you.

      It could also predict, perhaps in one minute, what programmer Bob WOULD be typing all year long in order to refactor a chunk of an OS. That kind of prediction leads to code. Such predictions can yield whatever one can imagine, provided it is achievable for an intelligent agent. (A toy sketch of this keystroke-by-keystroke loop follows at the end of this reply.)

      To predict is to model (an intelligent agent). When a very basic model, for the first time, predicts, or rather correctly guesses, how to modify itself to be slightly better at prediction… well, then we have liftoff!

      It was Schmidhuber or Hutter or somebody else who first explained this. I merely concur with them regarding this so-called SP Theory.
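      (The promised sketch, with everything in it invented for illustration: a crude next-character predictor is trained on a toy corpus, then chained with itself until “a reply” appears. Just the loop, not anybody’s formalism.)

          from collections import Counter, defaultdict

          corpus = "to predict is to model an intelligent agent. " * 20
          ORDER = 6                              # characters of context

          # Count which character follows each 6-character context:
          counts = defaultdict(Counter)
          for i in range(len(corpus) - ORDER):
              counts[corpus[i:i + ORDER]][corpus[i + ORDER]] += 1

          def predict_next(text):
              # The predictor: "which keystroke comes after this context?"
              return counts[text[-ORDER:]].most_common(1)[0][0]

          reply = corpus[:ORDER]                 # a seed to get started
          for _ in range(80):                    # eighty correct guesses...
              reply += predict_next(reply)
          print(reply)                           # ...and the text writes itself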

  2. alpha007org says:

    >>>>To predict is to model (an intelligent agent).

    You are moving the goalposts here, don’t you see? Now we have an agent who makes predictions. That’s a totally different topic from what you were writing about in your original post.

    What if, to predict my next reply, it spawns subgoals to harness a little more energy, and in the process, “by some unfortunate chain of events”, the phosphorus on our planet starts to deplete?

    Yes, every predictor is some kind of intelligence. Even a basic one (IF a THEN b) is “doing” intelligence. But a predictor without some basic meta-values and meta-goals (or, simply put, FAI) can do some crazy shit we wouldn’t understand until it is too late.

    Don’t try to out-logical-fallacy your way out of this.

  3. I am not moving the goalposts at all. Our intelligence, or we as humans, can be interpreted as 9 or 10 levels of control mechanisms. Like this guy is preaching:

    https://slatestarcodex.com/2017/03/06/book-review-behavior-the-control-of-perception/#comment-473975

    I only suggest that, for the superintelligence we need to build, it would be wiser to compose it entirely from small predictors. Billions of them, perhaps. (A toy sketch of that flavor follows after this comment.) This Schmidhuber guy has spoken publicly about how he is going to do it. By some coincidence or something, I largely agree with what he is saying.

    He is also saying it will be done before he retires. But there is a small caveat here, since the Swiss government keeps moving everybody’s retirement date. But then again, every government does that.
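    (For the flavor of “compose it from small predictors”: a toy committee of fifty tiny ones in Python, each weighted by its track record. This is just the classic weighted-majority trick, nothing like Schmidhuber’s actual design.)

        # Fifty tiny predictors of the next bit: half guess "same as last",
        # half guess "flip the last". The committee weighs them by track record.
        predictors = [lambda h, k=k: h[-1] ^ (k % 2) for k in range(50)]
        weights = [1.0] * len(predictors)

        history = [0, 1]                       # the world: alternating bits
        for _ in range(100):
            votes = [p(history) for p in predictors]
            ones = sum(w for w, v in zip(weights, votes) if v == 1)
            guess = 1 if ones > sum(weights) / 2 else 0
            actual = history[-1] ^ 1           # the world flips every step
            # Reward the predictors that were right, discount the rest:
            weights = [w * (1.5 if v == actual else 0.5)
                       for w, v in zip(weights, votes)]
            history.append(actual)

        print("the committee's last guess was right:", guess == actual)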
