Computer Programmers Hierarchy

  1. unable to program, with no idea how to do it
  2. can write some simple programs in whatever language, even something useful, perhaps even a killer application
  3. has a good understanding of algorithms like sorting or graphics filters and is able to use them in programs in most domains
  4. able to write something like Notepad++, a computer game, a web browser, or a major company’s software
  5. able to produce an OS or a complex game engine; real-world modelling like weather or internal combustion engines; NASA, CERN, NSA, NORAD, etc.
  6. able to produce his own groundbreaking language and compiler – LISP, COBOL, C++ and some others qualify – or a Satoshi Nakamoto scheme for Bitcoin, or Ethereum-like concepts
  7. a Paul Hsieh-level grandmaster of code; parallel and vector programming; voice recognition, computer vision and other AI work
  8. independently devises major algorithms; produces computer proofs in mathematics; writes chess and Go programs at the very top level
  9. builds programs which build programs which build programs … to any depth; able to build an algorithm-making machine
  10. makes nontrivial self-improving code, a.k.a. superintelligence
  11. is a superintelligence, written by someone from the 10th floor

Superintelligence As Opposed To Artificial Intelligence

“In computer vision, the Hausdorff distance can be used to find a given template in an arbitrary target image.”

There is nothing wrong with this, except that this kind of vision cannot be regarded as a “true AI vision”.

We have to admit that there is some substance in the usual AI skeptic’s view that a modern computer playing chess isn’t “real AI”, just as a Hausdorffian vision isn’t real vision after all.

The irony here is that by the same token, a human’s vision isn’t real vision either. Nor is chess playing by grandmasters real intelligence. They are mostly using algorithms prepared in advance, whether by their genetics, their nurture or their culture – it doesn’t matter which. Still, the skeptics’ objections against “AI” do hold somewhat.

What we often call superintelligence would see and play on a different basis. Not without algorithms, but by scrutinizing and adapting them at the same time. Something which we humans, Deep Blues and Watsons can almost never do – at least not to the full extent. A superintelligent entity might devise a winning strategy for a whole set of turn-based games before even moving a chess pawn. An AI or a human would play by the book.

It is possible that “Skynet” would watch you via the “bit-vectoring” method – though only in its very early stages. Before long it would have to come up with a much better method of watching over you, or else it is no Skynet. Perhaps with a better algorithm snatched from the Internet or invented from scratch, along with all the background mathematics – all of that is a prerequisite of superintelligence. The Skynet from the movie was obviously not that smart, otherwise it would have had no problem wiping out humans, if that were its objective.

The initial algorithm may be something quite ordinary at first glance, but it’s important that it have its own “escape velocity” towards a series of ever better algorithms, for as long as it runs. Above all, it must be self-referencing, meaning that it must be an input for itself – like a compiler which is able to compile itself. Not only that: it should also be capable of rewriting itself better than a human programmer could. It should be its own input and output!
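The “its own input and output” idea can be shown in miniature with a quine – a program whose output is exactly its own source. It is only the degenerate fixed point of self-reference (it reproduces itself without improving anything), but it shows that nothing mystical is required:

```python
# A classic Python quine: running it prints its own source code.
# This is the trivial fixed point of self-reference - output equals
# input, with no improvement step yet.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the printed output again prints the same text, forever: a self-referencing loop with zero progress. A seed of superintelligence would need this loop plus a step that makes each iteration better than the last.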

Prefabricated and then deployed intelligence, such as the humans and machines of today, cannot be considered super-intelligent. You can call something super-intelligent only if it messes with its own code, improving it all the time.

A humble bit-string on your computer, however, does have the potential to be super-intelligent in that sense. If it’s just the right shape – that is to say, if all the zeroes and ones are in the right places – nothing else is required, except for the processor which blindly executes it.

I have to add that a Watson making conclusions about itself – or rather about its source code and hardware – MIGHT be super-intelligent. Though probably it would just get stuck trying to adapt its own hardware and software. But from a certain point of development onwards, such a Watson would automatically self-improve up to some faraway limit.

I should be resoundingly clear that we (humanity or a part of it at least) are seeking to invent super-intelligence. AI is already commonplace and rather boring, and this distinction from super-intelligence matters a lot.

So, a program which can improve its own source code and recompile itself might look funny. In all likelihood it will soon stop. Still, it would be the only right step in the direction of superintelligence. If the initial program were better, it could successfully do what a program of this kind should – transcend our understanding.

I see an obstacle, however. People are almost incapable of writing so-called parallel programs. It’s just too bloody difficult. An even bigger leap is to build a self-modifying program; that is too hard a challenge for all but perhaps a few.

There might be an unexpected shortcut, however – a clever way to do this easily. One day, someone will type somewhere between a few and a few dozen screens’ worth of code, and the feat will thereby be accomplished. It’s merely hard, not impossible.


Rational And Irrational Points

In the Euclidean plane, the points which have rational x and y coordinates are called rational. For example, P(0,1) or P(-4,1/3) are rational points, by definition. P(sqrt(2),5) isn’t, since its x coordinate is an irrational number.

You probably know that every circle in this plane covers an infinite number of rational points.

What you probably don’t know is that most lines in this plane don’t pass through a single rational point. Most lines in the plane go exclusively through irrational points! The reason, in one sentence: for any fixed slope m, the line y = mx + b contains a rational point (p,q) only when b = q - mp, which rules out merely countably many intercepts b.
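The circle claim is just the density of the rationals: inside any circle, however small, you can always pin down a rational point. A minimal sketch (the function name and the doubling-denominator approach are mine, for illustration only):

```python
from fractions import Fraction

def rational_point_inside(cx, cy, r):
    """Return a rational point (x, y) strictly inside the circle with
    center (cx, cy) and radius r > 0, by rounding the center to
    fractions with ever larger denominators."""
    n = 1
    while True:
        x = Fraction(round(cx * n), n)
        y = Fraction(round(cy * n), n)
        # rounding error per coordinate is at most 1/(2n), so the
        # distance to the center eventually shrinks below r
        if (float(x) - cx) ** 2 + (float(y) - cy) ** 2 < r * r:
            return x, y
        n *= 2
```

Even a circle of radius 0.001 around the thoroughly irrational point (sqrt(2), pi) covers a rational point that this sketch will find. The line claim, by contrast, has no such witness to exhibit – most lines simply have none.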


Neodymium Magnets, Another Usage

Instead of elastic ropes, you could also adorn yourself with some of these ultra-strong magnets and then jump down a vertical copper tube. The interaction between the magnets and the tube will slow you down to a non-dangerous speed, no matter the height. Add a coil around the tube and apply some voltage, and it will lift you back up.
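Why the height doesn’t matter: to first order, eddy-current braking produces a retarding force proportional to speed, F = kv, so the fall settles at the terminal speed v_t = mg/k. A toy calculation – the mass and especially the drag coefficient below are hypothetical placeholders, not measurements:

```python
# Toy model of eddy-current (Lenz) braking in a conducting tube:
# drag F = k * v balances gravity m * g at terminal speed v_t = m*g/k.
m = 80.0    # rider plus magnets, kg (hypothetical)
g = 9.81    # gravitational acceleration, m/s^2
k = 800.0   # eddy-drag coefficient, N*s/m (hypothetical; depends on
            # magnet strength, tube wall thickness, copper conductivity)

v_terminal = m * g / k
print(round(v_terminal, 2), "m/s")
```

With these made-up numbers the terminal speed is about 1 m/s – a gentle descent. Everything hinges on engineering a real k that large.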

Having proposed this, I’m now waiting for an amusement park to offer this attraction.


Neodymium Magnets for Gold Mining

As we all know, gold doesn’t stick to magnets. A moving gold particle can, however, be slowed as well as accelerated using a strong permanent magnet. You can take one of these, fix it above an open empty bottle, and submerge the assembly into a fast, gold-carrying stream of water.

The induced electrical current inside a gold particle is the cause of the force between the magnet and the particle, so the particle’s direction can be changed this way, and it will more likely go into the bottle below.

All you need to do is pick up and empty the traps you have installed, just like every trapper does.

In reality, the shape of a bottle-magnet system would have to be quite complicated to be very useful, but the point is that this way, Lenz’s law can indeed collect gold for you.


A Google Job Interview Test

The story goes as follows. Google gave two lists to a job candidate, instructing him to programmatically find their common members.

The candidate was supposed to convert both lists into two lists of prime numbers, such that two different members would always map to different primes, and two equal members of either list would map to the same prime.

For example, if one list were [1,2,3,4,5] and the second were [7,4], they expected them transformed into [2,3,5,7,11] and [17,7].

That is, to the first, second, third, fourth and fifth prime numbers, and to the seventh and fourth prime numbers. Then the candidate was expected to multiply the first list’s primes into 2310 and try to divide that product first by 17 and then by 7. Since 7 divides 2310, the element 4 (as 7 is the fourth prime) is in both lists.
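For concreteness, here is the scheme as described, sketched in Python. The helper names are mine, the trial-division prime generator is only for illustration, and the sketch assumes the elements are positive integers, as in the example:

```python
def nth_prime(n):
    """Return the n-th prime (1-indexed) by plain trial division -
    hopelessly slow for large n, which is part of the problem."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def common_by_primes(a, b):
    """The interview-story scheme: map element v to the v-th prime,
    multiply the first list's primes together, then test divisibility
    by each prime of the second list."""
    product = 1
    for v in a:
        product *= nth_prime(v)
    return [v for v in b if product % nth_prime(v) == 0]
```

For a = [1,2,3,4,5] the product is 2·3·5·7·11 = 2310; nth_prime(7) = 17 does not divide it, nth_prime(4) = 7 does, so only 4 is common. Note how fast the product grows – one of several reasons this is a bad solution.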

Neat, isn’t it? No, it isn’t – it’s very far from a good solution, for a number of reasons.

The Internet is full of this story, you may Google it yourself.

The question is how to do it optimally. I’ll give you the short answer, though with no mathematical proof.

  1. sort the first list, A
  2. sort the second list, B
  3. bisect for A[0] in B, from B[0] to B[max], obtaining index_b
  4. if A[0]==B[index_b], you have the first pair
  5. in any case, index_b (or index_b+1, depending on that equality) will be the lower boundary of B the next time it is bisected
  6. take the first unpaired element of B and bisect for it in A
  7. turn the tables between A and B
  8. repeat until one list is exhausted

This way, if the names of all Americans were in list A and all Chinese names in list B, both in UTF-8 encoding, the algorithm would be able to find those rare matches without even touching the vast majority of elements of both lists – except in the sorting part.

Do you even need a Python code example now?
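Just in case: a minimal sketch of the steps above, leaning on the standard bisect module. The function name is mine; each bisection restarts from the unscanned tail of the other list, which is what lets it leap over long runs of non-matching elements:

```python
import bisect

def common_members(a, b):
    """Find the common elements of two lists by sorting both and then
    alternately binary-searching one list's current element in the
    remaining tail of the other."""
    a, b = sorted(a), sorted(b)
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        # bisect for A[i] in the unscanned tail of B
        j = bisect.bisect_left(b, a[i], j)
        if j < len(b) and b[j] == a[i]:
            out.append(a[i])
            i += 1
            j += 1
            continue
        if j == len(b):
            break
        # turn the tables: bisect for B[j] in the unscanned tail of A
        i = bisect.bisect_left(a, b[j], i)
        if i < len(a) and a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
    return out
```

With the all-Americans versus all-Chinese example, a single bisection can skip millions of non-matching names at once.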


Ancestors and Other Relatives

Eating chicken, I often think about how many of my ancestors were eaten by their ancestors. Many more than I’ll ever eat of them. The same question goes for pigs, cows, and fish – the only meat I usually eat.

My ancestors consumed their ancestors; their ancestors preyed upon and ate my ancestors. We have all climbed up and down the food chain for the past 600 million years. Since the top of the food chain is not an especially good place from an evolutionary perspective, it’s sometimes wise to step down the ladder for a few million years.

At any point, however, it’s very unlikely that your lineage will be around for long anyway. Only a few lucky winners have descendants long after their time. A few specimens of only a few species in the whole world, at any given time, will successfully transfer their genes very far. The so-called tree of life has incredibly thin branches under a thick layer of dead wood.

We humans have only several thousand ancestors from 73,000 or so years ago, but all the birds alive today probably have even fewer ancestors who were alive 200 million years in the past.

Then there is another ethical dilemma bothering me when eating beef. Sometime around the extinction of the dinosaurs, there was a mammalian female who gave birth to several offspring. One of them later in life became my ancestor; another became the ancestor of the cow I’m eating.

When I was at the zoo, I stared into a chimp’s eyes, asking myself when and where the two brothers had been separated – one yours, the other my grand-grand-… dad. For 99% of our history, we were one flesh.

Now, some closer cousins of mine are eating your close relatives. And your close, free-roaming relatives will eat a human baby whenever they can. Such a baby is in some sense probably further removed from me than a European Neanderthal. On the other hand, there are humans whose blood type is not the same as mine, and chimps whose is. A mixed situation indeed!

It’s not only ‘kill to eat’; there are also many other kinds of killing or wounding of others more or less closely related to you.

A grotesque world we live in, that’s for sure. A big murderous, incest-practicing family of creatures is what we are.

But that’s okay; I can hardly imagine another way to have come here, into existence. Without hesitation, I would repeat it all, if that were necessary.

Now it’s time to get civilized.