
# Landau’s problem

Are there infinitely many primes of the form N*N+1?

Like 2, 5, 17, 37 …

At first, we have candidates in the form of (N,N*N+1)

(1,2) (2,5) (3,10) (4,17) (5,26) (6,37) (7,50) (8,65) (9,82) (10,101) …

We check the first candidate, and its second component, 2, is a prime. So we can mark our candidates as:

**(1,2)** (2,5) (3,10) (4,17) (5,26) (6,37) (7,50) (8,65) (9,82) (10,101)

Then, we can exclude from the candidates every later pair whose second component is congruent to this element’s second component (2) modulo every divisor of 2 greater than 1. In plain words: every pair whose second component is even, i.e. every later pair with an odd first component.

**(1,2)** (2,5) ~~(3,10)~~ (4,17) ~~(5,26)~~ (6,37) ~~(7,50)~~ (8,65) ~~(9,82)~~ (10,101)

At the second step, we see that (2,5) – which is (2,2*2+1) – is still a candidate and therefore an actual prime. This is the lemma you hardly need to prove! So we have:

**(1,2)** **(2,5)** ~~(3,10)~~ (4,17) ~~(5,26)~~ (6,37) ~~(7,50)~~ (8,65) ~~(9,82)~~ (10,101)

Now we must eliminate all later pairs with the first component congruent to 2 or 3 modulo 5, because (5*k+2)^2+1 = 25*k^2+20*k+5 and (5*k+3)^2+1 = 25*k^2+30*k+10 are both divisible by 5.

**(1,2)** **(2,5)** ~~(3,10)~~ (4,17) ~~(5,26)~~ (6,37) ~~(7,50)~~ ~~(8,65)~~ ~~(9,82)~~ (10,101) ~~(11,122)~~ ~~(12,145)~~ ~~(13,170)~~ (14,197) ~~(15,226)~~ (16,257) …

Now we see that 10 is NOT a candidate for a prime. We still need to find all the divisors of 10 and eliminate the appropriate numbers from this sieve. In the case of 10 there are no new divisors – 2 and 5 have already been used – so there is nothing to do.

But the next one, 4*4+1=17, is a prime.

There is always an infinite number of candidates left in this sieve. The problem is to prove that we never stop encountering (non-struck) candidates from time to time.
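The whole sieve can be sketched in a few lines of Python (my own illustration, not part of the original problem). The key fact it relies on: every prime divisor p of N*N+1 satisfies N*N ≡ -1 (mod p), so p strikes exactly the residue classes ±N modulo p.

```python
def prime_factors(v):
    """Distinct prime divisors of v, by trial division."""
    factors, d = set(), 2
    while d * d <= v:
        while v % d == 0:
            factors.add(d)
            v //= d
        d += 1
    if v > 1:
        factors.add(v)
    return factors

def landau_sieve(limit):
    """Sieve the pairs (n, n*n+1) for n = 1..limit, as in the post."""
    struck = [False] * (limit + 1)
    used = set()        # primes whose residue classes were already struck
    primes = []
    for n in range(1, limit + 1):
        v = n * n + 1
        if not struck[n]:
            primes.append(v)      # the lemma: the smallest unstruck value is prime
        # Every prime divisor p of v gives n*n ≡ -1 (mod p), so the m with
        # p | m*m+1 are exactly m ≡ ±n (mod p); strike the later ones.
        for p in prime_factors(v):
            if p in used:
                continue          # e.g. for v = 10 both 2 and 5 are already used
            used.add(p)
            for r in {n % p, (-n) % p}:
                for m in range(r, limit + 1, p):
                    if m > n:
                        struck[m] = True
    return primes

print(landau_sieve(16))   # -> [2, 5, 17, 37, 101, 197, 257]
```

Running it up to n = 16 reproduces the walkthrough above: 2, 5, 17, 37 survive, then 101, 197, 257.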

Prove that, and you will solve this 100-year-old problem!


# The Singularity is Near?

Or has it been (indefinitely) delayed for some reason?

We have this Neural Networks situation now. A simple classifier has been employed, some would say, beyond its intended use. But when you have a classifier, you can classify which objects belong where. Is this an egg, or is it an apple? Is this a good Go position for White, or isn’t it? Would that be a better Go position, had White put the last stone there? What about there? After a few hundred questions of this kind, the best move has been revealed. Every Go move can be milked this way.
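The question-by-question loop above fits in one line; `score_position` here is a hypothetical stand-in for any trained position classifier, not a real API:

```python
def best_move(position, legal_moves, score_position):
    # Ask the classifier one question per legal move ("would that be a
    # better position?") and keep the move with the best answer.
    return max(legal_moves, key=lambda move: score_position(position + (move,)))

# Toy demo: positions are tuples of moves, and the fake "classifier"
# likes positions whose moves sum to 2.
score = lambda pos: -abs(sum(pos) - 2)
print(best_move((), [1, 2, 3], score))   # -> 2
```

A real Go engine asks a subtler question (and samples continuations), but the milking principle is the same: classification queried in a loop becomes move prediction.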

Stretching the initial definition of classification works like a charm almost everywhere.

This way, a humble classification becomes a mighty prediction. What a nifty trick! Especially because general intelligence is nothing more than (good) predictions everywhere. Nothing else is needed.

Say that you want to invent the Star Trek replicator. You have to predict which configuration of atoms would reliably perform replications of sandwiches, shoes and of the replicators themselves.

This will be possible as soon as those Neural Networks of DeepMind/Google master chemistry and some physics to the degree they’ve mastered Go and Japanese-English translation.

Which may be too expensive in computing terms. And which might also not be that expensive at all! Perhaps the NNs must do some self-reflection (or self-enhancing) first, to be able to storm the science of chemistry and some physics like they stormed Atari games not that long ago. In a superhuman way.

And I don’t even think that Neural Networks are the best possible approach.

So, yes. The Singularity is nearer than it has ever been!


# 600-Cell Bulk (Hypervolume)?

1. Edge is 1, what’s the hypervolume?
2. General formula for the bulk?

The solution for 1 can be obtained numerically.
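For instance, a numeric value falls out of a pyramid decomposition, assuming only the standard facts that the 600-cell consists of 600 regular tetrahedral cells and has circumradius φ (the golden ratio) at edge 1:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2                  # golden ratio
R = phi                                  # circumradius of the 600-cell, edge 1
r_cell = sqrt(3 / 8)                     # circumradius of a unit regular tetrahedron
height = sqrt(R * R - r_cell * r_cell)   # distance from centre to each cell's hyperplane
v_tet = 1 / (6 * sqrt(2))                # 3-volume of a unit regular tetrahedron

# 600 pyramids with a tetrahedral base; a 4D pyramid's bulk is base * height / 4
bulk = 600 * v_tet * height / 4
print(bulk)   # ≈ 26.4754...
```

Turning that number into the exact closed form of problem 2 is left to you, as stated.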

For 2, you have time until the end of this year (2017). (Unless the solution is published sooner than that.)


# Physics Problem

Two equal, rigid objects, A and B, are dropped from high altitudes toward the Earth at the same time – a Felix Baumgartner style free fall, without any initial velocity relative to the planet. Somewhere in the Central Atlantic, both will splash down.

Object A starts higher than object B, yet reaches the Atlantic first.

How?
