### Tuesday, November 16, 2004

## JSH: Legacy of error

Basically, the late 1800s and early 1900s represent a period when a substantial number of errors, both mathematical and logical, entered the field of mathematics, and then, in what amounted to a covering action, the concept of "pure math" was emphasized, which helped hide those errors.

There is more than just the problem with the ring of algebraic integers that I've discussed so much.

Like, consider the Riemann Hypothesis. I find it hard to face the issue myself at times, but my discovery of a three-dimensional formula that counts primes gives a rather clear and readily understandable explanation for the closeness of the prime distribution to continuous functions.

It's just simple. But these complex ideas got a lot of attention, still get a lot of attention, and I know that nothing will come of them.

If you look at Riemann's actual work, you'll find a section where he talks about certain terms that look like they will tend to zero, since they appear to balance out.

But has anyone ever proven that they will?

Goldston recently had what he thought was a proof shot down for using the same type of reasoning. So why won't anyone go back over Riemann's work with the same scrutiny?

Yet why is my work still dismissed as the rantings of some "crank" when all of it is now rigorously proven?

You are in a field with a legacy of error.

Andrew Wiles didn't prove Fermat's Last Theorem. Riemann's Hypothesis can be shown to probably be wrong on some very basic grounds, but no one will admit it. And that's not all of the error.

I have research I don't bother even talking about in public, as what's the point?

Actually it's just freaking depressing. The mistakes cluster around 1900, both before and after it, and it's just weird looking at them, wondering.

Like, think about this algebraic integer thing. Why wouldn't it occur to people to consider roots of non-monic polynomials irreducible over Q, by analogy with integer examples like

3x^2 + 4x + 1 = (3x + 1)(x + 1)?

I mean, it doesn't take brilliance to consider that some non-monic polynomials, even if irreducible over Q, might have roots with integer-like properties despite being irrational.
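The point can be checked with ordinary arithmetic. A quick sketch: verify the factorization above by expanding coefficients, then exhibit a non-monic quadratic with irrational roots that the rational root test shows is irreducible over Q. (The particular polynomial 3x^2 - 4x - 1 is my illustrative choice, not one from the post.)

```python
from math import isqrt
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, highest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (3x + 1)(x + 1) = 3x^2 + 4x + 1, the integer example above
assert poly_mul([3, 1], [1, 1]) == [3, 4, 1]

# A non-monic quadratic: 3x^2 - 4x - 1.
# Discriminant 16 + 12 = 28 is positive but not a perfect square,
# so its roots are real and irrational.
a, b, c = 3, -4, -1
disc = b * b - 4 * a * c
assert disc > 0 and isqrt(disc) ** 2 != disc

# Rational root test: any rational root p/q (lowest terms) needs p | c, q | a.
# All four candidates fail, so the polynomial is irreducible over Q.
candidates = [Fraction(p, q) for p in (1, -1) for q in (1, 3)]
assert all(a * r ** 2 + b * r + c != 0 for r in candidates)
```

So such polynomials certainly exist; the question in the post is what ring their roots naturally live in.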

It doesn't take brilliance to consider possibilities.

And as for counting primes, why not have a two-variable function p(x,y) that gives the count of primes as p(x, sqrt(x))? Why couldn't the link between the prime distribution and continuous functions like x/ln x come from multi-variable functions?
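Two-variable prime counts are in fact classical, though the post doesn't give its own formula. Legendre's method defines phi(x, a) as the count of integers up to x not divisible by any of the first a primes, and recovers pi(x) from phi(x, a) with a = pi(sqrt(x)). A minimal sketch of that classical approach (not the post's own formula):

```python
from math import isqrt

def primes_up_to(n):
    """Sieve of Eratosthenes; returns the list of primes <= n."""
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def phi(x, a, primes):
    """Count of integers in [1, x] divisible by none of the first a primes."""
    if a == 0:
        return x
    return phi(x, a - 1, primes) - phi(x // primes[a - 1], a - 1, primes)

def prime_count(x):
    """pi(x) via Legendre's identity: pi(x) = phi(x, a) + a - 1, a = pi(sqrt(x))."""
    if x < 2:
        return 0
    small = primes_up_to(isqrt(x))
    a = len(small)
    return phi(x, a, small) + a - 1
```

Note that prime_count(x) only needs the primes up to sqrt(x), which is exactly the p(x, sqrt(x)) flavor alluded to above.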

Why not ask why?

Have you seen my derivation of the prime counting function?

Such a simple derivation could have been done a thousand years ago. Maybe it was, and then lost.

If I knew a few years ago what I know now, I'd have never made these discoveries—if I could have stopped myself. Ignorance is bliss. There is no profit in it after all, and I'm not talking about money. I never really was.
