Tuesday, August 26, 2003


FBI has been notified

I thought it relevant to mention that I notified the FBI a couple of months ago about some of the math issues I've brought up here. I received a single reply saying that agents were looking into it, as I cited national security, given that mathematicians are so important in the defense of this nation.

It was not a form letter reply. I've followed up but have not gotten further information from the FBI.

I have also informed a couple of senators, but did not receive anything other than form letter replies.

The senators were McCain of Arizona and Graham of Florida.

Some of you may be angered by my contacting important agencies like the FBI, which have VERY important work to do in defense of this nation.

However, I think it very important: if mathematicians are as adept at lying as I've seen, then the federal government needed to be notified.

I will also suggest that those of you who receive federal funds carefully review the terms and conditions you agreed to in order to get them.

I am not saying that I know of any investigations into mathematicians resulting from my contacts with the United States Government. I would suspect that I was simply ignored as a "crank", or that they referred the matter to mathematicians who may have lied to them.

However, it was my duty to inform, and possibly at some future date, if some mathematicians did lie to the FBI or senators, they may face further questions.

If I was mostly ignored by the FBI and those senators, which is probable, then, of course, they didn't ask anyone.

At a later date I will probably make higher level contacts, hoping to get feedback from members of the National Security Council.

Sunday, August 24, 2003


Connecting the Dots: Overview of my work

Since I can go from talking about FLT, to discussing an esoteric error in the ring of algebraic integers, to my prime counting function, to partial differential equations, to what happens when you add 1/2 to the ring of integers, some might get a little lost, or not realize that all of it is connected by a rigid, logical framework based primarily on modern ideas.

And in fact the modern ideas I use have a lot to do with your reading this post, as object-oriented thinking is quite important in computer programming.

What I want to do in this post is give a roadmap, connecting all the dots, so that people who like the big picture understand how it all relates.

First off, while object-oriented thinking permeates modern computer programming, and has long been present in the sciences, mathematicians seem to have missed the train. If you've picked up a textbook on abstract algebra, you might notice it's full of little, dry rules. Being someone who focuses on the concrete, when I picked up a text on abstract algebra, back when I was fourteen at Duke University, I put it back down after a few minutes and went back to playing with calculus instead.

To me it's like when Ptolemy was a big wheel. For those who forgot their science history, Ptolemy worked out a system for figuring out where objects in the sky would be at various times using spheres and circles. Those were used because the heavens were considered the abode of God, and spheres and circles were considered perfection, so people figured God would only use perfect stuff.

The problem is that this left little errors, which were fixed with, guess what, little circles called epicycles. It was a kludgy system that involved a lot of calculation and still would be off.

Well, along came Copernicus and Kepler, and Kepler dropped the circle business and used ellipses. Being religious himself, Kepler came up with his own reasons for why God would use the supposedly less perfect ellipse rather than circles.

When I decided to try and use basic algebra to find a short proof of Fermat's Last Theorem, which I really, really, really hoped existed, I made a conscious decision not to bother with overdone approaches. I like simple.

What happened is that I came upon a rather basic, straightforward approach which boils down to factoring x^p + y^p - z^p indirectly. However, that approach revealed that mathematicians hadn't discovered enough mathematical infrastructure to handle factorizations at that level.

So I was forced to work out that infrastructure myself, which I call object mathematics.

While thinking about such things, I found myself chatting about simple polynomials like x+1, and (x+1)(x+3), which got me to thinking about prime numbers, and a few weeks later I had a way to count them that mathematicians got close to, but never quite got the full thing.

I know they didn't because I can look at their work where they got close, and see how close they came to what I have. And also I can see what my discovery does that what they have cannot do.

I played with prime counting for a while, including working out a partial differential equation that follows from my functional way to count prime numbers, and then went back to thinking about my FLT work.
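My own prime counting function is not reproduced in this post, but for readers who want a reference point for the classic work that came close, here is a sketch of Legendre's recursive method in Python. It is offered only as background, not as my function:

```python
# Sketch of Legendre's recursive prime count, shown only as background on
# the classic work that came close; this is NOT the function discussed above.
from math import isqrt

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def legendre_pi(x):
    """pi(x) via the recursion phi(n, a) = phi(n, a-1) - phi(n // p_a, a-1)."""
    if x < 2:
        return 0
    ps = primes_up_to(isqrt(x))
    a = len(ps)  # a = pi(sqrt(x))

    def phi(n, a):
        # Count of 1 <= k <= n not divisible by any of the first a primes.
        if a == 0:
            return n
        return phi(n, a - 1) - phi(n // ps[a - 1], a - 1)

    return phi(x, a) + a - 1

print(legendre_pi(100))  # -> 25
```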

For a while I was convinced by others that I needed algebraic integers, which are numbers defined to be the roots of monic polynomials with integer coefficients. You know, like x^2 + 2x + 2, which is monic because its leading coefficient is 1.
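As a quick illustration of the definition, the roots of x^2 + 2x + 2 are -1 + i and -1 - i by the quadratic formula, and checking that they satisfy the monic polynomial takes a few lines of Python:

```python
# Quick check: x^2 + 2x + 2 is monic (leading coefficient 1), and its roots,
# -1 + i and -1 - i from the quadratic formula, satisfy it, so both roots
# are algebraic integers by the definition above.
def poly(x):
    return x * x + 2 * x + 2

for r in (-1 + 1j, -1 - 1j):
    assert abs(poly(r)) < 1e-12
print("both roots of x^2 + 2x + 2 check out")
```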

So I put object mathematics to the side, hoping that maybe mathematicians had indeed built up the infrastructure needed for my FLT work, but then a little later I found out that no, they hadn't, and in fact there was this intriguing little problem with algebraic integers.

That forced me to go back and blow the dust off of my work on object mathematics, and I finally worked it out thoroughly within the last few weeks, as part of the polishing process.

Then I was surprised to find that mathematicians seemed not to know basic things about their own work, which, thinking back to Ptolemy, doesn't surprise me now, as when you have a lot of excess based on unnecessary rules, people can learn things by rote and not understand.

So mathematicians apparently don't understand that including fractions like 1/2 with numbers like integers gives you the field of reals. Their belief comes from arbitrary rules where they exclude infinite sums on an ad hoc basis.

Seeing that is easy. Consider that

1/(k-1) = 1/k + 1/k^2 + …

when k is an integer other than 0, 1, or -1, which is easy to prove in the classic way using a sum S:

S = 1/k + 1/k^2 + … = (1/k)(1 + 1/k + 1/k^2 + …) = (1/k)(1 + S), so

kS = 1 + S, and therefore S = 1/(k-1).

So if you add 1/2 in with the integers, you have 1/4 = (1/2)(1/2), so the formula above with k=4 gives you 1/3, and now you can have 1/12 = (1/4)(1/3), which with k=12 gives you 1/11, and that process leads you on and on until you have the field of reals.
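The series identity and the chain of fractions it generates can be sanity-checked numerically in Python; this is a finite-precision check, not a proof:

```python
# Finite-precision sanity check of 1/(k-1) = 1/k + 1/k^2 + ... and of the
# chain of fractions described above; a numeric check only, not a proof.
def geometric_tail(k, terms=60):
    """Partial sum 1/k + 1/k^2 + ... + 1/k**terms."""
    return sum(1 / k**n for n in range(1, terms + 1))

assert abs(geometric_tail(2) - 1.0) < 1e-12       # k = 2 gives 1/(2-1) = 1
# 1/4 = (1/2)(1/2), and the formula with k = 4 gives 1/3:
assert abs(geometric_tail(4) - 1 / 3) < 1e-12
# 1/12 = (1/4)(1/3), and the formula with k = 12 gives 1/11:
assert abs(geometric_tail(12) - 1 / 11) < 1e-12
```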

I say that such infinite sums are decidable since you can get an answer, and there's no reason to exclude them. Mathematicians want them excluded, so they yelp and start tossing out arbitrary rules. And I think back to Ptolemy.

So that's an overview of my work, and for LOTS of mathematics you can check

http://groups.msn.com/AmateurMath

where I go into a lot of detail, giving a short proof of FLT, my prime counting function and its associated partial differential equation, and I have a paper outlining the problem with algebraic integers.

Also I have the framework object mathematics with discussion on why I unearthed it, and I even connect back to Gauss's Fundamental Theorem of Algebra.

Basically I do a lot in a few pages, which is what you can do with concise and potent mathematics. If you can make it through and understand it all, you are at least a hundred years ahead of current mathematicians. But don't tell them that as they seem to get upset very easily.

Saturday, August 09, 2003


Problem with Algebraic Integers: Detailed Exposition

A mathematical proof begins with a truth, and proceeds by logical steps to a conclusion which then must be true.

I've pulled out a detailed exposition of a short argument that quickly shows a problem with algebraic integers. It starts after the reference.

Message-ID: <3c65f87.0308081016.460d766c@posting.google.com>

Now here's a math proof. Those who doubt that fact can believe it's a claim of proof, but it's verified to be a proof by tracing the argument out.

In this case, I begin with an expression. The expression exists, so that is the truth from which you start.

Consider, in the ring of algebraic integers,

P(m) = f^2((m^3 f^4 - 3m^2 f^2 + 3m) x^3 - 3(-1 + mf^2) x u^2 + u^3 f).

That is, I have the identity which defines P(m) in terms of various symbols, and it's all in the ring of algebraic integers, which means that the symbols can only represent numbers that are algebraic integers.

Now using b_1, b_2, b_3, w_1, w_2, and w_3, I have the factorization

P(m)/f^2 = (b_1 x + u w_1)(b_2 x + u w_2)(b_3 x + u w_3)

where w_1 w_2 w_3 = f, and

b_1 b_2 b_3 = (m^3 f^4 - 3m^2 f^2 + 3m),

and at m=0

P(0)/f^2 = 3xu^2 + u^3 f = u^2(3x + uf),

so two of the b's must equal 0, which means

P(0)/f^2 = w_1 w_2 u^2 (b_3 x + u w_3)

which is

P(0)/f^2 = u^2 (b_3 w_1 w_2 x + u f) = u^2(3x + uf)

proving that w_1 w_2 must equal 1, if f is coprime to 3, which leaves b_3 = 3.
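The bookkeeping in that factorization can be spot-checked numerically: expanding (b_1 x + u w_1)(b_2 x + u w_2)(b_3 x + u w_3), the x^3 coefficient is b_1 b_2 b_3 and the constant term is u^3 w_1 w_2 w_3. A quick sketch in Python, with arbitrary illustrative values:

```python
# Numeric check of the cubic factorization bookkeeping: expanding the
# product of the three linear factors, the x^3 coefficient is b_1 b_2 b_3
# and the constant term is u^3 w_1 w_2 w_3.
# All numeric values below are arbitrary and purely illustrative.
def poly_mul(p, q):
    """Multiply coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

u = 5
bs = [2, 3, 4]
ws = [1, 6, 7]
prod = [1]
for b, w in zip(bs, ws):
    prod = poly_mul(prod, [u * w, b])  # (b x + u w) as [constant, x-coefficient]

assert prod[3] == 2 * 3 * 4          # x^3 coefficient is b_1 b_2 b_3
assert prod[0] == u**3 * (1 * 6 * 7) # constant term is u^3 w_1 w_2 w_3
```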

Now that was a lot of steps, but each was a logical one.

First I introduced b_1, b_2, b_3, w_1, w_2, and w_3, which are defined by the factorization

P(m)/f^2 = (b_1 x + u w_1)(b_2 x + u w_2)(b_3 x + u w_3)

then I set m=0, and used the definition of P(m) to get P(0).

That told me that at m=0 two of the b's are 0, because then

P(0)/f^2 = 3xu^2 + u^3 f = u^2(3x + uf),

where the "u^2" couldn't get there unless two of the b's are 0.
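That m=0 evaluation can be checked directly from the definition of P(m). A small Python sanity check, with arbitrary integer sample values:

```python
# Numeric sanity check that setting m = 0 in the definition of P(m) gives
# P(0)/f^2 = 3*x*u**2 + u**3*f = u**2*(3*x + u*f).
# The integer sample values below are arbitrary (illustrative only).
def P(m, x, u, f):
    return f**2 * ((m**3 * f**4 - 3 * m**2 * f**2 + 3 * m) * x**3
                   - 3 * (-1 + m * f**2) * x * u**2
                   + u**3 * f)

for x, u, f in [(5, 7, 11), (2, -3, 4)]:
    # Compare P(0) against f^2 * u^2 * (3x + uf); integer arithmetic, exact.
    assert P(0, x, u, f) == f**2 * u**2 * (3 * x + u * f)
```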

Then using that result I get from

P(m)/f^2 = (b_1 x + u w_1)(b_2 x + u w_2)(b_3 x + u w_3)

that

P(0)/f^2 = w_1 w_2 u^2 (b_3 x + u w_3)

and distributing w_1 w_2 inside, using w_1 w_2 w_3 = f, I have

P(0)/f^2 = u^2 (b_3 w_1 w_2 x + u f)

which with

P(0)/f^2 = 3xu^2 + u^3 f = u^2(3x + uf),

tells me that w_1 w_2 = 1, when m=0.

Essentially objections to how f^2 divides off now come down to claiming that the w's are functions of m, but consider that w_1 w_2 = 1, when m=0, if f is coprime to 3.

Now I'm focusing on what has been revealed to be an area of confusion. Apparently some people believe that when I divide off f^2 that it can divide off as a function of m, so that m=0 might be a special case. I'm now starting the argument to address that belief by noting again that w_1 w_2 = 1, when m=0, if f is coprime to 3. That is, when f doesn't have 3 as a factor.

But that was an arbitrary choice, so let f=3.

That is, I said f is coprime to 3 but in considering this possibility it's worth it to relax that restriction and now consider what would happen if it equals 3.

Now w_1 w_2 = 3^{2/3} WITHOUT REGARD TO m.

Seeing that is as simple as looking at

P(m)/f^2 = (m^3 f^4 - 3m^2 f^2 + 3m) x^3 - 3(-1 + mf^2) x u^2 + u^3 f

with f=3 as then you have

P(m)/3^2 = (m^3·3^4 - 3m^2·3^2 + 3m) x^3 - 3(-1 + m·3^2) x u^2 + 3u^3

so every coefficient has a factor that is 3, as you can tell by looking.

So with

P(m)/3^2 = (b_1 x + u w_1)(b_2 x + u w_2)(b_3 x + u w_3)

each of the b's and each of the w's has a factor that is 3^{1/3}; and while the b's can have additional factors in common with 3, the w's cannot, as when 3 is separated out, notice you have

P(m)/3^2 = 3((m^3·3^3 - 3m^2·3 + m) x^3 - (-1 + m·3^2) x u^2 + u^3).
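That overall factor of 3 can be confirmed numerically from the definition of P(m). A small Python check, with arbitrary integer sample values for m, x, and u:

```python
# Check that with f = 3, P(m)/3^2 has an overall factor of 3, i.e.
# P(m) = 9 * 3 * ((27*m^3 - 9*m^2 + m)*x^3 - (-1 + 9*m)*x*u^2 + u^3),
# for a few arbitrary integer sample values of m, x, u (illustrative only).
def P(m, x, u, f):
    return f**2 * ((m**3 * f**4 - 3 * m**2 * f**2 + 3 * m) * x**3
                   - 3 * (-1 + m * f**2) * x * u**2
                   + u**3 * f)

for m, x, u in [(0, 2, 3), (1, 5, 7), (2, -4, 1)]:
    inner = (27 * m**3 - 9 * m**2 + m) * x**3 - (-1 + 9 * m) * x * u**2 + u**3
    assert P(m, x, u, 3) == 9 * 3 * inner  # exact integer arithmetic
```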

But before, at m=0, the w's were coprime to f; now, when f=3, they are not, yet they are still constant. Clearly, they are constant in both cases with respect to m, without regard to the value of f. Which makes sense, as f^2 is not a function of m, and it is what is being divided off.

That is, if they were functions of m, so that w_1 w_2 = 1 at m=0 as a function of m, then it wouldn't matter if f had a factor of 3 or not, you'd STILL get w_1 w_2 = 1 at m=0, without regard to the value of f.

But in fact, if f=3, you have w_1 w_2 = 3^{2/3} at m=0, which only works if the w's are independent of m, which they are.

It makes sense that they are anyway, as f^2 isn't a function of m, but I've seen that for some people the idea can take hold after seeing m=0 highlighted.

But if the w's were functions of m, then w_1 w_2 would equal 1, without regard to the value of f, but it does not.

Therefore, the factorization is

P(m)/f^2 = (m^3 f^4 - 3m^2 f^2 + 3m) x^3 - 3(-1 + mf^2) x u^2 + u^3 f = (b_1 x + u)(b_2 x + u)(b_3 x + uf)

where you'll notice that the b's are algebraic integers with m=1, f=sqrt(2), but that's a special case as generally they are not, which shows a problem with the ring of algebraic integers.

And here I've packed in a lot of information as well.

First, with f coprime to 3, I now know that the factorization is

P(m)/f^2 = (b_1 x + u)(b_2 x + u)(b_3 x + uf)

as the w's are constant with respect to m, so I can just check at m=0, which revealed that w_1 w_2 = 1. Now that doesn't necessarily force w_1 and w_2 to each equal 1, but even if they were merely unit factors, that would only change b_1 and b_2.

So I have my factorization without regard to m in terms of where the f goes, and then I point out that you can actually check my work using m=1, f=sqrt(2), as then you get a polynomial which you can factor rather simply. So you can actually get the values for the b's and check them, and see that they are all algebraic integers, and all are coprime to 2.
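The m=1, f=sqrt(2) case can at least be partially checked numerically: from the definition of P(m), P(1)/f^2 reduces to x^3 - 3x u^2 + sqrt(2) u^3. A small Python check with arbitrary sample values (it verifies only that polynomial identity, not the factorization itself):

```python
# With m = 1 and f = sqrt(2), check numerically that
# P(1)/f^2 = x^3 - 3*x*u^2 + sqrt(2)*u^3, the polynomial the post suggests
# checking by hand. The sample x, u values are arbitrary (illustrative only),
# and this verifies the identity, not the claimed factorization.
from math import sqrt

def P(m, x, u, f):
    return f**2 * ((m**3 * f**4 - 3 * m**2 * f**2 + 3 * m) * x**3
                   - 3 * (-1 + m * f**2) * x * u**2
                   + u**3 * f)

f = sqrt(2)
for x, u in [(1.0, 1.0), (2.5, -1.5)]:
    lhs = P(1, x, u, f) / f**2
    rhs = x**3 - 3 * x * u**2 + sqrt(2) * u**3
    assert abs(lhs - rhs) < 1e-9
```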

However, usually, for f values that are coprime to 3, you don't get b's that are algebraic integers, which shows a problem with the ring of algebraic integers.

Now the nice thing about a mathematical proof is that if someone disagrees they have to find some misstep.

Unfortunately, people can say a proof is not a proof, even when it is, just like if you tried to say you were human and not a dog, someone might dispute any proof you might give, claiming it false.
