zex20913 wrote:
x = 0.9999999...
10x = 9.9999999...
x - 10x = 0.9999999... - 9.9999999...
-9x = -9
x = 1
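To see what the subtraction step is actually doing, here is a quick Python sketch (my own illustration, not part of the original post) that runs the same algebra on finite truncations of 0.999..., using exact fractions:

```python
from fractions import Fraction

def nines(n):
    """0.999...9 with n nines, as an exact fraction: 1 - 10**-n."""
    return Fraction(10**n - 1, 10**n)

for n in (1, 5, 10):
    x = nines(n)
    # The proof's subtraction step, done with finitely many nines:
    # 10x - x = 9x = 9 - 9/10**n, which falls short of 9 by 9/10**n.
    print(n, 10 * x - x, 9 - 9 * x)
```

For every finite truncation the subtraction leaves a remainder of 9/10**n; the disagreement in this thread is about what becomes of that remainder when the nines never end.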
That proof, which has become almost standard for this topic, is based on a fallacy. When it comes to infinity, there are some things you just can’t do. Infinity isn’t actually a number; it’s a concept, so the ordinary rules of algebra don’t hold for it. There are also many different infinities, and some are bigger than others.
Take, for example, the set of all positive real numbers ( R⁺ ) and the set of all positive integers ( Z⁺ ). There are infinitely more real numbers than there are integers. Even without a formal proof (the standard one is Cantor’s diagonal argument), you can get an intuition for this: for any positive integer x, there is a positive real number 0.x, formed by writing “0.” in front of the digits of x.
∀x ∈ Z⁺ ∃ y ∈ R⁺ s.t. y = 0.x
This means that if you were “counting” along both the real and integer number lines, by the time you have “counted” to infinity on the integers, you would not have even reached 1 on the reals.
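That pairing can be sketched in Python (my own illustration; zero_point is a hypothetical helper name, not anything from the post):

```python
def zero_point(x):
    """Map a positive integer x to the real number written as '0.' followed
    by the digits of x, i.e. the 0.x of the post."""
    return float("0." + str(x))

print(zero_point(7))    # 0.7
print(zero_point(123))  # 0.123
```

Note this is only an intuition pump, not a proof of anything about cardinality: for instance, x = 1 and x = 10 both land on 0.1.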
This means that while we use infinity in equations, we don’t actually know what number it stands for. All of the following are conceptually true:
∞ + ∞ = ∞
∞ ± x = ∞  ∀x ∈ R⁺
∞ ⋅ ∞ = ∞
-∞ ⋅ -∞ = ∞
What we cannot do is:
∞ - ∞ = 0
That doesn’t equal 0; it is indeterminate.
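As an aside, IEEE 754 floating point bakes in these same conventions, so you can watch them in Python (my own illustration, with math.inf standing in for ∞):

```python
import math

inf = math.inf
print(inf + inf)       # inf
print(inf + 12345.0)   # inf
print(inf * inf)       # inf
print(-inf * -inf)     # inf
print(inf - inf)       # nan: IEEE 754 treats this indeterminate form as NaN
```

The subtraction is the only one of these that comes back NaN (“not a number”), matching the claim above.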
In the “proof” given above, subtracting 0.9999~ from 9.9999~ does not simply yield 9. That would require a one-to-one correspondence between each and every decimal place, and that is indeterminate.