Originally posted by Pious
Please let's not associate "philosophy" with believing that 0.9r != 1 and mathematics with 0.9r = 1. The link you posted is someone's opinion, albeit a professional mathematician/philosopher's, but that doesn't make them right; there are plenty of philosophers who would disagree with it.
Harley, I'm afraid in this case you might just be wrong and have misunderstood the subtle reasoning that it takes to see that 0.9r actually refers to the same concept as "1".
I haven't misunderstood it at all, and I don't see anything subtle about the reasoning. In fact, it strikes me as blindingly obvious, almost self-evident - within the use to which the term "infinity" (or rather, "infinitesimally small") is put. I haven't disputed 0.9r = 1 since my first post on it two days ago, and I'm certainly not associating philosophy with 0.9r != 1. Maybe others are, but I'm not.
That's not what I'm saying at all.
Did you read that article, Pious?
In case not, let me clarify one thing. Philosophy and philosophy of maths aren't the same thing. Any philosopher that attempts to look at maths philosophically is going to get his knickers in a twist in short order, if he isn't also a mathematician.
I've seen some remarks on here that are either woolly or wrong, depending on how you look at it, like whether a convergent infinite series approaches a limit or reaches it. I make no pretence at being a mathematician, despite having done some maths at Uni many years ago, but in my day, that would be "approach, but never quite reach". The difference may be infinitesimally small, but it's there.
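To put that distinction in standard terms (just a sketch of the textbook treatment as I remember it - the proper maths types can correct me): every finite partial sum of the nines falls short of 1, and the infinite sum is then defined to be the limit those partial sums approach.

\[
  s_n \;=\; \sum_{k=1}^{n} \frac{9}{10^k} \;=\; 1 - \frac{1}{10^n},
  \qquad
  0.\overline{9} \;:=\; \lim_{n\to\infty} s_n \;=\; 1 .
\]

So "approach but never quite reach" is true of every partial sum; whether 0.9r names the sequence of approximations or the limit itself is precisely the definitional question.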
But whether that difference is important or not depends what you're up to. For instance, what's the square root of 2? In purely mathematical terms, we can express the result but never fully quantify it as a decimal. For almost any practical purpose, though, expressing it to some number of decimal places (depending on the purpose) will be enough. So, does that infinitesimally small difference matter? Maths often uses approximations, and sometimes has notation specifically to stand for the exact values: you can't express root 2 accurately any other way, and absent the notation you are stuck with an approximation. Recurring-decimal notation is another such device, because you can't write those numbers down exactly without it; infinity is another example, as is the sigma notation for the sum of a convergent infinite series.
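To make the root-2 point concrete, here's a throwaway Python sketch (my own illustration, nothing anyone else posted): you can crank out as many digits as you like, but every line of output is still an approximation - only the surd notation names the exact value.

    # Print root 2 to a few different precisions; every result is a truncation,
    # never the exact value (Decimal.sqrt() rounds to the current context precision).
    from decimal import Decimal, getcontext

    for digits in (10, 30, 50):
        getcontext().prec = digits          # precision in significant figures
        print(f"root 2 to {digits} significant figures: {Decimal(2).sqrt()}")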
Maths HAS to have such notations, or many problems either become almost inexpressibly cumbersome or just don't work. And as soon as you start using such notation, you have to accept what it stands for: either an approximation, or a symbol that denotes exactly the thing you can't otherwise write down.
What does 0.9r actually mean?
Many times, Alpha and other maths types have said, paraphrasing, '0.9 followed by an infinite number of nines', and have followed it up with comments like 'you can add another number onto that infinite number of nines because it's infinite'. Well, agreed. Totally agreed. But then you have to ask what "infinite" means, otherwise that statement is meaningless, and "infinite" is just a trite way of getting round a problem.
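For completeness, the mechanistic answer the maths types give to that question, as I understand it, is the limit definition sketched above, and the familiar algebraic shuffle leans on it:

\[
  x = 0.\overline{9}
  \;\Rightarrow\; 10x = 9.\overline{9}
  \;\Rightarrow\; 10x - x = 9
  \;\Rightarrow\; x = 1 .
\]

The subtraction step quietly assumes the infinite tail of nines lines up with itself exactly - which is precisely where the chosen meaning of "infinite" does the work.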
And that is where the philosophy of maths comes in - because the answer to that question could be viewed as "whatever it needs to mean to make the methodology work at the extreme, provided such is consistent with observable results". That's my paraphrasing of something Alpha said in response to comments about the nature of proofs, i.e. that something is right in maths if it can't be proven to be wrong, so if I've mischaracterised what he meant, then it's my fault. My response to that logic, by the way, is that it's better to say that something should be accepted as being right in maths until it is proven wrong. The difference in statement is, perhaps, the difference between the mechanistic and philosophical approaches to maths.
So, there's a dichotomy. Alpha said that pure maths exists in a world of its own, disparate from the real world, so that trying to resolve infinity to the billionth, gazillionth, googolinionth (nice word, huh?) or googolgazzionth (I'm getting good at this word stuff) decimal place is irrelevant in the pure world as it doesn't affect the theory, and irrelevant in the real world as you get far beyond the quantum point when you go that far. Yet, in that same, highly theoretical world of pure maths, you still have paradoxes involving infinity - like the cardinalities of the set of natural numbers and the set of even numbers being the same (the Aleph discussion), despite the self-evident fact that one set has members the other doesn't - all because of the nature of infinity.
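For anyone who missed the Aleph discussion, the "same cardinality" claim rests on a pairing argument along these lines (again, just the standard sketch):

\[
  f : \mathbb{N} \to 2\mathbb{N}, \qquad f(n) = 2n .
\]

Every natural number is matched with exactly one even number and none are left over, so both sets get the cardinality \(\aleph_0\), even though the evens are a proper subset of the naturals.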
That, without doubt (to me, at least), gets to the point of the philosophy of maths, because it comes down to the nature of infinity (and therefore of the "infinitesimally small").
My argument, therefore, is that you can answer the original question using a given system, based on the interpretation of infinity implicit in that system, and that the proofs given will work because of that implicit interpretation of infinity. The fact that this is not the only interpretation of infinity possible, even within maths, does not invalidate those proofs within those systems - and those systems are ones which, so far, have demonstrably failed to be disproved, and upon which much of our current understanding of maths, and therefore both technology and our modern lifestyles, are based. So, implicit in that is that, at least until proven otherwise, we accept those systems. After all, if someone could demonstrate that algebra no longer works, I suspect the computer I'm typing at would vanish in a puff of smoke, and we'd end up finishing this thread via smoke signals or semaphore.
So the answer to the original question, within the mathematical methodology the proofs use, is obviously "yes, it equals 1". But those methodologies themselves embed definitions of infinity, and that is ultimately a philosophical argument and, given the limits of the human brain, one we may never fully resolve - just approximate. Hence my original premise: 'it depends how you look at it'.