Urgent C# help needed!

Can someone please just explain this to me? If I output the following code:

Code:
Convert.ToString(1 / 12)

It says:

Code:
0

That's funny, because according to my calculator it is 0.083 recurring. Is there any particular reason why C# feels the need to rewrite the rules of mathematics without mentioning it? Sometimes I want to spit blood at ASP.NET for making such simple tasks so difficult.

Anyway, can someone please help? I need to do lots of calculations in this part of my app, but all I am getting are integer results, even when variables are initialised as double. Why?
 
You're doing integer division there; turn one of your numbers into a float (i.e. 1.0 instead of 1).

It would be Convert.ToString((float)1 / 12) instead, robmiller.
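
For anyone else hitting this, a quick sketch of where the cast matters (printed values are approximate, and the variable names are just for illustration):

Code:
using System;

int i = 1 / 12;              // both operands are ints, so this is integer division
float f1 = (float)1 / 12;    // the cast binds to the 1, so the division happens in float
float f2 = (float)(1 / 12);  // the int division happens first, then the result (0) is converted
double d = 1.0 / 12;         // a floating-point literal on either side has the same effect

Console.WriteLine(i);   // 0
Console.WriteLine(f1);  // roughly 0.0833333
Console.WriteLine(f2);  // 0
Console.WriteLine(d);   // roughly 0.0833333333333333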
 
growse said:
And don't forget, floats are approximations :)
Yep, a frequent pitfall for programmers is trying to compare two floating-point numbers for equality. If you want two real numbers to be precisely equal to each other, you need an exact representation (decimal in C#).
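
The usual demonstration, as a rough sketch:

Code:
using System;

double a = 0.1 + 0.2;
Console.WriteLine(a == 0.3);                  // False: both sides are binary approximations that differ slightly
Console.WriteLine(Math.Abs(a - 0.3) < 1e-9);  // True: compare floating-point values with a tolerance instead

decimal b = 0.1m + 0.2m;
Console.WriteLine(b == 0.3m);                 // True: decimal represents these values exactly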
 
Or you can just write some code to represent the exponent and the mantissa as integers. This is what I use for huge number calculations.
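
Something along these lines, just as a minimal sketch of the idea (the type and member names are made up, not the poster's actual code, and a real version would need normalisation and overflow checks):

Code:
// Keep the significant digits and a power-of-ten scale as plain integers,
// so multiplication never touches floating point.
struct BigApprox
{
    public long Mantissa;  // significant digits
    public int Exponent;   // power of ten

    public BigApprox(long mantissa, int exponent)
    {
        Mantissa = mantissa;
        Exponent = exponent;
    }

    // (m1 * 10^e1) * (m2 * 10^e2) = (m1 * m2) * 10^(e1 + e2)
    public static BigApprox Multiply(BigApprox a, BigApprox b)
    {
        // caller keeps the mantissas small enough that this product doesn't overflow
        return new BigApprox(a.Mantissa * b.Mantissa, a.Exponent + b.Exponent);
    }

    public override string ToString()
    {
        return Mantissa + "e" + Exponent;
    }
}

For example, BigApprox.Multiply(new BigApprox(683, -2), new BigApprox(5, 0)) gives 3415e-2, i.e. 34.15, with no floating point involved.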
 
Inquisitor said:
That would just convert 0 to a float :p

You have to make sure the dividend or divisor is a float, double, or decimal.

Thanks, it never ceases to amaze me how amazingly stupid this language can be.
 
How on Earth is it stupid? "Don't change datatypes unless explicitly told to do so" seems a perfectly reasonable thing for C# (or any language) to do :confused:

Imagine you want to assign the quotient of two integers to another integer; you'd get a compile-time error because you're trying to implicitly demote a double/float/decimal to an int. If you were using a language that allowed implicit demotions, then you'd unwittingly end up using a floating-point number and lose precision, even if the result would normally be integral anyway. For example, 6 / 2 should give exactly 3 as a result; if every division produced a floating-point value instead, you'd be relying on an approximate representation happening to land on exactly 3.

It's a strongly typed language; get used to it :)
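
As a rough sketch of what that compile-time check looks like in practice:

Code:
using System;

// int bad = 1.0 / 12;     // won't compile: cannot implicitly convert type 'double' to 'int'
int ok = (int)(1.0 / 12);  // an explicit cast compiles, but the fraction is truncated
double fine = 1.0 / 12;    // widening the other way needs no cast

Console.WriteLine(ok);     // 0
Console.WriteLine(fine);   // roughly 0.0833333333333333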
 
nero120 said:
Thanks, it never ceases to amaze me how amazingly stupid this language can be.

It's not stupid, it's a valid feature of the language. Floating-point arithmetic is slow compared to integer arithmetic... Languages are just a tool; they are not meant to be smart. Doing things like this is just poor programming, nothing to do with the language.

Imagine you were programming a critical application and you lost precision; it could cost lives, in an extreme circumstance.

With the exception of compilers, there is very little in the programming world that can make better decisions than a human can.
 
Una said:
It's not stupid, it's a valid feature of the language. Floating-point arithmetic is slow compared to integer arithmetic... Languages are just a tool; they are not meant to be smart. Doing things like this is just poor programming, nothing to do with the language.

Oh okay, writing something like:

Code:
decimal dmlTest = 1 / 12;

and expecting an accurate answer is "poor programming", is it? Well, I disagree; it is moronic to have to specify that either the dividend or divisor is not an integer (even when I have told it that the assigned variable is a decimal!), especially when every other language I have worked with can give the right answer to that expression. I would hardly call expecting that expression to give the right answer "smart". "Basic", more like.
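
What actually happens there, as a rough sketch: the right-hand side is evaluated as int arithmetic first, and only the result is converted to decimal.

Code:
using System;

decimal dmlTest = 1 / 12;     // 1 / 12 is int division (0); the 0 is then converted to decimal
decimal fixedTest = 1m / 12;  // a decimal literal on either operand keeps the division in decimal

Console.WriteLine(dmlTest);    // 0
Console.WriteLine(fixedTest);  // roughly 0.0833333333333333333333333333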
 
Inquisitor said:
It's a strongly typed language; get used to it :)

And that's my major issue with it. I am still getting used to this language, and I spent the whole day going over code because I couldn't figure out where the error was coming from. Silly me, I forgot to hold its hand every step of the way and wasted 4 hours. I've got on fine with other languages; I don't see why this should be any different.
 
Inquisitor said:
Read my post above.

Anyway, it's easy to fix; just use decimal literals:
Code:
decimal test = 1M / 12;

You fail to realise that someone new to the language would not realise that more work is needed to get the right answer. In everything I have read on this, not once has the issue been mentioned.
 
nero120 said:
Oh okay, writing something like:

Code:
decimal dmlTest = 1 / 12;

and expecting an accurate answer is "poor programming", is it? Well, I disagree; it is moronic to have to specify that either the dividend or divisor is not an integer (even when I have told it that the assigned variable is a decimal!), especially when every other language I have worked with can give the right answer to that expression. I would hardly call expecting that expression to give the right answer "smart". "Basic", more like.

Because integer division is often what you want; quite often you use the modulo operator to get the remainder.

The advantage of strongly typed languages is really that they give the compiler more information to do the correct optimizations. Loosely typed languages require some run-time method of inspecting the variables, and this produces overhead. It can also produce ambiguity.

I don't believe in dumbing down languages for the sake of newbie programmers. As Dijkstra said: “Don't blame me for the fact that competent programming, as I view it as an intellectual possibility, will be too difficult for 'the average programmer', you must not fall into the trap of rejecting a surgical technique because it is beyond the capabilities of the barber in his shop around the corner.”
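
To make the remainder point concrete (just a sketch):

Code:
using System;

int quotient = 17 / 5;   // integer division keeps the whole part: 3
int remainder = 17 % 5;  // the modulo operator gives what's left over: 2

// Math.DivRem does both in one call
int q = Math.DivRem(17, 5, out int r);
Console.WriteLine(quotient + " remainder " + remainder);  // 3 remainder 2
Console.WriteLine(q + " remainder " + r);                 // 3 remainder 2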
 
nero120 said:
And that's my major issue with it. I am still getting used to this language, and I spent the whole day going over code because I couldn't figure out where the error was coming from. Silly me, I forgot to hold its hand every step of the way and wasted 4 hours. I've got on fine with other languages; I don't see why this should be any different.
You don't have to "hold its hand", you just have to make sure you don't program in a sloppy fashion. Every aspect of C#'s behaviour is the result of a conscientious design decision, so you can hardly call the language stupid. Like I said, if you're getting errors like this, it's probably because you're not coding it properly.

Making the return datatype of an operator ambiguous and dependent on the context is a) impractical, b) difficult to implement, and c) a bad idea that can lead to very obscure problems; that's why the C# design team didn't implement it. It's things like this that I don't like about languages like VB: ambiguity.
 
Inquisitor said:
You don't have to "hold its hand", you just have to make sure you don't program in a sloppy fashion. Every aspect of C#'s behaviour is the result of a conscientious design decision, so you can hardly call the language stupid. Like I said, if you're getting errors like this, it's probably because you're not coding it properly.

You have to know how to "code it properly" in the first place. Maybe it is a good decision when you know all the answers, but when you simply want to do a simple piece of arithmetic and you don't get the right answer, that is stupid.

C# makes things overly difficult; my expression should be enough to tell it that I want the answer as a decimal, not an integer. As I said, it is stupid.
 
You also fail to realise that most people who code successfully in a strongly typed language actually know what a strongly typed language is and how to use it.

It only makes things overly difficult if you don't know how to code properly in a strongly typed language.

Also, if you know anything about how computers work, division is not "simple arithmetic". There's a whole host of complications based on the very simple fact that binary isn't very good at handling decimal fractions. So when you say divide x by y, the computer needs to know a) how you want the answer and b) what on earth the inputs are, because it'll probably do different things based on whether the inputs are floats, doubles or decimals. What you see in other languages as "intelligence" is actually them guessing how you want the answer, which can be very dangerous if it guesses wrong and you don't realise it.

Like I said before, floats are a useful type, but they're only an approximation of a decimal number. However, if you do a division in a language where you don't specify what types you need and it guesses that you want a float, when in fact you don't want any floats because you're doing high-accuracy work, you have an error creeping in that's almost impossible to track down.

Strongly typed languages aren't stupid; they're what most of the world's production code is written in, because they're unambiguous and very powerful.
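
A small sketch of the kind of creeping error being described:

Code:
using System;

double d = 0.0;
decimal m = 0.0m;
for (int i = 0; i < 10; i++)
{
    d += 0.1;   // each 0.1 is only an approximation in binary
    m += 0.1m;  // decimal represents 0.1 exactly
}

Console.WriteLine(d == 1.0);   // False: ten approximate 0.1s don't add up to exactly 1.0
Console.WriteLine(m == 1.0m);  // True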
 