Urgent C# help needed!

Can someone please just explain this to me? If I output the following code:

Code:
Convert.ToString(1 / 12)

It says:

Code:
0

That's funny, because according to my calculator it is 0.083 recurring. Is there any particular reason why C# feels the need to rewrite the rules of mathematics without mentioning it? Sometimes I want to spit blood at ASP.NET for making such simple tasks so difficult.

Anyway, can someone please help? I need to do lots of calculations in this part of my app, but all I am getting are integer results, even when variables are initialised as double. Why?
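For anyone else who lands on this thread, a minimal sketch of what is actually happening (a plain console program; the variable choices are just for illustration):

```csharp
using System;

class IntegerDivisionDemo
{
    static void Main()
    {
        // Both literals are of type int, so "/" performs integer
        // division: the fractional part is truncated to 0 before
        // Convert.ToString ever sees the value.
        Console.WriteLine(Convert.ToString(1 / 12));  // prints "0"

        // Making either operand floating-point changes the division.
        Console.WriteLine(1.0 / 12);  // ~0.0833, no longer 0
    }
}
```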
 
Inquisitor said:
That would just convert 0 to a float :p

You have to make sure the dividend or divisor is a float, double, or decimal.

Thanks. It never ceases to amaze me how stupid this language can be.
 
Una said:
It's not stupid, it's a valid feature of the language. Floating-point arithmetic is slow compared to integer arithmetic... Languages are just a tool; they are not meant to be smart. Doing things like this is just poor programming, nothing to do with the language.

Oh okay, writing something like:

Code:
decimal dmlTest = 1 / 12;

and expecting an accurate answer is "poor programming", is it? Well, I disagree. It is moronic to have to specify that either the dividend or divisor is not an integer (even when I have told it that the assigned variable is a decimal!), especially when every other language I have worked with gives the right answer to that expression. I would hardly call expecting that expression to give the right answer "smart". "Basic", more like.
 
Inquisitor said:
It's a strongly typed language; get used to it :)

And that's my major issue with it. I am still getting used to this language, and I spent the whole day going over code because I couldn't figure out where the error was coming from. Silly me, I forgot to hold its hand every step of the way, and wasted four hours. Other languages have got on fine; I don't see why this should be any different.
 
Inquisitor said:
Read my post above.

Anyway, it's easy to fix; just use decimal literals:
Code:
decimal test = 1M / 12;

You fail to realise that someone new to the language would not realise more work is needed to get the right answer. In everything I have read on this, not once has this issue been mentioned.
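For the record, only one operand has to be a decimal for the division itself to be carried out in decimal; any of these forms works (a sketch):

```csharp
using System;

class DecimalForms
{
    static void Main()
    {
        decimal a = 1M / 12;          // M suffix on the dividend
        decimal b = 1 / 12M;          // M suffix on the divisor
        decimal c = (decimal)1 / 12;  // explicit cast before dividing

        // All three give the same decimal result (~0.0833...).
        Console.WriteLine(a == b && b == c);  // prints "True"
    }
}
```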
 
Inquisitor said:
You don't have to "hold its hand", you just have to make sure you don't program in a sloppy fashion. Every aspect of C#'s behaviour is the result of a conscientious design decision, so you can hardly call the language stupid. Like I said, if you're getting errors like this, it's probably because you're not coding it properly.

You have to know how to "code it properly" in the first place. Maybe it is a good decision when you know all the answers, but when you simply want to do a simple piece of arithmetic and you don't get the right answer, that is stupid.

C# makes things overly difficult; my expression should be enough to tell it that I want the answer in decimal, not integer. As I said, it is stupid.
 
Una said:
Because you quite often use the modulo operator for the remainder.

The advantage of strongly typed languages is really that they give the compiler more information to do the correct optimizations.

If I write:

Code:
int test = 1 / 12;

and it gives me 0, fair enough.

But if I write:

Code:
decimal test = 1 / 12;

and still get 0, then that is stupid. I should not also have to specify on the RHS that I want the answer in decimal. There is no excuse for that; it creates more work for the programmer (not to mention you have to KNOW to do that in the first place) when the compiler already has enough information from the LHS for its optimisations.

I think we shall have to agree to disagree here. I really don't like some things about this language; you have to work too hard for it, and it makes things more complex and bloated in the process. But that is just me.
 
growse said:
You also fail to realise that most people who code successfully in a strongly typed language actually know what a strongly typed language is and how to use it.

It only makes things overly difficult if you don't know how to code properly in a strongly typed language.

I hope you feel better about yourself for it. Unfortunately we can't all be as nerdy as you; some people have to learn these things. Nowhere have I read that my initial expression would deliver anything other than the right answer, and I do not see how it is unreasonable to expect the compiler to know that I want a decimal answer, AS MY VARIABLE HAS BEEN INITIALISED AS A DECIMAL.
 
Inquisitor said:
As I said, this is done to avoid ambiguity. As an example, should the return type of a function be dependent on the context? Of course not; if anything, the context should be dependent on the return type. The same applies to operators (they're almost the same thing, after all); the return type is defined by the operands, not by what you're trying to do with the result. If this wasn't the case, you'd get all sorts of ugly problems. As for not knowing about it, that's not the fault of the language, is it? :)

You are incorrect. A function's return type is not defined by its parameter types, but by its declared return type. For example:

Code:
protected decimal divide(int value1, int value2)

This function does not (in theory) return an integer, does it? So why is it so unreasonable to expect the divide operator to behave similarly? As you said, "they're almost the same thing after all" - that is, of course, unless the divide operator (/) is specified to return an integer, which of course it is not.

As I said, now I know, and I will not make the same mistake again. But I find again and again that I am wasting hours trying to find errors when some quaint little feature I missed turns out to be causing the problem. It is very annoying.
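For what it's worth, a divide function like the one sketched above still needs a cast in its body; the decimal return type alone does not change how / behaves on the two int parameters. A hedged sketch of one working version (made static here so it compiles standalone):

```csharp
using System;

class DivideExample
{
    // Without the cast, value1 / value2 would be integer division and
    // the truncated result would merely be widened to decimal afterwards.
    static decimal Divide(int value1, int value2)
    {
        return (decimal)value1 / value2;
    }

    static void Main()
    {
        Console.WriteLine(Divide(1, 12));  // ~0.0833..., not 0
    }
}
```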
 
growse said:
No it hasn't.

1 is an integer. Not a decimal. 12 is also an integer.

Had you declared:

Code:
decimal f = 12;
decimal p = 1;
decimal ans = p/f;

Then that is declaring it as a decimal. Just writing 1 is saying "here's an integer, value 1".

Wonderful, three lines of code to accomplish what should be done in one.

Why bother writing "decimal" in front of the variable name then? If the compiler is just going to ignore it, what is the point? Oh, but it will give me a compile error if I don't. Hmm, that's smart, isn't it?

You seem to be ranting because you don't have a clue. Everyone has to learn from somewhere, and most people manage it just fine.

Obviously, thanks to you fine people, I now know the truth. Now I can laugh at people who find this language a pain in the ass. Oh joy.
 
growse said:
Ok, lets explain it again.

1/12 means "Take the integer 1 and divide it by the integer 12".

decimal ans = x means "take whatever the hell x is and try to cast it to a decimal".

So first, the code evaluates the first statement. The answer is 0 (as an integer). It then takes that 0 and casts it to a decimal. It gets 0, as a decimal. This is your answer.

Division is an operator; it knows about different types of numbers and therefore does different things with different arguments. Dividing two integers is a good, fast way of getting the whole part of the answer without the remainder, if that's what you need. The reason a function is different is that you're explicitly saying you're going to get something different out from what you put in. This all makes perfect sense if you have even the slightest clue about how a computer handles the execution of code. God help you if you ever decide to learn C, where everything can effectively be described as just a pointer to a bit of memory.
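The two-step evaluation described above can be written out explicitly (a sketch):

```csharp
using System;

class TwoStepEvaluation
{
    static void Main()
    {
        // Step 1: the RHS is evaluated on its own, with no knowledge
        // of the variable it will be assigned to. int / int gives 0.
        int inner = 1 / 12;

        // Step 2: the int 0 is implicitly widened to decimal 0.
        decimal ans = inner;

        // Casting an operand first changes the division itself.
        decimal fixedAns = (decimal)1 / 12;

        Console.WriteLine(ans);       // prints "0"
        Console.WriteLine(fixedAns);  // ~0.0833...
    }
}
```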

I can appreciate that, but please tell me: why would the compiler assume that both my 1 and 12 are of integer type? When a variable of decimal type is equal to 1 (or 1.0000...), does the variable's type change to integer? No. So why does it assume I am dividing two "variables" of integer type? I suppose that was just a decision someone made, who then said that if you want to divide other types you have to specify it explicitly. But then what type will it give me if I divide 1 and 0.5?
 
growse said:
When you write "1", the compiler assumes it's an integer. If you write "0.5" it assumes it's a double. The reason for this is that these are the simplest types it can use for those particular bits of data. If it went round assuming that everything was a decimal, that would be hugely inefficient and wasteful if you were doing mainly integer arithmetic.

The return type of 1/12 is therefore an integer, and the return type of 1/0.5 is a double, because you're dividing an integer by a double and the answer to that is always a double (it's the more complex type). If you try to divide a double by a decimal, you'll find you can't; you get an error at compile time.

Variables also don't change type. If you specify decimal d = 1M; then that's always a decimal and will never change type.
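Those promotion rules can be checked directly; the commented-out line is the one that fails to compile (a sketch):

```csharp
using System;

class PromotionCheck
{
    static void Main()
    {
        Console.WriteLine((1 / 12).GetType());    // System.Int32
        Console.WriteLine((1 / 0.5).GetType());   // System.Double
        Console.WriteLine((1 / 0.5M).GetType());  // System.Decimal

        // There is no implicit conversion between double and decimal
        // in either direction, so this is a compile-time error:
        // var bad = 1.0 / 0.5M;
    }
}
```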

Thanks. A couple of last questions:

1.) What does the 'M' stand for?

2.) If I use 1 / 12 in the middle of a larger expression, will it still evaluate to an integer? e.g.:

Code:
decimal test = 3.5 + (2 * (1 / 12))
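On question 2: yes, the inner (1 / 12) is still integer division wherever it appears; the surrounding expression and the declared variable type have no effect on it. (Note also that the line as written would not compile, because 3.5 is a double literal and doubles do not convert implicitly to decimal.) A sketch:

```csharp
using System;

class InnerDivision
{
    static void Main()
    {
        // The inner 1 / 12 is int / int, so it is 0 here too.
        double stillZeroInside = 3.5 + (2 * (1 / 12));
        Console.WriteLine(stillZeroInside);  // prints "3.5"

        // With M suffixes the whole expression stays in decimal.
        decimal test = 3.5M + (2 * (1M / 12));
        Console.WriteLine(test);  // ~3.6666...
    }
}
```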
 