Urgent C# help needed!

Una said:
Because quite often you use the modulo operator for the remainder.

The advantage of strongly typed languages is really that they give the compiler more information to do the correct optimizations.

If I write:

Code:
int test = 1 / 12;

and it gives me 0, fair enough.

But if I write:

Code:
decimal test = 1 / 12;

and still get 0 then that is stupid. I should not have to specify on the RHS as well that I want the answer in decimal. There is no excuse for that; it creates more work for the programmer (not to mention you have to KNOW to do that in the first place) when the compiler should have enough information for its optimisations on the LHS.
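For what it's worth, both behaviours can be sketched side by side; the `m` suffix below marks a decimal literal, which is the one-character fix being argued about:

```csharp
using System;

// Both operands are ints, so / performs integer division.
int intResult = 1 / 12;
Console.WriteLine(intResult); // 0

// The 'm' suffix makes 1 a decimal literal, so decimal division is used.
decimal decResult = 1m / 12;
Console.WriteLine(decResult); // 0.0833...
```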

I think we shall have to agree to disagree here; I really don't like some things about this language. You have to work too hard for it, and it makes things more complex and bloated in the process. But that is just me.
 
growse said:
You also fail to realise that most people who code successfully in a strongly typed language actually know what a strongly typed language is and how to use it.

It only makes things overly difficult if you don't know how to code properly in a strongly typed language.

I hope you feel better about yourself for it. Unfortunately we can't all be as nerdy as you; some people have to learn these things. Nowhere have I read that my initial expression would deliver anything other than the right answer, and I do not see how it is unreasonable to expect the compiler to know that I want a decimal answer, AS MY VARIABLE HAS BEEN INITIALISED AS A DECIMAL.
 
nero120 said:
and still get 0 then that is stupid. I should not have to specify on the RHS as well that I want the answer in decimal. There is no excuse for that; it creates more work for the programmer (not to mention you have to KNOW to do that in the first place) when the compiler should have enough information for its optimisations on the LHS.
As I said, this is done to avoid ambiguity. As an example, should the return type of a function be dependent on the context? Of course not; if anything, the context should be dependent on the return type. The same applies to operators (they're almost the same thing after all); the return type is defined by the operands, not what you're trying to do with them. If this wasn't the case, you'd get all sorts of ugly problems. As for not knowing about it, that's not the fault of the language, is it? :)
 
nero120 said:
If I write:

Code:
int test = 1 / 12;

and it gives me 0, fair enough.

But if I write:

Code:
decimal test = 1 / 12;

and still get 0 then that is stupid. I should not have to specify on the RHS as well that I want the answer in decimal. There is no excuse for that; it creates more work for the programmer (not to mention you have to KNOW to do that in the first place) when the compiler should have enough information for its optimisations on the LHS.

I think we shall have to agree to disagree here; I really don't like some things about this language. You have to work too hard for it, and it makes things more complex and bloated in the process. But that is just me.

In loosely typed languages like PHP the casts (coercion) happen automatically. When you're implicitly doing casts behind the scenes it makes results less predictable - this is a big downside in complex formulae.
 
nero120 said:
I hope you feel better about yourself for it. Unfortunately we can't all be as nerdy as you; some people have to learn these things. Nowhere have I read that my initial expression would deliver anything other than the right answer, and I do not see how it is unreasonable to expect the compiler to know that I want a decimal answer, AS MY VARIABLE HAS BEEN INITIALISED AS A DECIMAL.

No it hasn't.

1 is an integer. Not a decimal. 12 is also an integer.

Had you declared:

Code:
decimal f = 12;
decimal p = 1;
decimal ans = p/f;

Then that is declaring it as a decimal. Just writing 1 is saying "here's an integer, value 1".

You seem to be ranting because you don't have a clue. Everyone has to learn from somewhere, and most people manage it just fine.
 
Inquisitor said:
As I said, this is done to avoid ambiguity. As an example, should the return type of a function be dependent on the context? Of course not; if anything, the context should be dependent on the return type. The same applies to operators (they're almost the same thing after all); the return type is defined by the operands, not what you're trying to do with them. If this wasn't the case, you'd get all sorts of ugly problems. As for not knowing about it, that's not the fault of the language, is it? :)

You are incorrect. A function's return type is not defined by its parameter types, but by its declared return type. For example:

Code:
protected decimal divide(int value1, int value2)

This function does not (in theory) return an integer, does it? So why is it so unreasonable to expect the divide operator to behave differently? As you said, "they're almost the same thing after all" - that is, unless the divide operator (/) is specified to return an integer, which of course it is not.
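For illustration, here is a sketch of how a function with that signature would actually have to be written in C# to return the fractional result; the function name is hypothetical, and the cast on one operand is exactly the detail being argued about:

```csharp
using System;

// Declared to return decimal, but the body must still avoid
// integer division by casting one operand before dividing.
static decimal Divide(int value1, int value2)
{
    return (decimal)value1 / value2;
}

Console.WriteLine(Divide(1, 12)); // non-zero: 0.0833...
```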

As I said, now I know I will not make the same mistake again. But I find again and again that I am wasting hours trying to find errors when some quaint little feature that I missed is causing the problem. It is very annoying.
 
growse said:
No it hasn't.

1 is an integer. Not a decimal. 12 is also an integer.

Had you declared:

Code:
decimal f = 12;
decimal p = 1;
decimal ans = p/f;

Then that is declaring it as a decimal. Just writing 1 is saying "here's an integer, value 1".

Wonderful, 3 lines of code to accomplish what should be done in one.

Why bother writing "decimal" in front of the variable name then? If the compiler is just going to ignore it, what is the point? Oh, but it will give me a compile error if I don't. Hmm, that's smart, isn't it?

You seem to be ranting because you don't have a clue. Everyone has to learn from somewhere, and most people manage it just fine.

Obviously, thanks to you fine people, I now know the truth. Now I can laugh at people who find this language a pain in the ass. Oh joy.
 
nero120 said:
You are incorrect.
No I'm not.

The + operator used in this code:
Code:
int x = 1;
int y = 2;
int z = x + y;
is not the same + operator as is being used in this code:
Code:
double x = 1;
double y = 2;
double z = x + y;
They are doing two entirely different things: one is adding two ints, the other is adding two doubles, so they do not necessarily give identical results. This is called operator overloading.
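The difference between the two overloads is easiest to see with division, where the same digits really do give different answers (a minimal sketch):

```csharp
using System;

int ix = 1, iy = 2;
double dx = 1, dy = 2;

// Same symbol, two different operations chosen from the operand types:
Console.WriteLine(ix / iy); // integer division: 0
Console.WriteLine(dx / dy); // floating-point division: 0.5
```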

It would be absurd for an operator's implementation to be defined by the datatype you're trying to get from it.

It's equivalent to having the following two functions:
Code:
public static int Add(int x, int y);

public static double Add(double x, double y);

In fact, operators are so similar to functions that you can even overload them with your own implementations. Some of the built-in operators are an exception, such as integer division, which is done by the processor's ALU (I think; someone correct me if I'm wrong).
 
Ok, let's explain it again.

1/12 means "Take the integer 1 and divide it by the integer 12".

decimal ans = x means "take whatever the hell x is and try to cast it to a decimal".

So first, the code evaluates the first statement. The answer is 0 (as an integer). It then takes that 0 and casts it to a decimal. It gets 0, as a decimal. This is your answer.

Division is an operator; it knows about different types of numbers and therefore does different things with different arguments. Dividing two integers is a good, fast way of getting the whole part of the answer without the remainder, if that's what you need. The reason a function is different is that you're explicitly saying that you're going to get something different out from what you put in. This all makes perfect sense if you have even the slightest clue about how a computer handles the execution of code. God help you if you ever decide to learn C, where everything can effectively be described as just a pointer to a bit of memory.
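The two steps described above can be sketched directly:

```csharp
using System;

// Step 1: 1 / 12 is integer division, giving the int 0.
// Step 2: that int 0 is implicitly converted to decimal.
decimal ans = 1 / 12;
Console.WriteLine(ans); // 0

// Casting one operand first makes step 1 a decimal division instead.
decimal betterAns = (decimal)1 / 12;
Console.WriteLine(betterAns); // 0.0833...
```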
 
growse said:
Ok, let's explain it again.

1/12 means "Take the integer 1 and divide it by the integer 12".

decimal ans = x means "take whatever the hell x is and try to cast it to a decimal".

So first, the code evaluates the first statement. The answer is 0 (as an integer). It then takes that 0 and casts it to a decimal. It gets 0, as a decimal. This is your answer.

Division is an operator; it knows about different types of numbers and therefore does different things with different arguments. Dividing two integers is a good, fast way of getting the whole part of the answer without the remainder, if that's what you need. The reason a function is different is that you're explicitly saying that you're going to get something different out from what you put in. This all makes perfect sense if you have even the slightest clue about how a computer handles the execution of code. God help you if you ever decide to learn C, where everything can effectively be described as just a pointer to a bit of memory.

I can appreciate that, but please tell me: why would the compiler assume that both my 1 and 12 are of integer type? When a variable of decimal type is equal to 1 (or 1.0000...), does the variable's type change to integer? No. So why does it assume I am dividing two "variables" of integer type? I suppose that was just a decision someone made, and then they said that if you want to divide other types you have to specify it explicitly. But then what type will it give me if I divide 1 and 0.5?
 
When you're doing decimal p = 1;

there is no chance of ambiguity. The compiler knows that it can coerce the integer literal 1 to a decimal and store it in the memory at location p.

When you're doing decimal dmlTest = 1 / 12; I'm not sure what the internal representation of decimal is in C# (likely to be 128-bit), but when the result starts life as an integer (32-bit) you are going to lose precision. All you are really doing when you declare a data type is allocating the storage space for it.
 
Inquisitor said:
In fact, operators are so similar to functions that you can even overload them with your own implementations. Some of the built-in operators are an exception, such as integer division, which is done by the processor's ALU (I think; someone correct me if I'm wrong).

Yeah, integer division is handled by the ALU; the FPU handles floating-point division.

Pointless fact: the slowest operations are division and modulus. In fact, floating-point division is as fast as or faster than integer division :p .
 
When you write "1", the compiler assumes it's an integer. If you write "0.5" it assumes it's a double. The reason for this is that these are the simplest types it can use for those particular bits of data. If it went round assuming that everything was a decimal, that would be hugely inefficient and wasteful if you were mainly doing integer arithmetic.

The return type of 1/12 is therefore an integer, and the return type of 1/0.5 is a double, because you're dividing an integer by a double and the answer to that is always a double (it's the more complex type). If you try to divide a double by a decimal, you'll find you can't; you get an error at compile time.

Variables also don't change type. If you specify decimal d = 1M; then that's always a decimal and will never change type.
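Those rules can be checked directly; a small sketch (the commented-out line is the double/decimal combination that fails to compile):

```csharp
using System;

var a = 1 / 12;  // both operands int -> int result
var b = 1 / 0.5; // int is promoted to double -> double result
Console.WriteLine(a.GetType()); // System.Int32
Console.WriteLine(b.GetType()); // System.Double

// decimal bad = 1.5 / 1m; // compile error: no / operator for (double, decimal)
```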
 
Una said:
Pointless fact: the slowest operations are division and modulus. In fact, floating-point division is as fast as or faster than integer division :p .

I didn't know this - be interesting to do a speed test comparing the division of, say, 48 and 3, and then 48M and 3M. I would have thought the integer would have been quicker, but perhaps not :)
 
growse said:
I didn't know this - be interesting to do a speed test comparing the division of, say, 48 and 3, and then 48M and 3M. I would have thought the integer would have been quicker, but perhaps not :)
decimals are not binary floating point handled by the FPU; they're 128-bit base-10 numbers implemented in software, so the division of 48M and 3M would be a lot slower than that of 48 and 3 ;)
 
growse said:
I didn't know this - be interesting to do a speed test comparing the division of, say, 48 and 3, and then 48M and 3M. I would have thought the integer would have been quicker, but perhaps not :)

It's really hard to benchmark this sort of stuff with pipelining and superscalar units in CPUs now. Using inline SSE instructions for single-precision division, compiler intrinsics, or even multiplication/shifts instead of division can help.

It's not really very practical to consider this type of stuff, but a friend of mine writes 4kb demos, which require this sort of micro-optimisation :p I also find it quite interesting.
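If anyone wants to try the speed test growse suggested, a very rough sketch with Stopwatch; results depend heavily on the JIT, pipelining and the CPU, so treat any numbers as illustrative only:

```csharp
using System;
using System.Diagnostics;

const int N = 10_000_000;

// Time N integer divisions; the accumulator stops dead-code elimination.
int iAcc = 0;
var sw = Stopwatch.StartNew();
for (int i = 1; i <= N; i++) iAcc += 48 / i;
sw.Stop();
Console.WriteLine($"int division:    {sw.ElapsedMilliseconds} ms");

// Time N double divisions for comparison.
double dAcc = 0;
sw.Restart();
for (int i = 1; i <= N; i++) dAcc += 48.0 / i;
sw.Stop();
Console.WriteLine($"double division: {sw.ElapsedMilliseconds} ms");
```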
 
growse said:
When you write "1", the compiler assumes it's an integer. If you write "0.5" it assumes it's a double. The reason for this is that these are the simplest types it can use for those particular bits of data. If it went round assuming that everything was a decimal, that would be hugely inefficient and wasteful if you were mainly doing integer arithmetic.

The return type of 1/12 is therefore an integer, and the return type of 1/0.5 is a double, because you're dividing an integer by a double and the answer to that is always a double (it's the more complex type). If you try to divide a double by a decimal, you'll find you can't; you get an error at compile time.

Variables also don't change type. If you specify decimal d = 1M; then that's always a decimal and will never change type.

Thanks. A couple of last questions:

1.) What does the 'M' stand for?

2.) If I use 1 / 12 in the middle of a larger function, will it still equate to an integer? e.g:

Code:
decimal test = 3.5 + (2 * (1 / 12));
 
nero120 said:
1.) What does the 'M' stand for?
No idea; they couldn't use D because that's already used for doubles :p

nero120 said:
2.) If I use 1 / 12 in the middle of a larger function, will it still equate to an integer?
Yup. As I've said, it's entirely independent of the context; 1 / 12 will always evaluate to the same thing regardless of what expression it is part of.
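A quick sketch of that context-independence; double is used on the left here because 3.5 is a double literal, and (as discussed below) a double result will not implicitly convert to decimal:

```csharp
using System;

// (1 / 12) is an int expression (value 0) wherever it appears,
// so the whole right-hand side collapses to 3.5 + 0.
double test = 3.5 + (2 * (1 / 12));
Console.WriteLine(test); // 3.5
```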
 
nero120 said:
2.) If I use 1 / 12 in the middle of a larger function, will it still equate to an integer? e.g:

Code:
decimal test = 3.5 + (2 * (1 / 12))

decimal test = 3.5 + (2 * (1 / 12));

That won't work; you're missing a cast. You can't implicitly convert a double to a decimal...
 
nero120 said:
Thanks. A couple of last questions:
Code:
decimal test = 3.5 + (2 * (1 / 12))

So (1/12) is of type int, so it multiplies that integer by 2 to get another integer, and adds on the double 3.5. This gives a double as the answer, and you then need to cast this to a decimal to whack it in 'test'.
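Both ways of making that line compile can be sketched; note they give different answers, because only the second avoids the inner integer division:

```csharp
using System;

// Either cast the final double result to decimal...
decimal viaCast = (decimal)(3.5 + (2 * (1 / 12)));
Console.WriteLine(viaCast); // 3.5 (the inner 1/12 was still int division)

// ...or keep the whole sum in decimals so 1/12 isn't truncated.
decimal allDecimal = 3.5m + (2 * (1m / 12));
Console.WriteLine(allDecimal); // 3.6666...
```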
 