
byte + byte = int... why?


Question

Looking at this C# code:

byte x = 1;
byte y = 2;
byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'

The result of any math performed on byte (or short) types is implicitly cast back to an integer. The solution is to explicitly cast the result back to a byte:

byte z = (byte)(x + y); // this works

What I am wondering is why? Is it architectural? Philosophical?

We have:

int + int = int
long + long = long
float + float = float
double + double = double

So why not:

byte + byte = byte
short + short = short?

A bit of background: I am performing a long list of calculations on "small numbers" (i.e. < 8) and storing the intermediate results in a large array. Using a byte array (instead of an int array) is faster (because of cache hits). But the extensive byte-casts spread through the code make it that much more unreadable.
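A minimal sketch of the pattern described above (the variable names, values and array size are made up for illustration): because each sum is computed as an int, every store into the byte array needs a cast.

byte[] results = new byte[1024];
byte a = 3, b = 5, c = 7;              // "small numbers", all < 8
for (int i = 0; i < results.Length; i++)
{
    // a + b * c is computed as an int, so a cast is needed on every store
    results[i] = (byte)(a + b * c);
}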


Answer 1

The third line of your code snippet:

byte z = x + y;

actually means

byte z = (int) x + (int) y;

So there is no + operation on bytes; the bytes are first converted to int, and the result of adding two ints is a (32-bit) int.


This has got to be the most correct, concise answer. There is no operator to add two bytes, so instead of explaining why "adding two bytes" works or not (it never happened), this clearly shows why the result is an int: the only thing that happened is an addition of 2 ints. – RichardTheKiwi Apr 3, 2011 at 23:22

Answer 2

In terms of "why it happens at all" it's because there aren't any operators defined by C# for arithmetic with byte, sbyte, short or ushort, just as others have said. This answer is about why those operators aren't defined.

I believe it's basically for the sake of performance. Processors have native operations to do arithmetic with 32 bits very quickly. Doing the conversion back from the result to a byte automatically could be done, but would result in performance penalties in the case where you don't actually want that behaviour.

I think this is mentioned in one of the annotated C# standards. Looking...

EDIT: Annoyingly, I've now looked through the annotated ECMA C# 2 spec, the annotated MS C# 3 spec and the annotated CLI spec, and none of them mention this as far as I can see. I'm sure I've seen the reason given above, but I'm blowed if I know where. Apologies, reference fans :(

Answer 3

I thought I had seen this somewhere before. From this article on The Old New Thing:

Suppose we lived in a fantasy world where operations on 'byte' resulted in 'byte'.

byte b = 32;
byte c = 240;
int i = b + c; // what is i?

In this fantasy world, the value of i would be 16! Why? Because the two operands to the + operator are both bytes, so the sum "b+c" is computed as a byte, which results in 16 due to integer overflow. (And, as I noted earlier, integer overflow is the new security attack vector.)
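For comparison with real C# (an illustration added here, not from the quoted article): the hypothetical byte result of 16 is exactly what an explicit cast produces, since 32 + 240 = 272 and 272 mod 256 = 16 under the default unchecked conversion.

byte b = 32;
byte c = 240;
int i = b + c;              // computed as int: 272
byte t = (byte)(b + c);     // default unchecked conversion wraps: 272 % 256 == 16
Console.WriteLine(i);       // 272
Console.WriteLine(t);       // 16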

EDIT: Raymond is defending, essentially, the approach C and C++ took originally. In the comments, he defends the fact that C# takes the same approach, on the grounds of language backward compatibility.


Answer 4

This is, for the most part, my answer on this topic, first submitted to a similar question here.

All operations on integral types smaller than Int32 are widened to 32 bits before the calculation by default. The reason the result is Int32 is simply that it is left as it is after the calculation. If you check the MSIL arithmetic opcodes, the only integral numeric types they operate on are Int32 and Int64. It's "by design".
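As a quick way to see this (an illustration, not part of the original answer), let the compiler infer the type of the sum and inspect it at run time:

byte x = 1;
byte y = 2;
var z = x + y;                    // z is inferred as int, not byte
Console.WriteLine(z.GetType());   // prints System.Int32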

If you want the result back in Int16 format, it makes no difference whether you perform the cast in code or the compiler (hypothetically) emits the conversion "under the hood".

For example, to do Int16 arithmetic:

short a = 2, b = 3;

short c = (short) (a + b);

The two numbers would expand to 32 bits, get added, then truncated back to 16 bits, which is how MS intended it to be.
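To make the "truncated back to 16 bits" step concrete (an added illustration, not part of the original answer): the default unchecked conversion simply wraps, while a checked cast throws when the 32-bit result does not fit into Int16.

short s1 = 30000, s2 = 30000;
short wrapped = (short)(s1 + s2);             // s1 + s2 is an int (60000); the unchecked cast wraps to -5536
Console.WriteLine(wrapped);                   // -5536
// short check = checked((short)(s1 + s2));   // would throw OverflowException instead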

The advantage of using short (or byte) is primarily storage in cases where you have massive amounts of data (graphical data, streaming, etc.)
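A rough sketch of that storage advantage (the array length here is made up for illustration): a byte element takes 1 byte versus 4 for an int, so a large byte[] is about a quarter the size of the equivalent int[] and packs four times as many elements per cache line.

byte[] narrow = new byte[1_000_000];   // element data: ~1 MB
int[] wide = new int[1_000_000];       // element data: ~4 MB
Console.WriteLine(sizeof(byte));       // 1
Console.WriteLine(sizeof(int));        // 4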


Source: https://www.cnblogs.com/chucklu/p/16276298.html