float is consistent across 32/64 bit platforms
It is possible to get deterministic results for floating-point calculations across multiple computers, provided you use an executable built with the same compiler, run on machines with the same architecture, and perform some platform-specific tricks.
- I want to use UE4, and AFAIK, it expects C++ source code. So D/Go/... won't cut it.
- I don't think I need "unlimited precision", but I had also forgotten about bignum; it's already available and should produce the same results anywhere, which is my main requirement. Maybe I really should start with that, until it's proven to cause performance problems. OTOH, I do prefer fixed-size (allocation-free) data types whenever possible.
- int vs uint, or int64 vs uint64, is not really a concern for me. I usually code in Java, where we don't even have a uint/uint64. On the one hand, if a value is unsigned and the valid lower bound is 0, then you never have to check the lower bound. OTOH, I remember from recently reading "The C++ Programming Language" that one should use signed values even when negative values are not valid, because bad data is likely to come out negative, so with an int you have a better chance of catching bugs. And catching bugs has higher priority than removing the lower-bound check.
- Whenever I define an object field, I have to decide how big it will be; I see this as a critical design decision. So it's no big deal using int32 or int64 instead of int. Specifying the type of all the literal constants is rather more tiresome, but that is just an "annoyance", not really a "problem". The "problem" is that I don't trust myself to always remember to be explicit. I don't even have someone to do code review, so some compiler/tool support would have been welcome. I guess I can just start by checking my code with a regex, as suggested, and eventually write a macro instead, if the regex cannot cover all corner cases.
If that's the case, I think it's better for you to make a type (maybe a distinct type?) that, when working with an int literal, converts it to uint64 (or int32/int64).
Since it's a different type, the compiler will tell you that, for example, the proc + won't work unless you define it first. So your concern about being "careless" will be checked by the compiler.
Isn't that true only for floating-point calculations?