Dear All,

I'm trying to convert a uint64 to a float64 between 0 and 1. The following works for int but not for uint64.

var x = 100
var y = float64(x) / float64(high(int))

How do I adapt the above to work for uint64?

2017-12-03 16:29:34

I don't know why "high(uint64)" doesn't work, but you could try the literal "18446744073709551615.0'f64" instead of "float64(high(uint64))".

EDIT: Please note that this can actually yield exactly 1.0, due to (unavoidable) rounding when converting uint64 to float64: both the literal above and uint64 values near the maximum round up to 2^64 as float64, so the quotient hits 1.0 at the top of the range.
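
A minimal sketch of that suggestion (x and y are illustrative names):

var x: uint64 = 100
# 2^64 - 1 written as a float literal; note that it rounds up to 2^64 in float64
var y = float64(x) / 18446744073709551615.0'f64
echo y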

2017-12-03 17:01:30

@monster I think calculating high(uint64) based on high(int64) and low(int64) is far nicer:

let ihi = high(int64)
let ilo = low(int64)
echo ihi
echo ilo
# spans the full 64-bit range, i.e. 2^64 - 1 = high(uint64):
echo cast[uint64](ihi) - cast[uint64](ilo)

let y1 = cast[uint64](high(int64)) - cast[uint64](low(int64))
let y2 = 2'u64 * cast[uint64](high(int64)) + 1'u64
let yf = float64(y2)
echo y1
echo y2          # the same value as y1
echo y2 + 1'u64  # wraps around to 0, confirming y2 IS the max
echo yf
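
Tying this back to the original question, the division is then (x is an illustrative value):

let x = 100'u64
echo float64(x) / yf   # a float64 between 0 and 1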

You can implement high and low by yourself:

type sizedVariant[T: SomeUnsignedInt] =
  (when T is uint:   int
   elif T is uint8:  int8
   elif T is uint16: int16
   elif T is uint32: int32
   elif T is uint64: int64
   else: BiggestInt)

proc low[T: SomeUnsignedInt](t: typedesc[T]): T = 0
proc high[T: SomeUnsignedInt](t: typedesc[T]): T =
  type S = sizedVariant[T]
  # the unsigned max is signed max minus signed min, computed with wraparound
  cast[T](high(S)) - cast[T](low(S))

echo high(uint8),  " ", high(int8)
echo high(uint16), " ", high(int16)
echo high(uint32), " ", high(int32)
echo high(uint64), " ", high(int64)

2017-12-03 18:44:46

That's a very bad idea, I think.

Dividing by high(uint64) will make you run into catastrophic cancellation issues all over the place, because a float64 has only 53 bits of precision (about 2^-52 relative, one ULP near 1.0) and you're dividing by 2^64 - 1.

If you want to scale a discrete value into [0, 1], depending on your use case you should do one of the following (a sketch of the first two options follows the list):

  • scale by the min and max (if used in linear applications)
  • alternatively use the logistic sigmoid function 1/(1+exp(-x)) (if used in non-linear applications)
  • or alternatively, center on the mean and make sure the standard deviation is 1 (if used in probabilistic applications and you can assume a normal distribution).
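
A minimal sketch of the first two options (the proc names minMaxScale and sigmoid are illustrative, not from this thread):

import math

proc minMaxScale(x, lo, hi: float64): float64 =
  # linearly scale x from [lo, hi] into [0, 1]
  (x - lo) / (hi - lo)

proc sigmoid(x: float64): float64 =
  # logistic sigmoid: maps any real number into (0, 1)
  1.0 / (1.0 + exp(-x))

echo minMaxScale(50.0, 0.0, 100.0)   # 0.5
echo sigmoid(0.0)                    # 0.5
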
2017-12-04 08:38:28

Hi @mratsim

I wasn't going to divide as is, just use an acceptance/rejection approach. A nicer approach is here.
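
The link isn't preserved above; for reference, one widely used recipe (a sketch, not necessarily the linked one) sidesteps the rounding problem by keeping only the top 53 bits of the uint64, which fit a float64 mantissa exactly:

proc toUnitFloat(x: uint64): float64 =
  # drop the low 11 bits, then scale by 2^-53; exact, result is in [0, 1)
  float64(x shr 11) * (1.0 / 9007199254740992.0)

echo toUnitFloat(100'u64)
echo toUnitFloat(18446744073709551615'u64)  # just below 1.0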

2017-12-04 15:02:22