@cawhitworth
My reaction to floating point was the same until I helped refactor #qemu's #softfloat implementation into something you can actually follow. It's a little obscured now by the macro tricks we pull to share one implementation across 32- to 128-bit floats, but one day I'll hopefully get a cleaner #rust implementation working.
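To give a flavour of that macro trick, here's a minimal sketch in plain C, not QEMU's actual code (DEFINE_UNPACK, unpack32 and unpack64 are names made up for this example): one macro stamps out the same unpack body once per width, parameterised on the integer type and field sizes.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: instantiate one soft-float "unpack" body per
 * width. The real softfloat also covers packing, rounding, NaNs,
 * infinities and subnormals, which is where it gets hard to follow. */
#define DEFINE_UNPACK(name, uint_t, frac_bits, exp_bits)             \
static void name(uint_t v, int *sign, int *exp, uint_t *frac)        \
{                                                                    \
    *sign = (int)(v >> (frac_bits + exp_bits));                      \
    *exp  = (int)((v >> frac_bits) & (((uint_t)1 << exp_bits) - 1)); \
    *frac = v & (((uint_t)1 << frac_bits) - 1);                      \
}

DEFINE_UNPACK(unpack32, uint32_t, 23, 8)   /* IEEE-754 binary32 */
DEFINE_UNPACK(unpack64, uint64_t, 52, 11)  /* IEEE-754 binary64 */

int main(void)
{
    int s, e;
    uint32_t f;
    uint64_t d;
    unpack32(0x40490fdbu, &s, &e, &f);               /* pi as binary32 */
    printf("sign=%d exp=%d frac=0x%06x\n", s, e, (unsigned)f);
    unpack64(0x400921fb54442d18ull, &s, &e, &d);     /* pi as binary64 */
    printf("sign=%d exp=%d frac=0x%013llx\n", s, e, (unsigned long long)d);
    return 0;
}
```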
Hearsay has it that 64-bit #DoublePrecision #FloatingPoint on #GPU is #slow, but #SoftFloat using two 32-bit integers is much #slower still (I tested with #OpenCL).
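A rough idea of where that softfloat cost comes from, as an illustrative sketch in plain C rather than the actual OpenCL test code: with only 32-bit integers available, even one 32x32 -> 64-bit partial product decomposes into four 16x16 multiplies plus carry handling, and multiplying two 53-bit binary64 mantissas needs several such products before you even reach normalisation and rounding. The mul32x32 helper below is a made-up name for this sketch.

```c
#include <stdint.h>
#include <stdio.h>

/* Full 32x32 -> 64-bit product built from 16-bit halves, the kind of
 * primitive a binary64 softfloat mantissa multiply leans on when the
 * hardware only gives you 32-bit integer ops. */
static void mul32x32(uint32_t a, uint32_t b, uint32_t *hi, uint32_t *lo)
{
    uint32_t al = a & 0xffff, ah = a >> 16;
    uint32_t bl = b & 0xffff, bh = b >> 16;

    uint32_t p0 = al * bl;             /* bits  0..31 */
    uint32_t p1 = al * bh;             /* bits 16..47 */
    uint32_t p2 = ah * bl;             /* bits 16..47 */
    uint32_t p3 = ah * bh;             /* bits 32..63 */

    uint32_t mid = (p0 >> 16) + (p1 & 0xffff) + (p2 & 0xffff);
    *lo = (p0 & 0xffff) | (mid << 16);
    *hi = p3 + (p1 >> 16) + (p2 >> 16) + (mid >> 16);
}

int main(void)
{
    uint32_t hi, lo;
    mul32x32(0xdeadbeefu, 0xcafebabeu, &hi, &lo);
    printf("0x%08x%08x\n", (unsigned)hi, (unsigned)lo);  /* 0xb092ab7b88cf5b62 */
    return 0;
}
```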