I’ve wondered why programming languages don’t include accurate fractions as part of their standard utils. I don’t mind calling dc, but I wish I didn’t need to write a bash script to pipe the output of dc into my program.
Because at the end of the day everything gets simplified to a 1 or a 0. You could store a fraction as an “object” but at some point it needs to be turned into a number to work with. That’s where floating points come into play.
There’s already a pair of objects we can use to store a fraction exactly: the ratio of two integers.
Irrational numbers are where floating points come into play.
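To make that concrete, Python’s built-in fractions module stores exactly that pair of integers, so the arithmetic stays exact where floats drift:

```python
from fractions import Fraction

# A fraction stored as a ratio of two integers stays exact:
print(Fraction(1, 10) + Fraction(2, 10))   # 3/10

# The same sum with floats picks up rounding error:
print(0.1 + 0.2)                           # 0.30000000000000004
```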
Many do. Matlab, Julia, and Smalltalk are the ones I know of.
Scheme and many others too. And there are plenty of libraries for C and other languages.
It’s called bignum, or arbitrary-precision arithmetic.
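For instance, taking Python as an example: its exact rationals sit on top of arbitrary-precision integers, so the numerator and denominator can grow as large as they need to.

```python
from fractions import Fraction

# Python's ints are arbitrary precision, so an exact rational's
# numerator and denominator never overflow:
x = Fraction(1, 3) ** 50
print(x.denominator == 3 ** 50)   # True -- a 24-digit integer
```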
A lot of work has gone into making floating-point numbers efficient, and they cover 99% of use cases. In the rare case where you really need perfect fractional accuracy, it’s not that difficult to implement one as a pair of integers.
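Something like this minimal Python sketch, say (the Frac name and layout are just illustrative, with no reduction or error handling):

```python
from dataclasses import dataclass

@dataclass
class Frac:
    # A bare-bones "pair of integers" fraction.
    num: int
    den: int

    def __add__(self, other):
        # a/b + c/d = (a*d + c*b) / (b*d)
        return Frac(self.num * other.den + other.num * self.den,
                    self.den * other.den)

    def __mul__(self, other):
        # a/b * c/d = (a*c) / (b*d)
        return Frac(self.num * other.num, self.den * other.den)

print(Frac(1, 10) + Frac(2, 10))   # Frac(num=30, den=100), i.e. exactly 3/10
```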
99.000004%
Performance penalty, I would imagine. You would have to do many more steps at the processor level to calculate with fractions than with floats. The languages more suited to math do have them, as someone else mentioned, but the others probably can’t justify the extra computational expense for the little benefit it would have. Also, I’d bet there are already open-source libraries for all the popular languages if you really need a fraction.
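If you want a feel for that overhead, here’s a rough, unscientific timing sketch using Python’s built-in fractions module (exact numbers will vary by machine):

```python
import timeit
from fractions import Fraction

# Sum the same values as floats and as exact fractions.
floats = [i / 7 for i in range(1, 1000)]
fracs = [Fraction(i, 7) for i in range(1, 1000)]

print(timeit.timeit(lambda: sum(floats), number=100))
print(timeit.timeit(lambda: sum(fracs), number=100))   # typically far slower
```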
It would be pretty easy to make a fraction class if you really wanted to. But I doubt it would make much difference to the precision of calculations, since the result would still be limited to a float value (edit: I guess I’m probably wrong on that, but reducing a fraction would be less trivial, I think?).
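For what it’s worth, reducing comes down to dividing both integers by their greatest common divisor, e.g. a small standalone Python sketch:

```python
from math import gcd

def reduce(num: int, den: int) -> tuple[int, int]:
    # Divide numerator and denominator by their greatest common divisor.
    g = gcd(num, den)
    return num // g, den // g

print(reduce(30, 100))   # (3, 10)
```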