Meme transcription:
Panel 1: Bilbo Baggins ponders, “After all… why should I care about the difference between int and String?”
Panel 2: Bilbo Baggins is revealed to be an API developer. He continues, “JSON is always String, anyways…”
All of us fintech devs, however, know the true horrors. Make everything a string, lest ye end up in precision hell.
You are a fintech dev using floating point? And your advice is to encode things as strings?
…
This is why I got out of fintech.
(I am sorry, I know there are horrors and I am sure I am not familiar with your exact scenario.)
Edit: just for anyone who passes by: try to stick with integers in a currency’s smallest unit of division. (This is only one small bit of this problem, but the number of times I have seen currency values in floating point makes me psychotic.)
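For anyone who wants to see what that looks like, here is a minimal C# sketch of the minor-units approach (the variable names are just illustrative):

```csharp
using System;

// Store money as an integer count of the currency's smallest unit
// (cents for USD), never as a float/double.
long priceInCents = 1999;   // $19.99
long quantity = 3;

long totalInCents = priceInCents * quantity;   // exact: 5997

// Only convert to a decimal-looking form at the display boundary.
Console.WriteLine($"Total: ${totalInCents / 100}.{totalInCents % 100:D2}");
// Total: $59.97
```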
Until you get to multi-currency, which is why I don’t support using ints or longs, and strings are still the only way. There are currencies with no decimal places at all, and others with three or more digits of precision - and then you’re looking at doing scaling calculations each time. Strings are the safest way to make sure you’re representing exactly what you want when sending data over the wire or persisting it.
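To make the varying-precision point concrete, here is a rough sketch of how the minor-unit exponent differs per currency and how you might format an exact wire string without ever touching a float. The exponent table is abbreviated and the helper name is made up; real code would lean on the full ISO 4217 list:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

// Minor-unit exponents differ per currency (abbreviated, see ISO 4217).
var minorUnitDigits = new Dictionary<string, int>
{
    ["JPY"] = 0,   // no minor unit
    ["USD"] = 2,   // cents
    ["BHD"] = 3,   // fils, 1/1000 of a dinar
};

// Turn (minor units, digits) into an exact string for the wire,
// using decimal arithmetic only so nothing ever passes through a double.
static string ToWireAmount(long minorUnits, int digits)
{
    decimal divisor = 1m;
    for (int i = 0; i < digits; i++) divisor *= 10m;
    return (minorUnits / divisor).ToString("F" + digits, CultureInfo.InvariantCulture);
}

Console.WriteLine(ToWireAmount(1999, minorUnitDigits["USD"]));  // 19.99
Console.WriteLine(ToWireAmount(1999, minorUnitDigits["JPY"]));  // 1999
Console.WriteLine(ToWireAmount(1999, minorUnitDigits["BHD"]));  // 1.999
```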
And then the marketing department comes up with products that cost 1.5ct apiece.
Oh yeah, and some things like petrol are calculated to fractions of a cent as well
Just reminded me of an argument where I tried to explain to a coworker - a mathematician just starting out in software dev - that arithmetic with floating-point numbers is not always exact.
In a mathematician’s mind, the fact that an arithmetic operation can produce an inaccurate result is just incomprehensible.
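The classic demo, if anyone wants to try it on their mathematician (C# shown here, but any language with IEEE 754 doubles behaves the same):

```csharp
using System;

double a = 0.1, b = 0.2;

Console.WriteLine(a + b == 0.3);            // False
Console.WriteLine((a + b).ToString("G17")); // 0.30000000000000004

// decimal is base-10, so these particular values stay exact:
Console.WriteLine(0.1m + 0.2m == 0.3m);     // True
```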
Lovecraftian horror for mathematicians. Immediately goes insane.
The fun differences between the perfect theoretical world and the realistic one. Everyone thinks of computers as perfect - but it’s not until you’re asked to solve “How do you store decimals using only 0s and 1s?” that it starts to click. Not as easy. It’s why I’m hesitant to hire bootcampers into my roles. Bootcamps are great, and they get more people coding, but you don’t learn the theory behind the scenes - you don’t really know what the computer and operating system are doing. For 90% of the time it doesn’t matter, it’s abstracted away - but that last 10%, man, that can really fuck up an entire system.
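The “decimals with only 0s and 1s” problem fits in one line, for anyone who hasn’t run into it yet (C#, and the same applies to any binary floating-point type):

```csharp
using System;

// 0.1 has no finite binary expansion (it is 0.000110011... repeating),
// so what actually gets stored is the nearest representable double.
Console.WriteLine(0.1.ToString("G17"));   // 0.10000000000000001
```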
JSON’s numeric encoding is perfectly capable of representing arbitrary decimal precision. Strings are easier if you don’t want to fuck around with the parser, though.
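For the curious, here is roughly what that looks like with System.Text.Json in C# - the JSON number itself carries all the digits, and the target type you hand the parser decides whether they survive (just a sketch, the price value is made up):

```csharp
using System;
using System.Text.Json;

const string json = "{\"price\": 19.9999999999999999999}";

using var doc = JsonDocument.Parse(json);
JsonElement price = doc.RootElement.GetProperty("price");

// Same JSON token, two different readings of it:
Console.WriteLine(price.GetDouble());   // 20 (rounded to the nearest double)
Console.WriteLine(price.GetDecimal());  // 19.9999999999999999999 (exact)
```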
Correct, JSON can handle any precision, because the number is just dumped as text anyway, only not enclosed in "". However, as you mentioned, as soon as it comes through the parser it’ll put it into an underlying float value. In C# I create a high-precision attribute that takes the value and puts it directly into a decimal. In JS I’m sure there’s some way to do that, but that parser is way less extensible compared to C#. However, this all assumes you know the client will parse it correctly, overriding the default behavior. Safest is to just send it as a string, and then create your parsers to automatically convert to and from strings.
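For completeness, one way to wire up the “string on the wire, decimal in memory” part with System.Text.Json in C#. This is not necessarily the attribute approach described above, just a sketch of the same idea (the type and property names are made up):

```csharp
using System;
using System.Globalization;
using System.Text.Json;
using System.Text.Json.Serialization;

// Writes decimals as JSON strings and accepts either strings or raw numbers
// when reading, so precision never depends on the client's float handling.
public sealed class DecimalAsStringConverter : JsonConverter<decimal>
{
    public override decimal Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options) =>
        reader.TokenType == JsonTokenType.String
            ? decimal.Parse(reader.GetString()!, CultureInfo.InvariantCulture)
            : reader.GetDecimal();

    public override void Write(Utf8JsonWriter writer, decimal value, JsonSerializerOptions options) =>
        writer.WriteStringValue(value.ToString(CultureInfo.InvariantCulture));
}

public record Invoice(
    [property: JsonConverter(typeof(DecimalAsStringConverter))] decimal Amount);

public static class Demo
{
    public static void Main()
    {
        string json = JsonSerializer.Serialize(new Invoice(19.999m));
        Console.WriteLine(json);  // {"Amount":"19.999"}

        Invoice back = JsonSerializer.Deserialize<Invoice>(json)!;
        Console.WriteLine(back.Amount);  // 19.999
    }
}
```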