To represent the biggest number in the universe without precision loss
Let me ask you a few questions.
What data type will you choose to store and process 100,000 digits of pi?
Ummm… float, decimal… BigNumber… not sure!!
What output do you expect when you subtract 0.1 from 0.3?
0.2, but I’m not sure why it actually comes out as 0.19999999999999998.
> var x = 0.3 - 0.1
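The snippet above hints at the answer. A quick check in any JavaScript console (a sketch; any IEEE 754 environment behaves the same way) shows the rounding error and the usual tolerance-based workaround:

```javascript
// Neither 0.1 nor 0.3 has an exact binary representation, so IEEE 754
// doubles carry a tiny rounding error through the subtraction.
var x = 0.3 - 0.1;
console.log(x);          // 0.19999999999999998, not 0.2
console.log(x === 0.2);  // false

// The usual workaround: compare within a small tolerance.
console.log(Math.abs(x - 0.2) < Number.EPSILON); // true
```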
What encoding will you use to represent/store the string of Emoji?
How will you decide whether to use the float or the decimal data type?
9007199254740992 doesn’t seem very big… wait!! it doesn’t have a decimal point. Shouldn’t I use the long data type?
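That hesitation is justified: 9007199254740992 is 2^53, exactly the point where 64-bit doubles stop being able to represent every integer. A quick JavaScript check:

```javascript
// 2 ** 53 - 1 is the largest integer a double can represent "safely",
// i.e. without colliding with a neighbouring integer.
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991

// Past that point, distinct integer literals round to the same double:
console.log(9007199254740992 === 9007199254740993);  // true
console.log(Number.isSafeInteger(9007199254740992)); // false
```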
What data type or character encoding will you choose if you’re very concerned about storage?
Ummm… UTF-32… no wait!! it depends.
Perhaps the next section can help you answer these questions better.
The BigBit standard treats 1 byte as a bit, so you can also call it the ByteBit format. The standard defines 3 numeric data types and 1 character encoding.
Numeric Data Types
The BigBit standard defines 3 numeric data types: LB, HB, and EHB.
1. Linked Bytes (LB) Format: It can represent any positive non-fractional number in the universe.
2. Header Byte (HB) Format: It can represent any number between -13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084095 and its positive counterpart.
3. Extended Header Byte (EHB) Format: It can represent any number in the universe.
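To make the Linked Bytes idea concrete, here is a minimal sketch of a continuation-bit encoding in the same spirit. This is a generic base-128 varint written for illustration, not the official LB byte layout from the BigBit specification:

```javascript
// Illustrative only: 7 value bits per byte, and the high bit of each byte
// says whether another byte follows. BigInt sidesteps the 2 ** 53 precision
// ceiling, so arbitrarily large positive integers round-trip losslessly.
function encodeLinked(n) {            // n: non-negative BigInt
  const bytes = [];
  do {
    let b = Number(n & 0x7fn);        // low 7 bits of the value
    n >>= 7n;
    if (n > 0n) b |= 0x80;            // continuation bit: more bytes follow
    bytes.push(b);
  } while (n > 0n);
  return bytes;
}

function decodeLinked(bytes) {
  let n = 0n;
  for (let i = bytes.length - 1; i >= 0; i--) {
    n = (n << 7n) | BigInt(bytes[i] & 0x7f);
  }
  return n;
}

const big = 10n ** 100n + 7n;                         // a 101-digit number
console.log(decodeLinked(encodeLinked(big)) === big); // true — lossless round trip
```

Because the continuation bit removes any fixed-width limit, the same scheme scales from 1 byte for small values up to as many bytes as the number needs.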
These formats have some notable properties:
- You can store any number in the universe.
- Any stored number can be retrieved without precision loss. Check for 9007199254740992.
- Numbers in these formats are comparatively smaller in memory than the same numbers represented in the IEEE 754 format. Check for 128.
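The “Check for 128” hint is about size: an IEEE 754 double always occupies 8 bytes regardless of value, while a length-proportional encoding only spends the bytes a number actually needs. A rough illustration, using a generic 7-bits-per-byte scheme as a stand-in for the BigBit formats:

```javascript
// A double is a fixed 8 bytes, no matter how small the value is.
console.log(Float64Array.BYTES_PER_ELEMENT); // 8

// A continuation-bit encoding grows with the value instead:
// 7 payload bits per byte.
function varintLength(n) {          // n: positive safe integer
  let bytes = 0;
  do { bytes++; n = Math.floor(n / 128); } while (n > 0);
  return bytes;
}

console.log(varintLength(128));              // 2 bytes instead of 8
console.log(varintLength(9007199254740992)); // 8 bytes — around the break-even point
```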
The BigBit standard uses the LB format, described above, for character encoding. There is a single encoding for all Unicode code points.
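The same idea can be sketched for text: treat each Unicode code point as a number and encode it with a continuation-bit scheme, so one encoding covers everything from ASCII to emoji. Again, this is a generic varint illustration, not the exact BigBit character encoding:

```javascript
// Illustrative: encode one code point with 7 value bits per byte.
// ASCII stays at 1 byte; even the highest emoji code points fit in 3.
function encodeCodePoint(cp) {      // cp: non-negative integer code point
  const bytes = [];
  do {
    let b = cp & 0x7f;
    cp >>>= 7;
    if (cp > 0) b |= 0x80;          // continuation bit
    bytes.push(b);
  } while (cp > 0);
  return bytes;
}

console.log(encodeCodePoint('A'.codePointAt(0)).length);  // 1 (U+0041)
console.log(encodeCodePoint('€'.codePointAt(0)).length);  // 2 (U+20AC)
console.log(encodeCodePoint('😀'.codePointAt(0)).length); // 3 (U+1F600)
```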
If you have any questions or feedback regarding the BigBit standard, please comment.
Feedback is important for understanding how I can improve and bring more useful material. Please clap for this article, comment here, share it with your friends, or give a star to the GitHub repositories mentioned above.