Difference Between ASCII and Unicode

van Vlymen paws
2 min read · May 28, 2020


If you are going to have a technical interview, you will probably be asked a challenge question such as: implement an algorithm to determine if a string has all unique characters. What if you cannot use an additional data structure? (See Cracking the Coding Interview, 6th Edition, p. 192.)
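
To make the challenge concrete, here is a minimal Python sketch of the unique-characters check, assuming the input is an ASCII string. The 128-slot boolean list and the early exit for strings longer than 128 characters are only valid under that ASCII assumption, and `is_unique` is just an illustrative name:

```python
def is_unique(s: str) -> bool:
    """Return True if every character in s appears only once.

    Assumes s contains only standard ASCII characters (code points 0-127).
    """
    # A string with more than 128 characters must repeat at least one
    # of the 128 possible ASCII characters.
    if len(s) > 128:
        return False

    seen = [False] * 128  # one flag per possible ASCII character
    for ch in s:
        code = ord(ch)
        if seen[code]:
            return False  # second occurrence of this character
        seen[code] = True
    return True


print(is_unique("abcdef"))  # True
print(is_unique("hello"))   # False: 'l' repeats
```

Notice how the ASCII assumption fixes the size of the lookup table in advance; with Unicode input, no such small fixed bound exists.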

Your first question should be whether the string uses ASCII or Unicode. ASCII stands for American Standard Code for Information Interchange. It uses numbers to represent text: digits (1, 2, 3, etc.), letters (a, b, c, etc.) and symbols (!) are all characters. ASCII character codes were originally 7 bits long, giving 128 characters; the extended version uses a full 8-bit byte, giving 256 characters. Assuming ASCII keeps the character set small and the storage per character low. If the string may contain characters outside that set, you need Unicode, at the cost of more storage.
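
A quick Python snippet makes the character-to-number mapping and the 7-bit limit visible:

```python
# Each ASCII character maps to a small integer code point.
print(ord('a'))   # 97
print(ord('A'))   # 65
print(ord('!'))   # 33
print(chr(65))    # 'A'

# Standard ASCII fits in 7 bits: every code point is in 0-127.
print(max(ord(c) for c in "Hello, World!") < 128)  # True
```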

Unicode represents most written languages in the world, and every ASCII character has an equivalent in Unicode. The difference is scope: ASCII covers lowercase letters (a-z), uppercase letters (A-Z), digits (0-9) and symbols such as punctuation marks, while Unicode covers the letters of English, Arabic, Greek and many other languages, as well as mathematical symbols, historical scripts and emoji, a far wider range of characters than ASCII.
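
Since Python strings are Unicode, a few calls to `ord` illustrate both points: the first 128 Unicode code points are identical to ASCII, and Unicode extends far beyond them:

```python
# ASCII is a subset of Unicode: the first 128 code points match.
print(ord('A'))    # 65, same value in ASCII and Unicode

# Unicode reaches far beyond ASCII's range.
print(ord('π'))    # 960    (Greek small letter pi)
print(ord('م'))    # 1605   (Arabic letter meem)
print(ord('😀'))   # 128512 (emoji)

# A simple ASCII test: every code point must be below 128.
print(all(ord(c) < 128 for c in "café"))  # False: 'é' is 233
```

This is exactly why the interview question hinges on your first question: an ASCII guarantee bounds the character set at 128, while Unicode does not.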

