Hex / Binary Encoder · 5 min read
Why Every Byte Is Two Hex Digits — and Other Base Conversion Shortcuts
A practical guide to the mental shortcuts engineers use for hex, binary, and decimal — why one byte is two hex digits, why 0xFF is 255, and how to convert in your head.
If you've ever stared at 0xDEADBEEF in a stack trace and wondered why hex shows up everywhere in computing, the short answer is that hex is a near-perfect compression of binary for human eyes. One hex digit is exactly four binary bits. One byte is exactly two hex digits. That clean alignment isn't a coincidence — it's the reason hex displaced octal as the lingua franca of low-level programming.
The Four-Bit Identity
Hex is base 16. Each digit represents one of sixteen values, 0 through F. Binary is base 2; each digit is one of two values, 0 or 1. Sixteen is 2^4, so one hex digit packs exactly four binary digits with no remainder:
0x0 = 0000    0x4 = 0100    0x8 = 1000    0xC = 1100
0x1 = 0001    0x5 = 0101    0x9 = 1001    0xD = 1101
0x2 = 0010    0x6 = 0110    0xA = 1010    0xE = 1110
0x3 = 0011    0x7 = 0111    0xB = 1011    0xF = 1111
Once you memorise this table — and most engineers do, eventually, by accident — converting between binary and hex is mechanical. 1010 1101 is 0xAD. No arithmetic needed.
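That mechanical, nibble-at-a-time process is easy to sketch in Python. The helper names here are illustrative, not from any library — each direction just walks the string in 4-bit groups:

```python
# Nibble-wise conversion, the same way you'd do it by eye.
# Function names are illustrative, not a standard API.

def bits_to_hex(bits: str) -> str:
    """Map each 4-bit group to one hex digit."""
    bits = bits.replace(" ", "")
    assert len(bits) % 4 == 0, "pad to a multiple of 4 bits first"
    return "".join(format(int(bits[i:i + 4], 2), "X")
                   for i in range(0, len(bits), 4))

def hex_to_bits(hx: str) -> str:
    """Expand each hex digit back into its 4-bit group."""
    return " ".join(format(int(digit, 16), "04b") for digit in hx)

print(bits_to_hex("1010 1101"))  # AD
print(hex_to_bits("AD"))         # 1010 1101
```

Note there is no multiplication or division anywhere in the hot path — each digit maps independently, which is exactly why the conversion is "mechanical".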
Why One Byte = Two Hex Digits
A byte is 8 bits. Eight divided by four (bits per hex digit) is two. So every byte is exactly two hex digits, and a hex string of length 2N represents N bytes with no ambiguity. That's why SHA-256 hashes are 64 hex characters (32 bytes), MD5 is 32 (16 bytes), MAC addresses are 12 (6 bytes), and IPv6 addresses are 32 (16 bytes). Whenever you see an even-length hex string in the wild, you can divide by two to get the byte count.
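The divide-by-two rule is easy to confirm with the standard library — `hexdigest()` lengths and `bytes.fromhex()` agree with the arithmetic above:

```python
import hashlib

digest = hashlib.sha256(b"hello").hexdigest()
print(len(digest))        # 64 hex characters
print(len(digest) // 2)   # 32 bytes

# The rule works in both directions: bytes.hex() doubles the
# length, bytes.fromhex() halves it.
assert len(bytes.fromhex(digest)) == 32
assert len(hashlib.md5(b"hello").hexdigest()) == 32  # 16 bytes
```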
0xFF Is 255 Because That's the Largest Byte
An 8-bit unsigned byte holds values from 0 to 255. In hex, that range is 0x00 to 0xFF. 0xFF = 15 × 16 + 15 = 255. This is why so many limits in computing land on 255 or 256: pixel colour channels (0–255), extended ASCII (0–255), the maximum IPv4 TTL (255), and so on. Whenever you see "max value 255" somewhere, the underlying constraint is almost always "one byte."
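The arithmetic is worth checking once in an interpreter, since the digit-by-digit expansion is the same trick you use for any hex number:

```python
# 0xFF expanded digit by digit: F is 15, and the place values
# are powers of 16 (16**1 and 16**0 here).
assert 0xFF == 15 * 16 + 15 == 255

# Equivalently: 8 bits hold 2**8 = 256 values, so the max is 2**8 - 1.
assert 0xFF == 2**8 - 1

# Unsigned 8-bit arithmetic wraps past 255 back to 0.
assert (255 + 1) % 256 == 0
```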
Decimal Conversion in Your Head
Most engineers don't mentally convert hex to decimal except for round numbers. The ones worth memorising:
0x10 = 16 (one hex digit overflowing into two)
0x80 = 128 (top bit of a byte set; the sign bit in two's complement)
0xFF = 255 (full byte)
0x100 = 256
0x400 = 1024 (one kilobyte)
0x1000 = 4096 (typical memory page size)
0xFFFF = 65535 (max unsigned 16-bit; also the largest port number)
0xFFFFFFFF = 4,294,967,295 (max unsigned 32-bit; one less than 2^32)
Once these are wired in, you can read a memory address or a hash prefix and instantly know the rough magnitude.
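The landmark values above can be verified in one loop — nothing library-specific, just `int(text, 16)` parsing each hex spelling:

```python
# Each pair is (hex spelling, the decimal value worth memorising).
landmarks = [
    ("0x10", 16), ("0x80", 128), ("0xFF", 255), ("0x100", 256),
    ("0x400", 1024), ("0x1000", 4096), ("0xFFFF", 65535),
    ("0xFFFFFFFF", 2**32 - 1),
]
for text, expected in landmarks:
    # int() with base 16 accepts the 0x prefix directly.
    assert int(text, 16) == expected, (text, expected)
print("all", len(landmarks), "landmarks verified")
```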
The Binary Shortcut for Powers of Two
Binary makes powers of two trivial: 2^N is a 1 followed by N zeros. 2^10 = 10000000000 in binary, which is 0x400 in hex, which is 1024. Engineers often round 2^10 to "1K" because it's within 2.4% of decimal 1000 — the entire SI-vs-IEC kilo/kibi debate exists because of this happy coincidence.
Binary to Decimal Without a Calculator
For an 8-bit number, write the place values above the bits:
128  64  32  16   8   4   2   1
  1   0   1   1   0   1   0   1   = 128 + 32 + 16 + 4 + 1 = 181
Add up the place values where the bit is 1. For larger numbers, group into nibbles (4 bits) and convert each to its hex digit, then convert the hex to decimal — usually faster than summing 32 place values.
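The mental method translates directly into code — walk the bits from the right, adding 2^position wherever the bit is 1 (the function name is just illustrative):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum the place values where the bit is 1 -- the mental method verbatim."""
    bits = bits.replace(" ", "")
    total = 0
    # enumerate(reversed(...)) gives position 0 to the rightmost bit,
    # so 2**position is exactly the place value written above it.
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position
    return total

print(binary_to_decimal("1011 0101"))  # 128 + 32 + 16 + 4 + 1 = 181
```

In practice you would just call `int("10110101", 2)`, but spelling the loop out shows there is nothing more to the trick than place-value addition.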
Why Octal Mostly Lost
Octal (base 8) was popular on early machines like the PDP-8, where word sizes were multiples of three bits. As the industry standardised on 8-bit bytes and 16/32/64-bit words, octal stopped aligning cleanly. Three octal digits represent nine bits, which doesn't evenly divide a byte. Octal survives in Unix file permissions (chmod 755) and a handful of legacy contexts, but hex is the universal default.
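The one place octal still aligns perfectly is Unix permissions: each octal digit is one 3-bit rwx group. A minimal sketch (the `rwx` helper is hypothetical, not a standard API):

```python
# Each octal digit of a Unix mode is one 3-bit rwx group:
# r = 4, w = 2, x = 1, so 7 = rwx and 5 = r-x.
mode = 0o755

def rwx(triplet: int) -> str:
    """Render one 3-bit permission group as rwx notation."""
    return "".join(ch if triplet & bit else "-"
                   for ch, bit in (("r", 4), ("w", 2), ("x", 1)))

owner = (mode >> 6) & 0o7
group = (mode >> 3) & 0o7
other = mode & 0o7
print(rwx(owner) + rwx(group) + rwx(other))  # rwxr-xr-x
```

This is why chmod arguments are octal and not hex: three 3-bit fields fit octal exactly, the same way bytes fit hex.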
The Practical Upshot
You don't need to convert hex to decimal in your head most of the time — you need to recognise byte boundaries, eyeball the magnitude, and know when something is a round power of two. Memorise the 16 nibble-to-hex mappings, learn the half-dozen reference points above, and the rest is muscle memory. After a few months of reading hex in stack traces and packet captures, your brain stops translating and starts reading it directly.