What is a bit, in terms of computers?

In this context, think of a bit as a digit. The word "bit" is literally a combination of "binary" and "digit".

A bit is the smallest unit of data used in a computer: either a 1 or a 0. In binary numbers, each digit is a bit. So it's not really a concept of data transfer; it's more a concept of encoding information. Letters and special characters can also be encoded into binary by establishing a standard, like ASCII.
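For example, here is a minimal Python sketch (the variable names are just for illustration) showing how a single letter becomes a short string of bits under ASCII:

    # The character 'A' is assigned the number 65 in ASCII,
    # and that number is stored as the eight bits 01000001.
    code = ord("A")              # 65, the ASCII code for 'A'
    bits = format(code, "08b")   # "01000001" -- eight binary digits, i.e. eight bits
    print(code, bits)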

When working with IP addresses, the bits represent addresses of computers. In other contexts, though, binary can represent words, special characters, even images and videos! It all depends on the standard (the rules for translation) that you use. Without a translation standard, binary information is meaningless.
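As a quick sketch of the IP-address case (the address here is only an example), an IPv4 address is really just 32 bits, written in groups of 8 to make it readable for humans:

    # Convert a dotted IPv4 address into its underlying 32 bits.
    address = "192.168.0.1"
    octets = [int(part) for part in address.split(".")]
    bits = ".".join(format(octet, "08b") for octet in octets)
    print(bits)   # 11000000.10101000.00000000.00000001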

ASCII stands for American Standard Code for Information Interchange. It's a standard, originally published by the American Standards Association (now ANSI), for encoding information in binary. Other standards also exist, like Unicode. Files have their own encoding standards, like .png for images or .mp4 for videos. All data on a computer is stored as binary, including the operating system and every file. At the fundamental level, all data manipulation and processing happens as manipulation of binary strings.
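A small sketch of that idea: ASCII and Unicode (via UTF-8) agree on plain English letters, but Unicode can also encode characters ASCII cannot, such as 'é'. The strings here are just examples:

    # Encode the same text under two standards and look at the raw bits.
    text = "Hi"
    print(text.encode("ascii"))                              # b'Hi' -> bytes 72, 105
    print([format(b, "08b") for b in text.encode("ascii")])  # ['01001000', '01101001']
    print("café".encode("utf-8"))                            # b'caf\xc3\xa9' -- 'é' needs two bytes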

These concepts are so fundamental to computer science that once they click for you, everything else will start to make sense, including many of the reasons why certain things are done the way they are. I feel like the last key concept you need for this all to make sense is what happens at the hardware level. Binary is simply a written representation of what exists in the computer's memory: trillions of tiny switches (transistors) that can each be either on or off, hence 1 or 0. These switches can be flipped, and their state read, using circuits and electric currents.
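You can mimic that switch-flipping in software with bitwise operators. This is only a rough analogy in Python, with made-up variable names, not how the hardware literally works:

    # Treat an integer as a row of switches: XOR flips one switch,
    # AND reads whether it is currently on.
    switches = 0b0000                 # four switches, all off
    switches ^= 0b0100                # flip the third switch on
    is_on = bool(switches & 0b0100)   # read its state
    print(format(switches, "04b"), is_on)   # 0100 True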

Ultimately, binary 1s and 0s are just symbols that let us communicate and understand the information stored in those sequences of tiny switches.