Binary code: what is it?
Learn what binary code means, what it looks like, and the essential role it plays in the execution of every computer program.
11/09/2023
The meaning of binary code
Binary code is the end product of all source code written by a programmer in a programming language. When a programmer writes code for a program, it must be translated into binary before the computer can execute it.
This transformation from source code to binary code is performed in a process called compilation.
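For example, the tiny C program below (essentially the same one whose bits appear later in this article) is source code. Compiling it with a C compiler such as GCC, for instance with the command `gcc hello.c -o hello`, produces a binary file that the computer can execute directly. The file name and command are just one common possibility; other compilers work in a similar way.

```c
/* hello.c - human-readable source code, before compilation */
#include <stdio.h>

int main(void) {
    printf("Hello World");
    return 0;
}
```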
Good examples of binary code are the files with specific extensions used by many applications, such as:
- .exe – extension containing the binary for running Windows applications.
- .ipa – extension containing the binary for running iOS applications.
- .dll – extension containing the binary of function libraries used by Windows applications.
Files with these extensions are ready to be executed by the computer. That is, they have already been compiled and are in a format that the computer can understand.
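As a rough sketch of how one of these files might be produced: on Windows, with the MinGW port of GCC (an assumed toolchain; the file name and function here are purely hypothetical), a C source file can be compiled into a .dll like this:

```c
/* mylib.c - a tiny, hypothetical library function */

/* Compile into a DLL with MinGW GCC (command assumes that toolchain):
 *     gcc -shared -o mylib.dll mylib.c
 * The resulting mylib.dll contains the compiled binary of this
 * function, ready to be loaded by Windows applications. */

__declspec(dllexport) int add(int a, int b) {
    return a + b;
}
```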
The structure of binary code
At first glance, binary code can be understood as a file of bits (a “bit” is the name given to each character of the code, which is formed by only two symbols: 0 (zero) and 1 (one)).
Below is an example of what binary code looks like. Each character, either 0 (zero) or 1 (one), represents a bit. (As it happens, these particular bits are the ASCII encoding of the text of a small C program.)
```
01101001 01101110 01110100 00100000
01101101 01100001 01101001 01101110
00101000 00101001 01111011 00001101
00001010 00100000 00100000 00100000
00100000 01110000 01110010 01101001
01101110 01110100 01100110 00101000
00100010 01001000 01100101 01101100
01101100 01101111 00100000 01010111
01101111 01110010 01101100 01100100
00100010 00101001 00111011 00001101
00001010 00100000 00100000 00100000
00100000 01110010 01100101 01110100
01110101 01110010 01101110 00100000
00110000 00111011 00001101 00001010
01111101
```
Binary code is formed by groupings of bits. Note in the listing above that the bits are organized into separate blocks of eight. Each of these blocks is called a “byte”; a byte is nothing more than a set of eight bits.
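To make this grouping concrete, here is a minimal sketch in C that decodes the first three bytes of the listing above (01101001 01101110 01110100) back into ASCII characters; the decoding logic is simply the standard way of assembling bits into bytes:

```c
#include <stdio.h>

int main(void) {
    /* The first three bytes of the listing above. */
    const char *bits = "01101001 01101110 01110100";

    unsigned char value = 0;
    int count = 0;

    for (const char *p = bits; *p != '\0'; p++) {
        if (*p == ' ') continue;            /* skip separators */
        value = (value << 1) | (*p - '0');  /* shift in the next bit */
        if (++count == 8) {                 /* a full byte = one character */
            putchar(value);
            value = 0;
            count = 0;
        }
    }
    putchar('\n'); /* prints: int */
    return 0;
}
```

Run it and it prints `int`, the first three characters of the C program encoded in the listing.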
Sounds simple, doesn’t it? It is just a cluster of bits, because that is how the computer interprets all the information sent to it.
Regardless of the programming language in which a program is written, it will always be converted to binary code before it is executed. This is because a computer only interprets 0 (zero) and 1 (one).
But that is not the whole story. There is a universe of knowledge behind the use of these two numeric characters that explains why computers deal only with this kind of information.
Why do computers only recognize zero and one?
We human beings can recognize and deal with the most diverse types of information and sensory input. Computers cannot.
A computer is nothing more than an integrated circuit structure containing thousands of tracks through which electrical pulses travel. It cannot handle anything but that. The only information truly stored in a computer is the PRESENCE or ABSENCE of an electrical pulse.
In fact, 0 (zero) and 1 (one) are, respectively, abstractions for the absence and presence of these pulses: when an electrical pulse passes through the circuit, it is read as a bit 1 (one); when no pulse is present, it is read as a bit 0 (zero).
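As a toy model of this idea (purely illustrative; real hardware does this with transistors and clock signals, not with arrays), imagine sampling a wire eight times and recording whether a pulse was present at each instant:

```c
#include <stdio.h>

int main(void) {
    /* Eight successive readings of a wire:
       1 = pulse present, 0 = pulse absent. */
    int pulses[8] = {0, 1, 0, 0, 1, 0, 0, 0};

    /* Accumulate the eight readings into a single byte. */
    unsigned char byte = 0;
    for (int i = 0; i < 8; i++) {
        byte = (byte << 1) | pulses[i];
    }

    printf("%d -> '%c'\n", byte, byte); /* prints: 72 -> 'H' */
    return 0;
}
```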
This is how the computer interprets electrical pulses as binary code. But a big question remains: how can the computer turn these bits and pulses, which have no apparent meaning on their own, into the diverse types of information, tasks, and technological services that carry clear meaning for us human beings?
David Santiago
Master in Systems and Computing. Graduated in Information Systems. Professor of Programming Languages, Algorithms, Data Structures, and Digital Game Development.