Here’s how computers know what to do with 1s and 0s:
A modern computer is an incredibly complicated device that actually works on a few simple principles.
Data is stored in binary signals, which are usually either the presence or absence of an electrical current.
Those signals are then put through countless series of logic gates.
That process is how computers work.
So if you want to learn all about how a computer knows how to handle 1s and 0s exactly, then this article is for you.
Let’s get started!
What Are the 1s and 0s in Computers? (3 Concepts)
To know how a computer manages 1s and 0s, we should probably start a little simpler.
What are the 1s and 0s?
You can probably understand what it means for something to be binary, but how does a computer even work with 1s and 0s?
How does it store or read that information in the first place?
We have to cover that before the mechanics of processing will really make any sense.
Let me start by saying that magnetic storage and computation are a lot less common than they used to be.
You can still find magnetic hard drives that use these concepts, but they’re on their way out.
Even so, magnetic computation is usually a lot easier to understand.
By going through these concepts first, we can build a bit of a foundation of understanding.
Later, when I take you through modern systems, you’ll understand the binary aspects; it will just be applied through different technological mechanisms.
With that in mind, let’s dive in.
#1 How Magnetic Storage Works
Hopefully, you know that magnets have two poles: north and south.
If you get two magnets close to each other, similar poles will repel while opposite poles attract.
So, if you try to push the north poles of two magnets together, a force will build up, and one of the magnets will flip.
Magnetic 1s and 0s exploit this relationship.
I have to oversimplify this to fit it in such a short explanation, but here’s the gist of how it works.
Data storage is a series of magnets that are arranged in order.
Either the north side will face out (representing a 1), or the south side will face out (representing a 0).
Technically, you could swap that and make south 1 and north 0.
It doesn’t really matter as long as you’re consistent.
So, in order to store data, you just put a bunch of magnets in a row.
The arrangements of 1s and 0s can represent whatever you want them to.
That’s how binary information is stored in a magnetic system. Binary itself runs much deeper than this, but that foundation is all we need here.
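To see how a row of 1s and 0s can "represent whatever you want," here is a small Python sketch that turns text into bits and back. It uses standard 8-bit ASCII codes; the function names are made up for this example:

```python
# Any data can become a string of 1s and 0s once you agree on an
# encoding. Here, each character maps to its 8-bit ASCII code.
def text_to_bits(text):
    return "".join(format(ord(ch), "08b") for ch in text)

def bits_to_text(bits):
    # Cut the stream into 8-bit chunks and map each back to a character.
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int(chunk, 2)) for chunk in chunks)

bits = text_to_bits("Hi")  # 'H' is 72 -> 01001000, 'i' is 105 -> 01101001
```

The same stream of bits could just as easily encode a number, a pixel color, or an instruction; the meaning lives entirely in the convention, not in the magnets.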
#2 Read and Write Heads
We’re only halfway through the story of magnetic storage.
It makes sense that magnets can represent 1s and 0s, but how do you actually store or read the information on a magnetic drive?
That comes down to the read and write head.
Physics nerds among you already know that a magnetic field can induce a current in a conductor.
If you’re not familiar with that, you can take my word for it, or you can take a detour to understand more about magnetic induction.
Because magnets can induce a current (and vice versa), magnetic storage uses an electric read-and-write head.
Basically, this is a conductor loop that passes over each magnet in the storage unit (for modern drives, these components are nanometers in size).
So, if the read head passes over a north-facing magnet (1), then that induces a current in the read head, and the system can recognize that current as a 1.
If the head passes over a south-facing magnet (0), it induces the current in the opposite direction, so the system can recognize a 0.
But, how do you set the magnets to reflect a 1 or a 0?
For that, there is a write head (which is combined with the read head in modern systems).
The write head gets close to one of the magnets in the system.
It will then push a current through in order to induce a magnetic field.
The direction of the current determines if the magnetic field is facing north or south.
Using this, the write head can flip the magnets in the storage device as needed, arranging them in the appropriate configuration of 1s and 0s according to the computer’s instructions.
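As a toy illustration of those read and write heads, here is a short Python sketch. The pole labels and function names are invented for this example; a real drive works with analog currents, not lists of letters:

```python
# Toy model of a magnetic platter: each position is a magnet with either
# its north ("N") or south ("S") pole facing out.

def write_head(bits):
    # A 1 drives the current one way (flipping the magnet north-out);
    # a 0 drives it the other way (flipping the magnet south-out).
    return ["N" if bit == 1 else "S" for bit in bits]

def read_head(magnets):
    # A north-facing magnet induces a current in one direction (read as 1);
    # a south-facing magnet induces it in the other direction (read as 0).
    return [1 if pole == "N" else 0 for pole in magnets]

platter = write_head([1, 0, 0, 1])  # ["N", "S", "S", "N"]
```

Running `read_head(platter)` recovers the original bits, which is the whole point of sticking to one consistent convention.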
#3 What About the Solid-State Storage in Computers?
We’re already pretty deep, and there are some big concepts remaining.
Take a breath if you need to.
Grab some water.
When you get back into it, we’re going to take the concept of magnetic storage and see what changes with modern, solid-state systems.
Essentially, solid-state devices replace magnets with electronic circuits.
Instead of storing a north or south magnetic field, the device is storing charge inside of the circuit.
This is made possible by the amazing technology of transistors.
The storage transistors are attached to different kinds of transistors that then check for a current.
If the checking transistor (read/write transistor) sees a current, that’s a 1.
If it doesn’t see a current, that’s a 0.
In essence, the design philosophy is exactly the same as a magnetic drive.
Sufficiently complicated strings of 1s and 0s can represent whatever you want.
How Are 1s and 0s Stored?
That still doesn’t explain how the 1s and 0s are actually stored.
That mechanism is made possible by something called a floating gate transistor.
This is a different kind of transistor, and it’s special.
You can run a current into one of these transistors, and it will build up a charge.
Then, you can close the gate on the transistor (cutting off the current), and the charge will remain.
So, solid-state drives replace magnets with transistors that can store a charge indefinitely.
When the gate is opened, the charge on that transistor can induce a current in the read/write system (which is made of different kinds of transistors).
So, if the storage transistor has a charge on it, when it is checked by the system, the presence of that charge will create a current, and that is read as a 1.
If the transistor doesn’t have a charge stored up, then it won’t create a current, and that is read as a 0.
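That floating-gate behavior can be sketched in a few lines of Python. The class and method names are invented for illustration; a real cell is analog circuitry, not an object:

```python
# Toy model of a floating-gate transistor used as a storage cell:
# charge trapped behind the gate persists until deliberately changed.
class FloatingGateCell:
    def __init__(self):
        self.charged = False  # a fresh cell holds no charge (reads as 0)

    def write(self, bit):
        # Writing a 1 traps charge on the floating gate; writing a 0 drains it.
        self.charged = (bit == 1)

    def read(self):
        # When checked, a stored charge induces a current (read as 1);
        # no charge means no current (read as 0).
        return 1 if self.charged else 0
```

Notice that `read` never changes the cell: the charge stays put until the next `write`, which is what makes the storage persistent.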
How Does the Computer Process 1s and 0s? (3 Topics)
You’re still with me?
That’s good, because you’ve covered the basics of how computers store and read information.
It’s all in 1s and 0s, but you now know the physical mechanics behind it all.
The next task is still challenging and deep.
Now, we have to make sense of how a computer actually uses the 1s and 0s.
It’s great that we can string a bunch of them together, but the things a computer does are really complicated.
How does it sort all of this information, much less compute something useful?
To understand that, we have to cover three topics.
First, we have to talk more about transistors and how they make everything possible.
Then, we’ll get into logic gates and how they actually control the computer.
Finally, we’ll see what makes the whole system programmable.
#1 Transistors
We’re circling back to transistors.
As you know, solid-state storage uses them to hold signals, but that idea was actually borrowed from computer memory and central processing.
The truth is that transistors revolutionized these devices long before they were used to store the pictures you take on your smartphone.
You don’t really need to know how a transistor works to get this idea.
Instead, you need to know that a transistor can be used as a gatekeeper.
You can build transistors that will block a current or let it through depending on conditions (which will be explained in the next section).
You can also make transistors very, very small.
Modern processing transistors are nanometers in size.
So, you can put a ton of transistors into a circuit, each blocking or passing on currents as prescribed, to make an incredibly complex computer system.
One more thing about transistors.
They’re also adjustable, so you can change how some of the transistors in the system work without having to physically remove or insert them.
This is the key to what makes a computer programmable.
#2 Logic Gates
I said that transistors can be used as gatekeepers, but what really matters is the application of logic gates.
Now, logic gates are physical things in your computer, but we’re actually going to focus on them more as an abstract concept.
That sounds weird.
I want to focus on the theory behind logic gates before getting into the mechanics of how they physically work.
The gist is that you can arrange a circuit such that the path of current through it will change depending on conditions.
To keep this simple: you set up conditions with the circuit, and whether or not the signal passes through that point creates a new 1 or 0. That new 1 or 0 carries a different meaning from what came before it.
This is the transformational process that we call “computing.”
And, it’s the logic gates that determine the conditions.
As an example, you can have an “and” gate.
That’s a circuit with two inputs and one output.
By default, the gate is closed, blocking current from flowing to the output.
The only way to open the gate is to apply a current to both inputs at the same time.
Thinking back to how 1s and 0s work, this gate would require two 1s in order to open.
You need a 1 at each input point.
If you meet that condition, the gate opens, and the current flows through.
So, the next part of the computer circuit will know that you had two 1s together if it gets any signal from this “and” gate.
That’s a very specific example, and there are a lot of different types of gates.
The core gates cover ideas known as AND, OR, XOR, NOT, NAND, NOR, and XNOR.
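For the curious, all seven of those core gates can be written down as tiny functions on bits. This Python sketch uses bitwise operators as a stand-in for the transistor circuitry:

```python
# The seven core logic gates, expressed as functions on bits (0 or 1).
def AND(a, b):  return a & b           # 1 only if both inputs are 1
def OR(a, b):   return a | b           # 1 if at least one input is 1
def XOR(a, b):  return a ^ b           # 1 if exactly one input is 1
def NOT(a):     return 1 - a           # flips the bit
def NAND(a, b): return NOT(AND(a, b))  # opposite of AND
def NOR(a, b):  return NOT(OR(a, b))   # opposite of OR
def XNOR(a, b): return NOT(XOR(a, b))  # 1 if the inputs match
```

Note how NAND, NOR, and XNOR are just the first three gates followed by a NOT; gates compose.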
I’m breezing through this part because you don’t actually need to know all of the logic in a computer to understand the premise.
What you do need to know is that you can arrange as many gates as you want in any configuration, and this is really where the “thinking” of a computer occurs.
Ultimately, each gate is giving a 1 or a 0, but depending on how many gates came before it, that 1 or 0 can actually have a lot of information packed into it.
If you arrange enough gates together, you can get something as complicated and powerful as a computer.
Once more, this relies on the impressive technology that is transistors.
Because they are so small, you can put billions of them together, making a machine that can handle vastly complex sets of instructions.
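To make "enough gates add up to real computation" concrete, here is the classic demonstration: a one-bit adder built from nothing but AND, OR, and XOR gates, then chained into a multi-bit adder. This is a sketch of the standard ripple-carry design, not the layout of any particular chip:

```python
# Basic gates, written as functions on bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    # Two XORs compute the sum bit; AND and OR compute the carry.
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_bits(a_bits, b_bits):
    # Ripple-carry adder: chain full adders, least significant bit first,
    # feeding each carry into the next stage.
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]
```

`add_bits` works least-significant bit first, so `[1, 0, 1]` is the number 5 and `[1, 1, 0]` is 3; the final extra bit is the carry out of the top position. Five gates per bit is all it takes, and real CPUs are, at heart, enormous arrangements of exactly this kind of circuit.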
#3 The Final Piece
Do you remember when I said that you can change how a transistor responds to things without having to physically add it or remove it from the computer?
That’s called being programmable.
This is conceptually deep and mechanically simple, so let’s focus on the easier part.
A programmable transistor has a gate inside of it.
That gate can be flipped to one side or the other, depending on the current that runs through it.
So, a computer can deliberately flip the gates in certain transistors.
That allows the installed transistors to actually change from one type of logic gate to another.
Lots of transistors in a computer have this feature, which lets you change the internal logic of the computer on the fly.
It’s the final component of how programming works in a computer.
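As a toy picture of that idea, here is a Python sketch of a "configurable gate" whose behavior depends on a stored control bit. Real programmability involves far more machinery (instruction decoding, memory, and so on); this only illustrates the principle that a stored bit can change what the same circuit does:

```python
# Toy model of a reconfigurable gate: a stored mode bit decides which
# logic the same two inputs flow through, with no physical rewiring.
def configurable_gate(mode, a, b):
    # mode 0 -> the circuit behaves as an AND gate
    # mode 1 -> the circuit behaves as an OR gate
    return (a | b) if mode == 1 else (a & b)
```

Flipping the stored mode bit "reprograms" the gate: with the same inputs 1 and 0, mode 0 outputs 0, while mode 1 outputs 1.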
How Does the Computer Understand the 1s and 0s?
Let’s try to put all of that together.
A computer works by passing 1s and 0s (which are either present or missing electrical signals) through lots and lots of logic gates.
The output of those gates is what has some meaning.
This is done billions of times a second, and that’s how you can have very sophisticated computers.
Putting this in context, let’s think about how the computer allowed me to write all of this.
When I type these letters on my screen, my computer receives a bunch of digital signals from my keyboard (these will be groups of 1s and 0s).
The output from my keyboard is then processed through countless logic gates, along with countless more that run the software ultimately captaining my computer.
The computer knows how to print the letters to my screen and also save them in a file that keeps this information organized.
It’s a lot happening all at once, but it boils down to those two core concepts.
The computer is storing binary signals and putting them through a series of logic gates.