128-Bit Operating Systems: Why Aren’t There Any?

Here’s why there isn’t a 128-bit operating system.

There are mainly 32-bit and 64-bit operating systems, but what about 128-bit operating systems?

So if you want to learn why there aren’t 128-bit operating systems, then you’re in the right place.

Keep reading!

What About Bits and Operating Systems?

Currently, there are 32-bit and 64-bit processors, operating systems, and programs. 

Before these existed, programmers worked with 8-bit and 16-bit systems.

Therefore, it would be logical to assume that new devices and programs with higher bit depth will appear along with the progression of technical development. 

The logical next step should be 128-bits.

But does that make sense?

Let’s figure it out.

The digit capacity in computer science is the number of bits that a device can simultaneously process.

There is:

  • The processor’s bit capacity (the size of its machine word).
  • The data bus capacity.
  • The operating system’s bit capacity.
  • The bit capacity of programs and applications.

These are distinct concepts that overlap and may partially depend on one another. At the lowest level is the processor’s bit capacity.
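
To make this concrete, here is a minimal C sketch (an illustration of my own, not tied to any particular platform) that prints the sizes of a pointer and of a few integer types; a 64-bit build typically reports an 8-byte pointer, while a 32-bit build reports a 4-byte one.

```c
#include <stdio.h>

int main(void) {
    /* The pointer size reflects the address width the program was built for. */
    printf("sizeof(void *) = %zu bytes\n", sizeof(void *));
    /* 'long' is 8 bytes on most 64-bit Unix builds but 4 bytes on 32-bit
       builds (and on 64-bit Windows), so it is only a rough indicator. */
    printf("sizeof(long)   = %zu bytes\n", sizeof(long));
    printf("sizeof(int)    = %zu bytes\n", sizeof(int));
    return 0;
}
```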

Why Are There No 128-Bit Operating Systems? (3 Steps)

We have to consider several issues:

  • The challenges related to bit depth.
  • How the processor’s bit capacity relates to the bit version of the OS.
  • How bit depth has grown historically and why that growth has stalled for now.

At present, increasing the system’s bit capacity beyond 64 bits may be of interest only for a narrow range of applied problems.

With the growth of the bit depth, the calculation accuracy also increases. 

The 128-bit (or higher) architecture is useful for mathematically intensive operations such as graphics, cryptography, and complex system modeling, but not for operating systems.

Based on the above, we can conclude that a system with a 64-bit processor is now sufficient for the bulk of users.


A 64-bit system offers enough computing capacity for most professional applications, such as:

  • Mathematics
  • Physics
  • Geodesy
  • Cartography
  • Cryptography
  • Databases

64 or even 32 bits are wide enough for most practical calculations.

A wider memory bus can speed up the loading of instructions and data, which helps a lot.

Still, each instruction also requires more memory and processing power when it uses more bits.

A higher bit depth of an operating system does not directly mean higher speed. 

Contrary to expectations, increasing the system’s bit capacity does not give a performance increase proportional to the increase in capacity.

On the contrary, it can slow things down because of the need to process longer addresses.

So we believe 128-bit operating systems will not be appearing at your local computer store anytime soon.

But maybe you might work on a 128-bit operating system 10 years from now. 

We’ll just have to wait and see.

Let’s get into the details of why that is so:

#1 Processor’s Bit Capacity

The main characteristic of a processor is its clock frequency.

It is the number of cycles per second.

The bit capacity of the processor, in turn, determines the amount of data it can process per cycle and exchange with Random Access Memory (RAM).

If the size of data per cycle is 1 byte, the processor is called 8-bit.

If the size is 2 bytes, the processor is 16-bit.

If the size is 4 bytes, the processor is 32-bit.

If the size is 8 bytes, it is a 64-bit processor.
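
As an aside, these word sizes map directly onto the fixed-width integer types that C has offered since C99; here is a small sketch of my own showing the correspondence.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* One fixed-width type per historical word size. */
    printf("uint8_t  : %zu byte(s)\n", sizeof(uint8_t));   /* 8-bit word  */
    printf("uint16_t : %zu byte(s)\n", sizeof(uint16_t));  /* 16-bit word */
    printf("uint32_t : %zu byte(s)\n", sizeof(uint32_t));  /* 32-bit word */
    printf("uint64_t : %zu byte(s)\n", sizeof(uint64_t));  /* 64-bit word */
    return 0;
}
```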

Historically, increases in processor bit capacity have been associated more with enlarging the address space and with longer, more complex instructions.

The raw speed gained purely from the wider word, by contrast, is usually modest and has rarely been the main motivation.

In 1978, the era of x86 began with the creation of the Intel i8086 microprocessor.

In the same year, the i8088 was developed; its primary difference was the 8-bit external data bus, which provided compatibility with the 8-bit peripheral chips and memory used earlier.

In 1982, Intel announced the i80286, a second-generation 16-bit x86-compatible microprocessor and an improved version of the Intel 8086.

The main advantages of the new processor were 3-6 times higher performance and additional addressing modes.

However, its main attribute was its compatibility with existing software.

In 1985, Intel released the i80386, perhaps the most significant event in the history of x86 processors. 

It was revolutionary: a 32-bit multitasking processor with the ability to run multiple programs simultaneously. 

The Intel 386 had significantly improved memory management over the i80286 and built-in multitasking, which enabled the development of the Microsoft Windows and OS/2 operating systems.

In fact, until recently, most processors were nothing more than fast 386s. 

A lot of modern software uses the same 386 architecture but works faster.

In 1989, Intel released the 80486 (also known as the i486, Intel 486, or simply the 486).

It is a 32-bit scalar x86-compatible microprocessor of the fourth generation, built on a hybrid CISC-RISC core.

The 80486 is an improved version of the 80386 microprocessor.

In addition, it was the first x86 microprocessor with an embedded math coprocessor (FPU).

In 2003, AMD brought 64-bit computing to x86 with its AMD64 architecture, first shipped in the Opteron and Athlon 64 processors.

To keep up with its competitor, Intel adopted a compatible extension under the designation EM64T (Extended Memory 64 Technology).

Currently, both 32-bit and 64-bit processors are in use. 

Some clarification is needed here. In the designation of processor architectures, we meet the abbreviations x86 and x64. 

Hand holding a computer microchip with motherboard on the background.

x86 denotes a 32-bit processor, while x64 denotes a 64-bit one.

So why is the 32-bit architecture called x86?

The name comes from the early processor models whose numbers ended in 86: the 8086, 80286, 80386, and 80486. The later, 32-bit members of this family defined the architecture that the label x86 now refers to.

With the transition to 64-bit architecture, the width of the processors’ internal registers doubled (from 32 to 64 bits).

As a result, the 32-bit x86 instructions got 64-bit counterparts.

In addition, due to the expansion of the address bus width, the amount of memory addressed by the processor has increased significantly.

One of the main differences between x86 and x64 systems is the use of the computer’s RAM. 

The RAM usage limit for 32-bit systems is 2^32 = 4,294,967,296 bytes, or 4 GB.

If the device has more than 4 GB of RAM, the system will not use the rest.

For 64 bits, it is 2^64 = 18,446,744,073,709,551,616 bytes, or roughly 18 million terabytes.

When will the size of the RAM exceed this value?
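
As a rough illustration of these limits from inside a program, here is a small C sketch of my own (standard headers only); a 32-bit build reports SIZE_MAX as 4,294,967,295, while a 64-bit build reports 18,446,744,073,709,551,615.

```c
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    /* Largest address/object size this build can represent:
       2^32 - 1 on a 32-bit build, 2^64 - 1 on a 64-bit build. */
    printf("SIZE_MAX   : %" PRIuMAX "\n", (uintmax_t)SIZE_MAX);

    /* The 32-bit limit worked out explicitly: 2^32 bytes = 4 GB. */
    uint64_t limit32 = UINT64_C(1) << 32;
    printf("2^32 bytes = %" PRIu64 " bytes = %" PRIu64 " GB\n",
           limit32, limit32 >> 30);

    /* 2^64 itself does not fit in a 64-bit integer; UINT64_MAX is 2^64 - 1. */
    printf("2^64 - 1   = %" PRIu64 " bytes\n", UINT64_MAX);
    return 0;
}
```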

The main disadvantage of x64 is that 64-bit programs use much more RAM to complete their work.

Therefore, if you have only a little bit of RAM, it doesn’t make sense to install x64. 

Besides, you need to consider that the operating system (OS) itself also uses part of the RAM.

The primary rationale for implementing a 64-bit architecture is the development of applications that need large address space. 

The 4 GB RAM limit of 32-bit systems affects the performance of such resource-intensive programs.

When the application of a 64-bit architecture is more efficient: 

  • High-performance servers and database management systems, whose speed scales strongly with the amount of RAM
  • Design systems and the modeling of structures and technological processes, such as geodesy and cartography
  • Computing systems for mathematical and scientific calculations and the simulation of physical experiments
  • Cryptography, whose strength grows with the length of the operands and keys
  • 3D modeling

Also, 64-bit processors allow you to process huge numbers efficiently.

Computing with large numbers or high precision requirements is one of the strong points of the 64-bit architecture, because a double-precision floating-point number fits exactly into 64 bits.

This feature is in demand in some particular operations, such as encryption or media coding.
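
Here is a tiny C illustration of my own (not from the article) of why wider integers matter for big numbers: the same multiplication overflows a 32-bit integer but is exact in a 64-bit one.

```c
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint32_t a32 = 3000000000u;          /* ~3 billion, still fits in 32 bits */
    uint64_t a64 = 3000000000u;

    /* Squaring ~3e9 needs ~9e18, far beyond the 32-bit limit of ~4.29e9,
       so the 32-bit result wraps around (modulo 2^32). */
    uint32_t sq32 = a32 * a32;
    uint64_t sq64 = a64 * a64;

    printf("32-bit: %" PRIu32 " (wrapped)\n", sq32);
    printf("64-bit: %" PRIu64 " (exact)\n", sq64);
    return 0;
}
```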

#2 Bit Versions of the Operating Systems

As we learned above, the bit capacity of the processor is primary.

To choose the right OS, you need to know your processor’s bit capacity and the amount of RAM.

If you have a 64-bit processor and more than 4 GB of RAM (ideally 6 GB or more), it is definitely worth installing a 64-bit system.

If the amount of RAM in your system is exactly 4 GB, people sometimes install a 64-bit system so that they won’t lose half a gigabyte of memory.

But this is a fallacy, because the 64-bit system uses more memory for its own operation, which makes such an installation impractical.

When the amount of RAM does not exceed 4 GB and the processor only operates in 32-bit mode, there is nothing left to do but install the 32-bit OS.


So what difference does x64 make?

A 64-bit OS “sees” large amounts of memory, knows how to work with them, and allows you to run 64-bit applications.

Every bit you add to the architecture doubles the number of available addresses.

Addresses are the number of combinations that you can form with a given number of bits. For example:

1 bit = 0 or 1, which adds up to 2 combinations

2 bits = 00, 01, 10, or 11, which adds up to 4 combinations

3 bits = 000, 001, 010, 011, 100, 101, 110, or 111, for a total of 8 combinations

Thus, going from 32-bits (which is a total of 4,294,967,296 combinations) to 64-bits (which is a total of 18,446,744,073,709,551,616 combinations) is already redundant.
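
A short C loop (my own sketch) reproduces this doubling, printing 2^n for a few widths up to the 32- and 64-bit cases discussed above.

```c
#include <stdio.h>

int main(void) {
    /* Each extra bit doubles the number of distinct combinations. */
    for (unsigned bits = 1; bits <= 8; bits++) {
        printf("%2u bits -> %llu combinations\n", bits, 1ULL << bits);
    }
    printf("32 bits -> %llu combinations\n", 1ULL << 32);
    /* 2^64 does not fit in a 64-bit integer, so state it directly. */
    printf("64 bits -> 18446744073709551616 combinations\n");
    return 0;
}
```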

And it’s not just the addressable space that has increased dramatically. Look at this table of limits for Windows operating systems:

Architectural component   64-bit Windows    32-bit Windows
Virtual memory            16 terabytes      4 GB
Paging file size          256 terabytes     16 terabytes
Hyperspace                8 GB              4 MB
Paged pool                128 GB            470 MB
Non-paged pool            128 GB            256 MB
System cache              1 terabyte        1 GB
System PTEs               128 GB            660 MB

The 64-bit OS will also let you run regular 32-bit programs; you don’t need any special settings for this.

It’s just that a 64-bit system has a subsystem for executing 32-bit applications. 

Therefore, you can successfully install and work with both 32-bit and 64-bit applications. 

However, while 32-bit applications can run in a 64-bit OS, it doesn’t work the other way around!
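
As an illustration, on Windows a program can ask whether it is a 32-bit process running under the 64-bit OS’s compatibility layer (WoW64). Here is a minimal sketch of my own using the documented Win32 call IsWow64Process (Windows-only; compile it as a 32-bit program to see the WoW64 case).

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    BOOL isWow64 = FALSE;

    /* IsWow64Process reports whether the current process is a 32-bit
       process running on a 64-bit Windows installation. */
    if (IsWow64Process(GetCurrentProcess(), &isWow64)) {
        if (isWow64) {
            puts("32-bit program running on 64-bit Windows (via WoW64).");
        } else {
            puts("No WoW64: either a native 64-bit program or a 32-bit OS.");
        }
    } else {
        puts("IsWow64Process failed.");
    }
    return 0;
}
```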

The next technical point is that 64-bit operating systems require 64-bit drivers. 

As a rule, all modern PC devices, laptops, and peripherals have two driver versions on the accompanying installation disc; that is, 32- and 64-bit.

When working with modern devices, there should be no problems with this.

#3 128-Bit Operating Systems

In computer science and computer technology, 128-bit designates structures and data types that occupy 128 bits, or 16 bytes, of memory.

In a 128-bit computer architecture, the fundamental components, such as registers, address buses, and data buses, are 128 bits wide.

Researchers described the 128-bit multi-comparator in 1976.

The design of a central processing unit (CPU) with 128-bit multimedia extensions appeared in 1999.

Unbeknownst to many of us, we are already using 128-bit modes.

True, there are no mainstream general-purpose processors whose native word is 16 bytes.

However, 128-bit capacity has been present on the general market for at least fifteen years in a limited form.

There have also been experimental and commercial developments even earlier, most notably the DEC VAX modifications.

The beginning was laid by the SSE “multimedia” instructions (following MMX) in the late 90s, which manipulate 128-bit registers, albeit not as a single integer but as several smaller numbers packed together.

In the 2000s, Transmeta Corporation made a splash: its chips used 128-bit-wide instruction words internally to accelerate the translation and execution of other vendors’ machine code.

Today, the latest version of the most popular OS, MS Windows, will refuse to work on a computer whose processor and motherboard do not support the CMPXCHG16B assembler instruction, which operates on a 128-bit value.

Finally, many supporting technologies in mainstream computing already use 16-byte values: memory in graphics cards, IPv6 addressing, and ZFS (the zettabyte file system).

They would all benefit if central microprocessors went up to 128 bits.
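
As a concrete taste of the 128-bit modes already in use, here is a small C sketch of my own using the standard SSE2 intrinsics, which operate on 128-bit registers holding four packed 32-bit integers.

```c
#include <emmintrin.h>   /* SSE2 intrinsics: 128-bit __m128i registers */
#include <stdio.h>

int main(void) {
    /* Two 128-bit values, each holding four 32-bit integers. */
    __m128i a = _mm_set_epi32(4, 3, 2, 1);
    __m128i b = _mm_set_epi32(40, 30, 20, 10);

    /* One instruction adds all four lanes at once. */
    __m128i sum = _mm_add_epi32(a, b);

    int out[4];
    _mm_storeu_si128((__m128i *)out, sum);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
    return 0;
}
```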

Do You Need a 128-Bit Operating System? (Benefits and Disadvantages)

64-bit addressing gives programs direct access to a memory space roughly a billion times larger than what today’s PCs actually ship with.

Still, even for specialized supercomputers, this limitation can be overcome by other architectural changes beyond merely increasing the number of bits.


In a typical computer program, most numbers rarely exceed a few million, far less than what 64 bits can hold, which amounts to billions of billions (18,446,744,073,709,551,616).

Computer programs also commonly use so-called floating-point numbers, which represent fractional numbers. Here, the number of bits improves the precision. 

However, floating-point bit precision has little to do with operating system bits or memory size.
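
A quick C illustration of my own of the precision point: a 32-bit float has a 24-bit significand, so it cannot represent 16,777,217 exactly, while a 64-bit double can.

```c
#include <stdio.h>

int main(void) {
    /* 16,777,217 = 2^24 + 1: one past the largest run of exactly
       representable integers in a 32-bit float. */
    float  f = 16777217.0f;   /* rounds to 16,777,216 */
    double d = 16777217.0;    /* represented exactly   */

    printf("float : %.1f\n", f);   /* prints 16777216.0 */
    printf("double: %.1f\n", d);   /* prints 16777217.0 */
    return 0;
}
```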

Benefits of Bit Growth on OSs and CPUs:

  • Increased address space when moving from 32-bit to 64-bit, though most of that space will be hard to put to use for the foreseeable future.
  • Higher accuracy of calculations. With an increase in bit depth, calculation accuracy increases too. But so far, this requirement arises only in specialized applied calculations.

Jumping to 64 bits lets you process numbers up to 64 bits wide in a single arithmetic operation.

However, the need to work with numbers beyond about two billion (the limit of a signed 32-bit integer) is infrequent.

Thus, the real benefits of this innovation are possible for cryptography and serious scientific research.

Disadvantages of Bit Growth on OSs and CPUs:

  • Increasing the word length requires more memory for every value stored, something already noticeable in the move from 32-bit to 64-bit systems, while a large increase in RAM and permanent storage is constrained by cost.
  • Each time the bit depth increases, users’ software has to be replaced or rebuilt for the new architecture.

Author

  • Theresa McDonough

    Tech entrepreneur and founder of Tech Medic, who has become a prominent advocate for the Right to Repair movement. She has testified before the US Federal Trade Commission and been featured on CBS Sunday Morning, helping influence change within the tech industry.
