What is the Fastest Computation Speed Possible

By Shane Staret on 2017-11-08

Computers. They make life a whole lot easier. From being able to connect with people to solving complex math problems, they just do it all. And they can do it really fast. Yet interestingly, even the most powerful supercomputers are not as efficient as the human brain.

While supercomputers may not be more efficient than the human brain, we still have managed to create machines that are more powerful. The Sunway TaihuLight is currently the fastest supercomputer in existence, with a peak performance of 125 petaFLOPS. Basically, “FLOPS” is a way to measure processing speed: one FLOPS is one floating-point arithmetic calculation per second. A petaFLOPS is 10^15 FLOPS...that means this supercomputer can perform over 125,000,000,000,000,000 floating-point calculations per second. Goddamn. Meanwhile, it took me about 15 seconds today to figure out that 6^2 + 3 is the same thing as 6 * 6 + 3.
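
Just to make the scale concrete, here is a minimal sketch in plain Python (my own illustration, using only the figures quoted above):

```python
# A minimal sketch of the scale involved: converting 125 petaFLOPS into raw
# floating-point operations per second.
PETAFLOPS = 10**15                    # 1 petaFLOPS = 10^15 floating-point operations per second

taihulight_peak = 125 * PETAFLOPS     # the Sunway TaihuLight's quoted peak rate
print(f"{taihulight_peak:.3e} floating-point operations per second")  # -> 1.250e+17
```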

What makes the Sunway TaihuLight’s impressive processing power even more unbelievable is the fact that supercomputers cracked the petaFLOPS barrier less than a decade ago, in 2008. That means the processing power of the most powerful supercomputer grew roughly 125-fold, an increase of more than 12,000 percent, in just nine years. But, there has to be a limit...right? Of course there is a limit to how quickly a computer can process information.
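
A quick back-of-the-envelope check of that growth figure (a sketch only, assuming a baseline of roughly 1 petaFLOPS in 2008, the year IBM’s Roadrunner first crossed that mark):

```python
# Rough growth check, assuming ~1 petaFLOPS as the 2008 baseline (IBM Roadrunner)
# and the TaihuLight's 125 petaFLOPS in 2017.
baseline_2008 = 1.0      # petaFLOPS
taihulight_2017 = 125.0  # petaFLOPS

factor = taihulight_2017 / baseline_2008
print(f"{factor:.0f}x growth, i.e. an increase of {100 * (factor - 1):.0f}% in nine years")
# -> 125x growth, i.e. an increase of 12400% in nine years
```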

The first major thing holding modern computers back is the transistor. You see, the transistor is the physical component that allows a computer to process information, so the more of them you can fit within a circuit, the more processing you can accomplish. Over the past few decades, the size of the transistor has shrunk immensely. The very first working transistor, created in 1947, could fit in someone’s hand (about 9 cm), while the smallest experimental transistors today measure about 1 nm.

Many scientists thought that creating a transistor smaller than 5 nm would be impossible due to quantum tunneling, yet that barrier has already been broken in the lab, and now researchers are stuck again trying to shrink the transistor even further.

But the size of the transistor is not the only thing that determines the speed limit of a computer. In fact, we may not even have to worry about shrinking the transistor anymore if quantum computers ever become the norm.

Instead, Seth Lloyd from the MIT Department of Mechanical Engineering states, “In particular, the speed with which a physical device can process information is limited by its energy and the amount of information that it can process is limited by the number of degrees of freedom it possesses”. What does that all mean? It basically means that how fast a computer can run is capped by the energy it has available, and how much information it can handle is capped by the number of degrees of freedom (independent physical states) it possesses. Energy must be used in order to process data, which is part of why computers actually have to be plugged in to work.

Seth Lloyd and his team go over all of the specific equations in their paper, but the gist of the math is that by using E = mc^2, where m is the mass of the computer and c is the speed of light, the energy (E) available to a computer can be determined. This is assuming ideal conditions. Then, the number of operations a computer could process per second is found by taking that energy (E), doubling it, and dividing by π times ℏ, where ℏ is the reduced Planck constant. So, the equation for the maximum number of operations per second is # = 2E / (πℏ). Basically, the more massive a computer is, the more energy it has available, meaning a larger number of operations can take place per second.
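
Here is a small sketch of that bound in Python (my own illustration, not code from Lloyd’s paper), assuming the 2E / (πℏ) formula above and standard values for c and ℏ:

```python
import math

C = 2.998e8        # speed of light, m/s
HBAR = 1.0546e-34  # reduced Planck constant, J*s

def max_ops_per_second(mass_kg: float) -> float:
    """Upper bound on operations per second for a computer of the given mass,
    assuming all of its mass-energy (E = m c^2) goes toward computation."""
    energy = mass_kg * C ** 2              # E = m c^2
    return 2 * energy / (math.pi * HBAR)   # ops/sec <= 2E / (pi * hbar)

print(f"{max_ops_per_second(1.0):.3e} operations per second for 1 kg")  # -> ~5.426e+50
```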

Using the above equations, you can find that a one kilogram computer, assuming every bit of its mass-energy could be put toward computation, could perform up to roughly 5.4 x 10^50 operations per second. So the world’s fastest supercomputer, the Sunway TaihuLight, is roughly 4 x 10^33 times slower than the theoretical limit for a computer weighing just one kilogram. By the way, the Sunway TaihuLight is much more massive than one kilogram, so we still have quite a ways to go.
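
A rough cross-check of that gap, using the same constants as the sketch above and the 125 petaFLOPS figure quoted earlier:

```python
import math

C, HBAR = 2.998e8, 1.0546e-34   # speed of light (m/s), reduced Planck constant (J*s)

one_kg_limit = 2 * (1.0 * C ** 2) / (math.pi * HBAR)  # ~5.4e50 ops/s for a 1 kg computer
taihulight = 125e15                                   # 125 petaFLOPS

print(f"gap: about {one_kg_limit / taihulight:.1e}x")  # -> about 4.3e+33x
```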

But let’s get crazy. Say we could build a computer that is the mass of the entire observable universe (roughly 10^53 kg). We would get a computer that has a limit of 5.426 x 10^103 operations per second. That is far more operations per second than there are atoms in the observable universe (roughly 10^80).
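
The same sketch scaled up to that universe-sized case (again just an illustration, assuming the ~10^53 kg figure quoted above):

```python
import math

C, HBAR = 2.998e8, 1.0546e-34   # speed of light (m/s), reduced Planck constant (J*s)
universe_mass_kg = 1e53         # rough mass of ordinary matter in the observable universe

limit = 2 * universe_mass_kg * C ** 2 / (math.pi * HBAR)
print(f"{limit:.3e} operations per second")  # -> ~5.426e+103, dwarfing the ~1e80 atoms out there
```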

So yeah, we have only touched the tip of the iceberg so far. Modern-day supercomputers are fast, but in the grand scheme of things, it is possible to go much, much faster. Realistically, however, we will never hit the computational speed limit. It should be impossible for any real machine to reach that mass-energy limit, and unless we plan on building a computer the mass of the entire Earth any time soon, it looks like we will have to stick with computers that only perform 125,000,000,000,000,000 operations per second. What a bummer.
