Will reversible logic help extend Moore’s Law? It's hard to tell.

September 27, 2021


Soham Bhattacharya

In the current era of semiconductor technology, reversible computing could play a vital role in reducing power dissipation and improving energy efficiency, whether in full processors or in simple and complex digital logic circuits. Several active lines of research promise gains in energy efficiency, and reversible computing is among the most intriguing.

Moore’s law observed that the number of transistors on a chip doubles roughly every two years; as transistors were made smaller, they also became faster, cheaper, and more efficient. Because of this, the semiconductor industry has long been busy fabricating smaller, more densely packed transistors. But today, some experts in industry and academia say that shrinking transistors further will no longer yield the improvements it used to. Clock speeds stagnated over a decade ago, which is the reason chipmakers turned to building chips with multiple cores. Even these multicore processors face the “dark silicon” problem: a portion of the chip must be powered off at any given time to avoid overheating. And the breakdown of Dennard scaling poses a severe design challenge, because supply voltage no longer scales down in line with transistor size.

What is Reversible Computing?

The dark silicon problem has created serious difficulties for the power budgets and design of chips. Reversible computing could play a vital role in easing them. The idea of reversible computing goes to the very heart of thermodynamics and information theory: it is the only conceivable way, within the laws of physics, to keep improving the cost and energy efficiency of general-purpose computing. To understand how the concept emerged, we have to go back a little into the past.


Physicist Rolf Landauer of IBM published a paper in 1961 titled “Irreversibility and Heat Generation in the Computing Process.” Landauer argued that the logically irreversible character of conventional computational operations has direct consequences for the thermodynamic behavior of the device carrying them out. Landauer’s conclusion, which has since been experimentally confirmed, was that each bit erasure must dissipate at least about 17 thousandths of an electron volt at room temperature. That is a tiny amount of energy, but given all of the operations a computer performs, it adds up.

Present-day CMOS technology does far worse than Landauer’s figure, dissipating something in the neighborhood of 5,000 electron volts per bit erased. Conventional CMOS designs can be improved in this regard, but they will never get much below roughly 500 eV of energy lost per erased bit, still a long way from Landauer’s lower limit.
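For readers who want to check the arithmetic, here is a minimal Python sketch that recomputes Landauer’s bound, kT ln 2, and compares it with today’s CMOS. The 300 K room temperature and the 5,000 eV figure are assumptions taken from the numbers quoted above:

```python
import math

# Boltzmann constant in electron volts per kelvin.
K_B_EV = 8.617333262e-5

def landauer_limit_ev(temperature_k: float) -> float:
    """Minimum energy, in eV, dissipated when one bit is erased."""
    return K_B_EV * temperature_k * math.log(2)

limit = landauer_limit_ev(300.0)  # assume room temperature ~300 K
print(f"Landauer limit at 300 K: {limit:.4f} eV")  # ~0.018 eV
print(f"CMOS at ~5,000 eV/bit is ~{5000 / limit:,.0f}x above the limit")
```

Depending on the exact room temperature assumed, the bound works out to roughly 17 to 18 thousandths of an electron volt, putting today’s CMOS nearly 300,000 times above it.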


Then, in 1973, Charles Bennett showed that it is feasible to build fully reversible computers capable of performing any computation. The trick is to undo the operations that produced intermediate results once they are no longer needed. This allows any temporary memory to be reused for subsequent calculations without ever being erased or overwritten. Reversible computations, if carried out on the right hardware, could therefore evade Landauer’s limit.
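Bennett’s trick is easy to sketch in code. The toy function below (its names and structure are illustrative, not taken from Bennett’s paper) builds its answer using only self-inverse XOR updates, copies the result out reversibly, and then replays its steps backward so the scratch bit returns to zero without ever being erased:

```python
# A toy illustration of Bennett's compute-copy-uncompute pattern.
# Every primitive is a self-inverse XOR update, so replaying the
# recorded steps in reverse restores all scratch space to zero.

def reversible_and(a: int, b: int) -> int:
    """Compute a AND b while leaving no garbage bits behind."""
    scratch = 0
    steps = []

    # Forward pass: scratch ^= a AND b (a reversible update).
    scratch ^= a & b
    steps.append((a, b))

    # Copy the answer into a fresh zero bit with a reversible XOR.
    result = 0
    result ^= scratch

    # Uncompute: run the recorded steps backward (XOR is its own inverse).
    for x, y in reversed(steps):
        scratch ^= x & y

    assert scratch == 0  # nothing was erased; the scratch bit is reusable
    return result

print(reversible_and(1, 1), reversible_and(1, 0))  # prints: 1 0
```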

Unfortunately, Bennett’s concept of using reversible processing to make computation far more energy efficient languished in academic backwaters for decades. The problem is that it is very hard to design a system that does anything computationally interesting without incurring a substantial entropy increase with every operation. But technology has improved, and the need to limit energy use has become acute, so a few researchers are once again looking to reversible computing to save energy.

In the late 1970s and early 1980s, two scientists, Edward Fredkin and Tommaso Toffoli, took the problem more seriously and devised two essential abstract computational primitives, the “Fredkin gate” and the “Toffoli gate.” Several other reversible gates came into play afterward, such as the Feynman gate, double Feynman gate, HNG gate, and TSG gate. These gates have since been used to implement many of the combinational and sequential logic circuits found in the semiconductor industry.
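To make these primitives concrete, here is a minimal bit-level model of the two gates in Python, a sketch of their truth-table behavior rather than a circuit implementation. The assertions check the key property that each gate is its own inverse, so no information is ever destroyed:

```python
# Bit-level models of the two classic reversible gates.

def toffoli(a: int, b: int, c: int):
    """Controlled-controlled-NOT: flip c when both controls a and b are 1."""
    return a, b, c ^ (a & b)

def fredkin(c: int, x: int, y: int):
    """Controlled swap: exchange x and y when the control bit c is 1."""
    return (c, y, x) if c else (c, x, y)

# Each gate undoes itself: applying it twice restores the original input.
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits
    assert fredkin(*fredkin(*bits)) == bits

# With its target bit set to 0, the Toffoli gate computes AND reversibly.
print(toffoli(1, 1, 0))  # (1, 1, 1) -- the third output is a AND b
```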

Ressler took the first step in investigating the requirements for practical reversible computers, implementing a simple accumulator-based machine using only Fredkin gates. He discussed control-flow issues and introduced the idea of a garbage stack to hold the extra operands produced by otherwise irreversible operations. Still, his design bears little resemblance to a modern processor: the data path in Ressler’s work has no explicit forward and reverse components, relying instead on the instruction set and the reversible Fredkin gates to guarantee reversibility. Later, researchers such as Hall and Baker took up further issues in reversible computing. Pendulum, a reversible computer architecture described by Carlin James Vieri in 1995, extended the earlier work of Ressler, Hall, and Baker; the thesis assumed that bit erasure must be avoided and that all computation must be logically reversible.
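The garbage-stack idea itself is simple enough to sketch. In the toy register file below (hypothetical names, loosely in the spirit of Ressler’s scheme rather than his actual design), a destructive write saves the overwritten operand so that every step remains undoable:

```python
# A toy register file where destructive writes are made reversible
# by pushing the overwritten operand onto a garbage stack.

class ReversibleRegisters:
    def __init__(self, count: int):
        self.regs = [0] * count
        self.garbage = []  # (register index, old value) pairs

    def write(self, reg: int, value: int):
        """Overwrite a register, stashing the old value so the step is undoable."""
        self.garbage.append((reg, self.regs[reg]))
        self.regs[reg] = value

    def undo(self):
        """Reverse the most recent write by restoring the saved operand."""
        reg, old = self.garbage.pop()
        self.regs[reg] = old

m = ReversibleRegisters(2)
m.write(0, 42)
m.write(0, 7)
m.undo()
print(m.regs[0])  # prints 42 -- the overwritten value was never lost
```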

All things considered, though, reversible computing is not easy. The design obstacles are formidable. Achieving efficient reversible computing with any technology will probably require a thorough overhaul of our entire chip-design infrastructure, and a large fraction of digital engineers will need retraining in the new design methodologies. But the difficulty of these challenges would be a sorry excuse for not facing up to them. We have arrived at a noteworthy point in the evolution of computing technology, and we must pick a path soon. If we continue on our current course, we would effectively be giving up on the future of computing, accepting that the energy efficiency of our hardware will soon plateau. Even a quantum-computing breakthrough would only dramatically accelerate a few specialized classes of computation, not computing in general. So, as far as we can tell, an unbounded future for computing awaits us, if we are bold enough to seize it.

Soham Bhattacharya is studying for his Master’s degree in Electronics and Communication Engineering at Heritage Institute of Technology, Kolkata, India.
