IBM In-Memory Computing Architecture Speeds Up Computers 200 Times

Yes, it's true: IBM's in-memory computing architecture could increase the speed of computers by 200 times. In-memory computing, also called computational memory, is an emerging concept.

Technology giant IBM has demonstrated an unsupervised machine-learning algorithm running on one million phase-change memory (PCM) devices. PCM is a type of non-volatile memory that stores data by changing the state of its material.
Phase-change memory works by manipulating the state of a chalcogenide glass, the same class of material used to store data on rewritable Blu-ray discs.
An electric current is applied to switch a PCM cell between an amorphous and a crystalline structure, allowing each cell to store a 0 or a 1; applying a low voltage reads the data back without disturbing the state.
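The set/reset/read behavior described above can be sketched as a toy model. This is purely illustrative (the class and names are my own, not IBM's hardware interface): a strong pulse writes by changing phase, and a weak probe reads the resistance without altering it.

```python
from enum import Enum

class Phase(Enum):
    AMORPHOUS = 0    # high resistance -> read as logical 0
    CRYSTALLINE = 1  # low resistance -> read as logical 1

class PCMCell:
    """Toy model of a single phase-change memory cell (illustrative only)."""
    def __init__(self):
        self.phase = Phase.AMORPHOUS  # start in the high-resistance state

    def write(self, bit):
        # A strong current pulse either melts and quenches the material
        # (reset -> amorphous) or anneals it (set -> crystalline).
        self.phase = Phase.CRYSTALLINE if bit else Phase.AMORPHOUS

    def read(self):
        # A low-voltage probe senses resistance without changing the phase.
        return self.phase.value

cell = PCMCell()
cell.write(1)
print(cell.read())  # -> 1
```

The key property the article relies on is that the read path is non-destructive, so the stored state can double as the result of a computation.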
For newcomers: in-memory computing refers to storing information in the RAM of dedicated servers rather than in relational databases that operate on slower disk drives.
But it is more than that: the emerging concept aims to replace the traditional von Neumann computer architecture, which separates computation and memory into two different devices.
In traditional computers, transferring data between RAM and the CPU slows down processing and consumes extra power.
In the demonstration, IBM's algorithm ran on one million PCM devices.
The details are explained in a paper published in the scientific journal Nature Communications. To demonstrate the technology, the authors selected two time-based examples and compared their results with traditional unsupervised machine-learning methods such as k-means clustering.
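For context, k-means, the classical baseline mentioned above, alternates between assigning points to their nearest center and recomputing each center as the mean of its points. A minimal NumPy sketch of that baseline (not the paper's actual experiment or data) looks like this:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: the classical CPU baseline referenced in the article."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Update step: each center moves to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated synthetic blobs -> k-means should recover them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
centers, labels = kmeans(X, k=2)
```

The PCM approach performs the equivalent of these distance/update steps inside the memory devices themselves, which is where the claimed speed and energy gains come from.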
In comparison to classical machines, this innovation is expected to bring a 200-fold improvement in both speed and energy efficiency.
As a result, this technology should be able to “enable ultra-low-power and massively parallel computing systems for AI (artificial intelligence) applications.”
However, BLU Acceleration is IBM's vision of what in-memory database technology should look like. It takes the basic concept of an in-memory database one step further with eight features, including:
  • On-chip computing
  • Data skipping
  • Compressed data analysis, which makes queries faster, simpler, and more agile
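To make the second item concrete: data skipping typically works by keeping small synopses (such as the min/max value per storage block) so that blocks which cannot possibly match a query are never read at all. The sketch below illustrates that general idea; it is my own toy example, not IBM's BLU implementation.

```python
# Toy "data skipping": keep a (min, max) synopsis per block so a query can
# skip blocks that cannot contain matching rows. Illustrative only.

def build_synopses(values, block_size=4):
    blocks = [values[i:i + block_size] for i in range(0, len(values), block_size)]
    return blocks, [(min(b), max(b)) for b in blocks]

def query_greater_than(blocks, synopses, threshold):
    hits, scanned = [], 0
    for block, (lo, hi) in zip(blocks, synopses):
        if hi <= threshold:   # no row in this block can match -> skip it
            continue
        scanned += 1          # only blocks that might match are read
        hits.extend(v for v in block if v > threshold)
    return hits, scanned

values = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]
blocks, syn = build_synopses(values)
hits, scanned = query_greater_than(blocks, syn, 8)
# Only one of the three blocks has max > 8, so only one block is scanned.
```

The fewer blocks a query has to touch, the less data crosses the memory bus, which is the same bottleneck the rest of the article is about.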
PCM devices are built from a germanium-antimony-tellurium alloy stacked between two electrodes. When a small electric current heats the material, its state changes from amorphous to crystalline (yes, that's chemistry terminology, folks).

“This is a major breakthrough in our research into the physics of AI, which explores new hardware, devices, and architectures,” says Dr. Evangelos Eleftheriou, an IBM Fellow and co-author of the paper.

As CMOS scaling approaches its technological limits, a radical shift away from the processor-memory dichotomy is needed to bypass the limitations of today's computers.
Given the simplicity, high speed, and low energy of our in-memory computing approach, it is remarkable that our results are so similar to those of the classical reference approach executed on a von Neumann computer.
The best part of BLU Acceleration is that customers have validated IBM's claims about performance gains, improved compression, and reduced time to value.

For example, at the University of Toronto, BLU is delivering on its promises: queries are running well and excellent compression results have been observed.

The Coca-Cola Bottling Company's experience is similar: BLU Acceleration is helping to speed up decision-making.
Moreover, by relying on main memory to store active data sets, in-memory technology eliminates the latency of moving data through slow rotating-disk storage.
Organizations can access vital information faster than ever before, opening up new opportunities for them to gain an edge over their competition.
“The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes,” says Dr. Abu Sebastian, a scientist at IBM Research.
Machine-learning algorithms that use large datasets will also see a speed increase, since the latency overhead of reading data between iterations is reduced, the company wrote.
Compared to flash, which can support about 3,000 write cycles, PCM can support up to 10 million, making it a potentially game-changing technology for data centers.
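A quick back-of-envelope calculation shows what those endurance figures imply. The cycle counts come from the article; the write rate of 100 writes per cell per day is an assumed workload for illustration only.

```python
# Endurance figures quoted in the article.
FLASH_CYCLES = 3_000
PCM_CYCLES = 10_000_000

# Assumed workload (illustrative assumption, not from the article).
WRITES_PER_DAY = 100

flash_days = FLASH_CYCLES / WRITES_PER_DAY       # 3,000 / 100 = 30 days
pcm_years = PCM_CYCLES / WRITES_PER_DAY / 365    # 100,000 days, roughly 274 years
print(flash_days, round(pcm_years, 1))
```

Under that (hypothetical) write rate, a flash cell would wear out in a month while a PCM cell would outlast the hardware around it, which is why endurance matters for write-heavy data-center workloads.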

Please share your thoughts if you found this article informative.
