
IBM Research says it has developed a new approach to in-memory computing that could give it an answer to the hardware accelerators for high-performance computing and machine-learning applications being pursued by Microsoft and Google. IBM's researchers describe the new "mixed-precision in-memory computing" approach in a paper published today in the peer-reviewed journal Nature Electronics. The company is pursuing an alternative to traditional computing architectures, in which software requires data transfers between separate CPU and RAM units. According to IBM, that design, known as the von Neumann architecture, creates a bottleneck for data-analytics and machine-learning applications, which require ever-larger data transfers between processing and memory units. Moving that data is also energy-intensive.
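The general idea behind mixed-precision computing of this kind can be sketched in software: do the bulk of the arithmetic at low precision (standing in for the analog in-memory hardware) and recover full accuracy with a high-precision correction loop. The sketch below is an illustrative iterative-refinement solver for a linear system, not IBM's actual algorithm or hardware; the `quantize` helper and all parameter choices are assumptions for the example.

```python
import numpy as np

def quantize(M, bits=6):
    """Round values onto a coarse grid, mimicking a low-precision
    (e.g., analog in-memory) representation of the matrix."""
    scale = np.max(np.abs(M)) / (2 ** (bits - 1) - 1)
    return np.round(M / scale) * scale

def mixed_precision_solve(A, b, bits=6, outer_iters=50, tol=1e-10):
    """Solve A x = b by iterative refinement: the correction step uses
    only a low-precision copy of A, while the residual is computed and
    the solution accumulated in full (64-bit) precision."""
    A_lp = quantize(A, bits)              # low-precision copy of A
    x = np.zeros_like(b)
    for _ in range(outer_iters):
        r = b - A @ x                     # residual in full precision
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = np.linalg.solve(A_lp, r)      # approximate correction from
                                          # the low-precision operator
        x = x + z                         # high-precision accumulation
    return x
```

The scheme converges as long as the quantized operator is a good enough approximation of the true one (roughly, the quantization error must be small relative to the matrix's conditioning), which is why the low-precision unit can be noisy and coarse while the final answer still reaches full accuracy.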
