
Companies increasingly deploy machine learning (ML) to improve business processes and enable real-time, automated decision-making. When doing so, they often face application speed and scalability challenges.

In the first of this two-part series, we look at how in-memory computing (IMC) can provide the necessary speed and scale for real-time applications. In Part 2, we will explore the role of IMC in ML and deep learning (DL).

To take advantage of ML in this way, companies must be able to guarantee real-time application performance at scale. They may also need to update their machine learning models frequently.

Finally, they must accomplish all this while minimising infrastructure costs.

Consider these examples:

  • Credit card companies are under pressure from card issuing banks to shift from nightly to hourly (or even more frequent) updates of their ML-powered fraud detection models to enable faster detection of new fraud vectors.
  • E-commerce sites want to improve the relevance of their recommendation engines by more frequently updating their machine learning models based on the latest user website interaction data.
  • Enterprises across multiple industries want to improve information security and detect new threats faster by more frequently updating their model of ‘normal’ network activity.

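The common thread in these examples is the need to swap in a freshly trained model without pausing real-time scoring. Below is a minimal sketch of that pattern using an in-process stand-in for an in-memory store; the ModelStore class and the toy threshold "models" are hypothetical illustrations, not the API of any particular IMC product.

    import threading
    import time

    # Hypothetical in-process stand-in for an in-memory store: a model slot
    # that can be swapped atomically while requests keep being served.
    class ModelStore:
        def __init__(self, model):
            self._lock = threading.Lock()
            self._model = model

        def predict(self, features):
            # Grab the current model under the lock, then score the request.
            with self._lock:
                model = self._model
            return model(features)

        def refresh(self, new_model):
            # Install a newly trained model without stopping serving.
            with self._lock:
                self._model = new_model

    # Toy "models": each is just a scoring function over a feature vector.
    def model_v1(features):
        return sum(features) > 1.0   # initial fraud threshold

    def model_v2(features):
        return sum(features) > 0.8   # retrained, more sensitive threshold

    store = ModelStore(model_v1)

    def periodic_retrain():
        # In a real deployment this would retrain on the latest data held
        # in the in-memory grid; here we simply install a replacement.
        time.sleep(1)
        store.refresh(model_v2)

    threading.Thread(target=periodic_retrain, daemon=True).start()

    print(store.predict([0.5, 0.4]))   # scored by model_v1 -> False
    time.sleep(1.5)
    print(store.predict([0.5, 0.4]))   # scored by model_v2 -> True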
For companies deploying deep learning technologies, another key consideration is minimising the cost and complexity of their DL infrastructure.

Read more here:

https://www.computerweekly.com/blog/CW-Developer-Network/In-memory-for-continuous-simpler-deep-learning-Part-1
