Changing the rules of the game: Why in-memory computing is like hydrofoils


After SAS® Global Forum in Denver, I took a holiday in San Francisco. Like all good tourists, I obviously went to visit the Golden Gate Bridge. While I was walking across it, I noticed a windsurfer getting what I thought was a bit close to the mouth of San Francisco Bay, especially given that the tide was going out. I do a bit of sailing, and it really looked like he was in trouble. The strong wind and the current were moving him rapidly towards the open sea, and much faster than he could surf upwind. I was just wondering whether I ought to do something when the scenario suddenly changed.

The surfer managed to gain enough speed to lift his board onto its hydrofoils. Suddenly he was flying. The waves and current were not bothering him anymore and he was gaining tremendous speed. Within a couple of seconds he was gone, back towards San Francisco and Alcatraz.


A real game-changer

It was not the first time that I had seen sailboats or windsurfers with hydrofoils. At Lake Neusiedl, where I sail in summer, a few smaller dinghies and boards use this technology. The 34th America’s Cup was the first major public sailing event where large catamarans were equipped with hydrofoils that allow them to lift out of the water at a certain speed and virtually fly over it. It was, however, the first time that I really appreciated how much foils had changed the game by allowing surfers or sailors to stand up against forces of nature that would previously have held them back.

This started me thinking about the similarity between this technology and distributed in-memory computing for machine learning and artificial intelligence. Over the last couple of years, computing technology and high-speed surfing and sailing have both evolved massively. Today’s possibilities, in terms of speed and power, are far beyond what we were discussing even just 10 years ago. There are some interesting parallels.

First of all, neither technology is especially new, but they are being used in new ways to deliver huge increases in capability. Foils were first patented back in 1898, but new materials and new control systems have made them viable at long last. With high-performance in-memory computing, distributed systems and GPUs, we can solve problems that were largely the stuff of science fiction when I started working in data science 20 years ago. We are not talking about a speed gain of 20, 50 or 100 percent, but multiples of computing performance.

This technology allows us to run analyses far more quickly than before. It also lets us study large data volumes with advanced machine learning algorithms, something that was impossible in the past, because both the volume of available data and our capacity to handle it have grown.

Finally, for both hydrofoils and machine learning, the hardware is important: the carbon foil, or the RAM and CPUs in a server. However, neither would be usable without the control systems and surrounding technology that make them manageable, even for relative amateurs. Foils are now available not just for America’s Cup yachts but also for other dinghies and boards. Machine learning and AI methods have been made available in more usable forms via options like SAS® Viya®, our distributed high-performance environment.
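To make the parallel concrete, here is a minimal sketch of what this can look like in practice: connecting to the SAS Viya in-memory engine (CAS) with the open source python-swat package and training a gradient boosting model on a distributed in-memory table. The server address, credentials, file and column names below are placeholders, not a real environment.

import swat

# Connect to the CAS (Cloud Analytic Services) in-memory engine.
# Host, port and credentials are hypothetical placeholders.
conn = swat.CAS('cas-server.example.com', 5570, 'username', 'password')

# Load a CSV file into a distributed in-memory table.
customers = conn.read_csv('customers.csv',
                          casout=dict(name='customers', replace=True))

# Load the decision tree action set and train a gradient boosting model
# directly on the in-memory table.
conn.loadactionset('decisionTree')
result = conn.decisionTree.gbtreeTrain(
    table=dict(name='customers'),
    target='churn',
    inputs=['age', 'tenure', 'monthly_spend'],
    nominals=['churn'],
    nTree=100,
    casOut=dict(name='gbt_model', replace=True)
)
print(result)

conn.close()

The details of the call matter less than the principle: the data stays distributed in memory across the CAS workers while the algorithm runs, which is where the hydrofoil-like speed-up comes from.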


Giving our customers the chance to fly over the water

As I watched that surfer getting smaller and smaller towards the horizon, I had a feeling that what I had just seen might be a metaphor for what we provide for our customers. We want to give them the chance to reach areas of their data lakes that were out of reach before, just as the surfer was able to get nearer the mouth of the bay. We allow them to see features of their customer base that were inaccessible and give them the opportunity to process these findings with tremendous speed, before safely returning to a point from where they can start new analyses.


About Author

Gerhard Svolba

Principal Solutions Architect

Dr. Gerhard Svolba is an Analytic Solutions Architect and Data Scientist at SAS Institute in Austria. He is involved in a wide variety of analytics and data science projects across business domains such as demand forecasting, analytical CRM, risk modelling and production quality. His project experience ranges from business and technical design through data preparation to analytical modelling in different industries. He is the author of the SAS Press books Data Preparation for Analytics Using SAS, Data Quality for Analytics Using SAS and Applying Data Science: Business Case Studies Using SAS. As a part-time lecturer, he teaches data science methods at the Medical University of Vienna, the University of Vienna and at universities of applied sciences. You can also find his contributions on GitHub and Twitter.

