The New Stack Update

ISSUE 197: Welcome to WASM

Talk Talk Talk

“You have to do the culture changes before you start doing the technology. I absolutely reject the idea that you adopt the technology and it makes the culture changes. The tech fails because the org wasn’t ready for it.”


Tom Petrocelli, Amalgam Insights

Add It Up
Machine learning model deployment timeline

Creating and deploying machine learning (ML) models supposedly takes too much time. Quantifying the problem is difficult, not least because so many job roles are involved in a machine learning pipeline. With that caveat, let us introduce Algorithmia’s “2020 State of Enterprise ML.” Conducted in October 2019, the survey found that 63% of its 745 respondents have already developed and deployed a machine learning model into production. Forty percent of companies said it takes more than a month to deploy an ML model into production, 28% do so in eight to 30 days, and only 14% manage it in seven days or less.

We believe Algorithmia’s estimate is much closer to reality than the one in a Dotscience survey earlier in the year, which found that 80% of respondents’ companies take more than six months to deploy an artificial intelligence (AI) or ML model into production. That data point is misleading because it includes respondents who are still evaluating use cases and are in the process of deploying their first ML model. Of course, that is itself a substantial concern: 78% of AI or ML projects that involve training an AI model stall at some point before deployment, according to another 2019 survey of 277 data scientists and AI professionals, conducted by data labeling company Alegion.

What's Happening

This much can be said with a reasonably high degree of certainty: organizations are deploying applications in hybrid environments that mix legacy datacenters with, often, multiple cloud services, while the open source business models that allow them to do so are changing. In this context, NoSQL remains a safe bet for database management in today’s deployments, especially multicloud environments, said Alvin Richards, chief product officer of Redis Labs.

In this podcast recorded live in Las Vegas during Amazon Web Services’ (AWS) re:Invent 2019, Richards discussed how databases and open source have evolved and how Redis Labs has adapted its NoSQL offering and business model along the way. He was joined by Rajat Panwar, chief technology officer of HolidayMe, an online travel agency and Redis Labs customer.

Redis Labs on Why NoSQL is a Safe Bet

Welcome to WASM

Last week, the World Wide Web Consortium (W3C) approved WebAssembly as a full-fledged web standard, joining HTML, CSS and JavaScript. In effect, WebAssembly vastly broadens web developers’ palette: they can write code in their favorite programming language (assuming it’s C++ or Rust for now, though more languages will be supported in the future) and compile it for a stack-based virtual machine that runs in any browser.
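To make that concrete, here is a minimal sketch in Rust (the function and build command are illustrative examples, not from the W3C standard itself): the same source code runs natively or can be compiled for WebAssembly’s stack-based virtual machine by switching the compilation target.

```rust
// A tiny function that compiles unchanged to WebAssembly.
// Building for the browser would use a WASM target, e.g.:
//   rustc --target wasm32-unknown-unknown --crate-type cdylib add.rs
// `#[no_mangle]` keeps the symbol name visible to JavaScript callers.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Running natively gives the same result the browser's WASM VM would.
    println!("{}", add(2, 3)); // prints 5
}
```

In the browser, JavaScript would instantiate the compiled module and call the exported `add` just like any other function.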

WebAssembly, or WASM for short, has attracted so much attention that people are now looking at uses for it outside the browser. For instance, the service mesh experts over at Solo.io are harnessing the technology to make it easier to extend the functionality of the Envoy data plane proxy. You could extend Envoy before, Solo.io founder Idit Levine told us, but only by merging your code into Envoy’s core codebase, which meant you had to recompile each time you updated Envoy. Also, if your code was broken, it would break Envoy itself. Sad emoji. Instead, you can now write your functionality as a WASM program and share it on Solo.io’s new WebAssembly Hub, where other users can deploy it as well. Pretty neat, huh?

For the upcoming holiday season, the TNS newsletter will be taking a week off. So don’t expect our weekly missive to pop into your email folders next Friday, though we will be back the week after, ready to charge forward to cover cloud native computing for 2020. 

Xen Project Hypervisor 4.13 Extends Support for Embedded Systems, AMD’s EPYC

The Xen Project is releasing the latest version of its open source Xen Project Hypervisor. Version 4.13 reflects a wide array of contributions from both the community and ecosystem. This release also represents a fundamental shift for the long-term direction of Xen toward more resilience against security threats from side-channel attacks and hardware-related issues.

BeyondProd: Google’s Internal Model for Securing Cloud Native Microservices

Following up on its influential BeyondCorp model for securing enterprise networks, Google has released another idealized security architecture, called BeyondProd, for securing microservices. It is based on the company’s own considerable expertise in wrangling untold numbers of services across millions of containers.

How AWS Fargate Turned Amazon EKS into a Serverless Container Platform

This is the third part of analyst Janakiram MSV’s four-part series examining the evolution of automated container services on Amazon Web Services. In this part, he takes a closer look at the way Amazon Elastic Kubernetes Service (EKS) is extended to support Fargate. He also explains how service discovery works between Fargate and EKS.

Party On

Nikita Jiandani of Lighthouse Labs and Myles Borins of Google pose for the camera while at Node+JS Interactive in Montreal.

The New Stack Makers podcast is available on: Pocket Casts, Stitcher, Apple Podcasts, Overcast, Spotify and TuneIn.

Technologists building and managing new stack architectures join us for short conversations at events on the tech conference circuit. These are the people defining how applications are developed and managed at scale.
Pre-register to get the Cloud Native Storage ebook in October.

How should developers connect cloud native workloads to storage? The New Stack’s ebook on cloud native storage takes this question to industry experts who are approaching the problem from three different perspectives: cloud native storage vendors, traditional storage vendors and the big-three cloud providers.

In this 48-page ebook, developers and DevOps professionals will learn:

  • Best practices and patterns for handling state in cloud native applications.
  • The storage attributes and data needs you should consider up front.
  • Storage options for containerized applications running in a microservices architecture on Kubernetes.
  • How operations roles change as developers gain the ability to provision storage.
  • And more.
Download Ebook
We are grateful for the support of our exclusive ebook sponsor:

Copyright © 2019 The New Stack, All rights reserved.
