The New Stack Update

ISSUE 152: AIOps is the Next Big Thing

Talk Talk Talk

“Open source plays a different role for different players of the ecosystem, but the best one is empowering developers. In a typical company, developers didn’t make a lot of software decisions, but that has changed in a big way.” 

Neha Narkhede, co-creator of Kafka.
Add It Up
Scope of current hardware capabilities

Where will AI/ML workloads be executed, and who should handle them? The industry-wide rising tide toward the public cloud is not a foregone conclusion, as we were reminded by a Micron-commissioned Forrester Consulting report that surveyed 200 business professionals who manage architecture, systems, or strategy for complex data at large enterprises in the US and China.

As of mid-2018, 72 percent analyze complex data within on-premises data centers and 51 percent do so in a public cloud. Three years from now, on-premises-only use will drop to 44 percent, while public cloud use for analytics will rise to 61 percent. Use of edge environments to analyze complex data sets will rise from 44 to 53 percent.

Those figures make a strong case for the cloud, but they do not say whether the complex data is related to AI/ML. Many analytics workloads deal with business intelligence (BI) rather than tasks that require high-performance computing (HPC) capabilities. And while not all AI/ML workloads fall into the HPC category, some do require hardware customized to maximize performance when training AI/ML models. Notably, early adopters of AI/ML have been relying more on the public cloud than on their own equipment.

Currently, 42 percent of respondents rely exclusively on third-party cloud providers’ hardware built to create AI/ML models, but only 12 percent will do so three years from now. Instead, a majority will use a combination of on-premises and public cloud. Many companies may have gone first to cloud providers because they wanted to launch AI/ML activities quickly. These same companies may migrate specialized workloads to on-premises environments to reduce costs as they scale up into production or work with proprietary data.

What's Happening

In this episode of The New Stack Makers podcast, we spoke with Tom O’Neill, co-founder and chief technology officer of Periscope Data, who oversees the company’s technology vision. Periscope Data is a platform for modern data teams.

Tom O'Neill of Periscope Data on What Data Scientists Do and Why You Care

AIOps is the Next Big Thing

Machine learning and artificial intelligence are coming to DevOps.

This week at The New Stack, TNS British correspondent Mary Branscombe wrote about a growing crop of vendors who are looking at ways to apply AI and ML capabilities to daily IT operations. With orchestration and monitoring playing such key roles in DevOps, using AI to support and even automate operations roles by delivering real-time insights about what’s happening in your infrastructure seems an obvious fit. Branscombe discusses companies such as The New Stack sponsor AppDynamics, which has developed a system to collect metrics and events from Java microservices running in AWS Lambda. Another product, Nastel’s AutoPilot, uses ML to correlate events and data from multiple systems across hybrid cloud, on-premises and mobile systems for applications. Other products she discusses include ScienceLogic’s S1 and BMC’s TrueSight.

In a contributed piece for us this week, Steve Burton, DevOps evangelist at a TNS sponsor, discussed how AIOps can help speed continuous delivery. “The bottom line is that using ML to assist and verify production deployments can dramatically reduce the amount of time it takes to identify and remove deployment errors,” he writes. We also discuss the concept with Burton on our weekly TNS Analyst podcast.

Expect a lot more coverage of AIOps from The New Stack as we prepare an ebook on the subject, which will be available later this year.

SuperGloo Unifies Management of Multiple Service Meshes

Solo, originally known for the hybrid app gateway Gloo, has come up with “multimesh” management software called SuperGloo. Like Gloo, it uses functions as the common denominator across a range of mesh technologies, including Linkerd, Istio, AWS App Mesh, HashiCorp Consul and more. Like Solo’s other projects, Gloo (built on the Envoy proxy) and Squash (a debugger for microservices and Kubernetes), SuperGloo is open source.

The Convergence of Object Storage and Cloud Native Technologies

This is a contributed piece from Red Hat’s Irshad Raihan, director of marketing for storage, that discusses efforts underway in the open source community to address Kubernetes storage blind spots. Initiatives like the recently announced Ceph Foundation, as well as open source projects and technologies like Rook and API interfaces, are addressing container storage gaps that Kubernetes has yet to fill. They tackle issues such as scalability, scale-out, and management of large datasets.

Automate (Or Else) for Speedy Cloud Deployments

In this sponsored post from Dynatrace, DevOps Activist Andreas Grabner discusses the velocity of DevOps in companies such as Amazon Web Services, and how these best practices can be replicated in your enterprise. Techniques include automating performance feedback and building feedback loops through trust.

Party On

Cheryl Hung, currently the director of ecosystem for the Cloud Native Computing Foundation, was already intent on working for Google as a young teenager.

While Google, with its obvious connection to containers and Kubernetes, represents a focal point in Hung’s career, she largely spoke of her life’s work before and after her stint at the search engine giant during a podcast recorded at KubeCon + CloudNativeCon 2018 in Shanghai.

Free Serverless Ebook

Experts and visionaries in distributed systems believe serverless technologies are the next evolution of application infrastructure beyond microservices. Leading-edge companies have already embraced serverless, dramatically reducing operational overhead and streamlining the DevOps cycle while increasing scalability and resiliency. Still, there are many challenges to serverless adoption, such as operational control, complexity and monitoring.

The New Stack’s Guide to Serverless Technologies will help practitioners and business managers place these pros and cons into perspective by providing original research, context and insight around this quickly evolving technology. 

Download The Ebook
We are grateful for the support of our ebook sponsors:

Copyright © 2019 The New Stack, All rights reserved.
