The New Stack Update


Talk Talk Talk

“If you are still writing loops — you’re not a bad person. Just think about whether you need to write loops or if there’s a better alternative.”

Marco Emrich of IT consulting firm codecentric, speaking at this year's OSCON conference in Portland.
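Emrich's point can be sketched in a few lines of Python: the hypothetical loops below are not wrong, but each has a built-in alternative that states the intent more directly.

```python
# An explicit loop that accumulates results...
squares = []
for n in range(5):
    squares.append(n * n)

# ...versus a list comprehension that says what, not how:
squares_alt = [n * n for n in range(5)]

# A running-total loop can likewise become sum() over a generator:
total = sum(n * n for n in range(5))
```

Neither form is morally superior, which is the speaker's point: the question is whether the loop is the clearest way to express the computation.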
Add It Up
Future of Data Center Management
Rule of thumb: Don’t ask survey questions about long-term intentions. It is hard enough to know what will happen six months from now, let alone in six years. Yet rules are made to be broken. We are not writing about Vertiv’s “Data Center 2025: Closer to the Edge” report because it accurately predicts the future. Instead, the responses from over 800 data center professionals provide a glimpse of what 2025 may look like, show how buzzwords like “self-healing” have fared, and illuminate newer trends like “edge computing.”

Survey respondents believe that data center management and control is moving toward a self-optimizing future. When the longitudinal study was first conducted in 2014, 43% indicated “self-healing” would be key to the industry in 2025. Fast forward five years, and fewer than half as many share that opinion. While both terms refer to automation, there are differences. Self-healing systems can detect and resolve problems automatically. Self-optimization, in contrast, improves performance or reduces costs by, for example, routing a workload to a different cloud provider. Perhaps hands-on experience with vendor-promoted self-healing technology has created skepticism. Just as in the burgeoning market for AIOps, it is more realistic to incrementally optimize performance with automation tools that may or may not use artificial intelligence.
What's Happening

In this episode of The New Stack Makers podcast, we speak with two DigitalOcean alumni and co-chairs of the SREcon 2020 Americas conference, who have taken two very different journeys to one of the most in-demand roles in tech: site reliability engineer. As the name suggests, an SRE is someone focused on the reliability of an organization’s most important systems.

Emil Stolarsky is a frontend-turned-infrastructure engineer who built scriptable load balancers for Shopify, built an internal Kubernetes platform for DigitalOcean, and is now writing a book on how the enterprise SRE role can be adapted to smaller organizations. Tammy Bütow began with disaster recovery testing in banking over a decade ago, moved to DigitalOcean to work in incident response, and then joined Dropbox in an official SRE role. Finally, in 2017, she joined Chaos-as-a-Service provider Gremlin as its principal SRE.

The Evolution of the Site Reliability Engineer


We cover quite a bit on setting up and running continuous integration and continuous deployment (CI/CD) pipelines, which are necessary to ship code updates quickly and stay competitive. But many organizations are finding that additional steps are needed to truly get application code into production.

This week on The New Stack, two VMware engineers introduced a new concept, “continuous verification,” to help IT managers bridge this divide. In the post, Dan Illson and Bahubali (Bill) Shetti define continuous verification as “A process of querying external system(s) and using information from the response to make decision(s) to improve the development and deployment process.”

It’s a new term, but it addresses the age-old problem developers have of making sure the production environment they are deploying into is ready for their new program. The idea is that CV can augment the CI/CD process by moving many of the post-deployment steps into feedback loop(s) within the pipeline. The tools are already there: any monitoring or security tool that can report on its work by way of an API can fit into an existing pipeline (depending on how flexible the CI/CD server is at handling APIs, of course), Illson explained in this week’s The New Stack Context podcast, which will be posted later today.
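A minimal Python sketch of such a verification gate might look like the following. The endpoint, metric name and threshold are hypothetical, not from the article; the shape of the idea is simply "query an external system, use the response to decide whether the pipeline proceeds."

```python
# Hypothetical continuous-verification gate: a pipeline step queries a
# monitoring system's API and uses the response to decide whether a
# deployment should be promoted. Metric name and threshold are illustrative.

def should_promote(metrics: dict, max_error_rate: float = 0.01) -> bool:
    """Gate a pipeline stage on the error rate reported by monitoring.

    Fails closed: if the metric is missing, the build is not promoted.
    """
    return metrics.get("error_rate", 1.0) <= max_error_rate

# In a real pipeline, `metrics` would come from an HTTP call, e.g. to a
# (hypothetical) https://monitoring.example.com/api/metrics endpoint.
# Here we simulate two responses instead:
print(should_promote({"error_rate": 0.002}))  # healthy build: promote
print(should_promote({"error_rate": 0.05}))   # elevated errors: hold back
```

The "fail closed" default is the design choice that matters: a verification step that cannot reach its external system should block the pipeline, not wave the deployment through.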

Time will tell if CV becomes an accepted term, though the challenge of managing external resources is here with us today, and will need to be addressed in some fashion for cloud native computing to be truly programmable.

What Is Robotic Process Automation?

The ongoing push to make companies as efficient as possible helped create the need for a technology called Robotic Process Automation (RPA). RPA uses specialized software capable of evaluating data and handling it appropriately for the task at hand. The software includes artificial intelligence (AI) components that let computers take on high-volume, repetitive tasks previously done by humans. Unlike static tools that run through fixed steps, RPA software is dynamic: it looks for patterns in the input data and makes decisions about how to treat similar kinds of information in the future.

Pipe: How the System Call That Ties Unix Together Came About

It’s remarkable to remember that the Unix pipe, championed by Bell Labs’ Doug McIlroy, was implemented by Ken Thompson in a single day. It represents not only a great moment in computing history, but a uniquely important one for its profound impact on the culture of Unix. And it has changed the way we’ve programmed ever since. A look back from our Sunday cultural reporter David Cassel.
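For readers who have chained shell commands but never touched the primitive underneath, here is a minimal sketch of what a pipe is, using Python's `os.pipe`, which wraps the same system call: a pair of file descriptors where bytes written to one end come back out of the other.

```python
import os

# pipe() returns two file descriptors: a read end and a write end.
# Bytes written to the write end are read back from the read end,
# which is the plumbing behind shell pipelines like `who | sort`.
r, w = os.pipe()

os.write(w, b"hello, pipe\n")
os.close(w)              # closing the write end lets the reader see EOF

data = os.read(r, 1024)  # drain what was written
os.close(r)

print(data.decode())
```

In a real pipeline the two ends land in different processes (via `fork`), with each program blissfully unaware that its standard output is another program's standard input.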

Primer: How Kubernetes Came to Be, What It Is, and Why You Should Care

When Catherine Paganini took on a job running marketing for Kublr, an enterprise-grade Kubernetes platform, she needed to learn what Kubernetes was, which is a tall order for any professional walking into the cloud native landscape. She is a fast learner, however, and gathered what she learned into this excellent primer on what Kubernetes is, the value proposition it provides, and even the challenges it poses for the enterprise.

Party On

Enjoying the TNS 5th birthday: Co-owners Judy and Alex Williams with Emily Chin of GitLab.

Libations and merriment were had at the TNS 5th Anniversary!

Congratulations to Jim Perrin and Amye Scavarda Perrin who got married during OSCON last week. Cheers!

On The Road


Open Source Summit NA
Looking for the big picture when it comes to open source software? The Linux Foundation’s Open Source Summit offers developers and other technologists a base from which to collaborate, share information and learn about the latest open source technologies. Experts will be there from a wide range of disciplines, including networking, cloud native, edge computing, AI and many more. Get 15% off with code OSSNANWST19. Register now!
The New Stack Makers podcast is available on: Pocket Casts, Stitcher, Apple Podcasts, Overcast, Spotify and TuneIn.

Technologists building and managing new stack architectures join us for short conversations out on the tech conference circuit. These are the people defining how applications are developed and managed at scale.
Free Guide to Cloud Native DevOps Ebook

Cloud native technologies — containers, microservices and serverless functions that run in multicloud environments and are managed through automated CI/CD pipelines — are built on DevOps principles. You cannot have one without the other. However, the interdependencies between DevOps culture and practices and cloud native software architectures are not always clearly defined.

This ebook helps practitioners, architects and business managers identify these emerging patterns and implement them within an organization. It informs organizational thinking around cloud native architectures by providing original research, context and insight around the evolution of DevOps as a profession, as a culture, and as an ecosystem of supporting tools and services. 

Download The Ebook
We are grateful for the support of our ebook sponsors.

Copyright © 2019 The New Stack, All rights reserved.
