The New Stack Update

ISSUE 157: The Machine-Learning Dark Horse

Talk Talk Talk

“The traditional market for VM security is really a market that’s been defined by taking legacy technologies that were built for on-premises data centers and traditional kind of server datacenter endpoint protection.”

John Morello, chief technology officer of Twistlock.
Add It Up
Security Integration Throughout Software Development Lifecycle Is a Pipe Dream
Risk and vulnerability management is the top reason to implement security throughout the software development lifecycle (SDLC), but the second most common reason is improving code quality, according to the DevSecOps Community Survey 2019, which was completed primarily by people in development, DevOps and architect roles. However, even these motivations do not appear to be enough to get security automation integrated into the development process.
What's Happening

In a recent podcast, Alex Williams, founder and editor-in-chief of The New Stack, spoke with John Morello, chief technology officer of Twistlock, about the stack-level security required for VMs as well as for cloud native deployments and service meshes.

Twistlock Brings Container-Native Security to VMs

The Machine-Learning Dark Horse

One of the problems that machine learning poses to the whole IT pundit community is that it can be very difficult to predict how much it will change the field. Unlike a single technology, such as gRPC, which makes a very clear value proposition (lower network latency for microservices), AI has the power to disrupt entire fields, rendering traditional measurements of “value” obsolete.

For example, who could have ever guessed that AI would change the field of semiconductor fabrication? But last week, The New Stack Science Correspondent Kimberly Mok explained how a team of researchers from the Massachusetts Institute of Technology (MIT), Russia’s Skolkovo Institute of Science and Technology, and Singapore’s Nanyang Technological University are showing that it is indeed possible to push semiconductor materials to their limits — by using artificial intelligence to help predict and control these small-scale modifications.

This approach, Mok pointed out, could dramatically streamline the silicon design process, “providing a more efficient and accurate method of determining the precise amount of strain and the ideal physical configuration, thus reducing the number of complex calculations that are needed. The team believes that a tool such as this could help experts discover new ways to ‘tune’ existing materials for future innovations in microelectronics, optoelectronics, photonics, and energy technologies.”

One story we are covering closely at TNS is how Kubernetes and related cloud native technologies can expedite the machine learning life cycle. Machine learning involves an entire cycle of technologies that are still very early in their productization: data must be harvested and cleansed, models must be tested, and the most useful models must be pressed into production, with a feedback loop of some sort to ensure the models can be updated. This week, Mary Branscombe takes a look at some of the architectures being built up to support this cycle.

“Managing the complexity of these pipelines is getting harder, especially when you’re trying to use real-time data and update models frequently,” she writes. “There are dozens of different tools, libraries and frameworks for machine learning, and every data scientist has their own particular set that they like to work with, and they all integrate differently with data stores and the platforms machine learning models run in.”
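The life cycle described above — harvest, cleanse, train, evaluate, feed back — can be sketched as a minimal, framework-agnostic loop. Every name below is a hypothetical illustration, not the API of any particular tool Branscombe covers; real pipelines delegate these stages to orchestrators and ML frameworks.

```python
# A toy sketch of one turn of the machine learning life cycle.
# All functions are hypothetical illustrations of the stages,
# not any specific framework's API.

def harvest(source):
    """Pull raw records from a data source."""
    return list(source)

def cleanse(records):
    """Drop incomplete records before training."""
    return [r for r in records if r.get("label") is not None]

def train(records):
    """'Train' a trivial model: predict the majority label."""
    labels = [r["label"] for r in records]
    return max(set(labels), key=labels.count)

def evaluate(model, records):
    """Fraction of records the model labels correctly."""
    hits = sum(1 for r in records if model == r["label"])
    return hits / len(records)

# One turn of the loop: data flows in, a model is trained, and the
# evaluation score feeds the decision about whether to promote it.
raw = [{"label": "ok"}, {"label": "ok"}, {"label": None}, {"label": "bad"}]
clean = cleanse(harvest(raw))
model = train(clean)
score = evaluate(model, clean)
```

The point of the sketch is the shape of the cycle, not the model: each stage hands a well-defined artifact to the next, which is exactly the boundary where the pipeline tools Branscombe describes plug in.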

Scytale Launches SPIFFE-Based Service Identity Management

SPIFFE, which stands for Secure Production Identity Framework For Everyone, is an open source framework for authenticating service identity across microservices and servers. It uses SPIRE (SPIFFE Runtime Environment) as a central server for handling identity. The SPIFFE/SPIRE project became a Cloud Native Computing Foundation sandbox project last March. Now Scytale, the founding sponsor of the SPIFFE/SPIRE projects, is launching Scytale Enterprise, a cloud-based subscription service built on SPIFFE/SPIRE.
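At the heart of SPIFFE is the SPIFFE ID, a URI of the form spiffe://<trust-domain>/<workload-path> that names a workload. A minimal sketch of splitting one apart might look like this; note the real SPIFFE specification imposes further rules (no port, no userinfo, character restrictions) that this illustration omits.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id):
    """Split a SPIFFE ID into (trust_domain, workload_path).

    A minimal check of the documented shape only:
    spiffe://<trust-domain>/<path>. Not a full spec validator.
    """
    parts = urlparse(spiffe_id)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError("not a SPIFFE ID: %s" % spiffe_id)
    return parts.netloc, parts.path

# The trust domain identifies the issuing authority (e.g. a SPIRE
# server); the path identifies the workload within it.
trust_domain, path = parse_spiffe_id("spiffe://example.org/billing/payments")
```

In practice workloads do not parse IDs by hand; they receive SPIFFE Verifiable Identity Documents (SVIDs) from a SPIRE agent, and peers verify them cryptographically.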

Humanity vs. Clippy: Lessons from Microsoft’s Failed Virtual Assistant

Clippy shimmered back into view last month, a forgotten ghost from the 1990s. As one of Microsoft’s earliest virtual assistants — and one of its most spectacular failures — the animated talking paperclip lives on in the memories of Twitter users (especially Microsoft employees). But Clippy’s life offers some interesting insights into how we humans interact with our technology.

How Ticketmaster Used Kubernetes Operators to Fill a DevOps Gap

Kubernetes Operators have enabled each individual project team at Ticketmaster, a leading live entertainment company, to run its own specific instance of Prometheus. “We’re running full steam ahead with Kubernetes and Prometheus. Those are the biggest Cloud Native Computing Foundation (CNCF) projects that we’re adopting,” said Tim Nichols, vice president, technology platform, at Ticketmaster.
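The pattern behind Operators like the one Ticketmaster uses is a reconciliation loop: continuously compare the desired state declared in custom resources against the observed cluster state, and emit the actions that close the gap. Here is a minimal sketch of that pattern in Python; the resource names and specs are hypothetical, and real Operators are typically written in Go against the Kubernetes API.

```python
def reconcile(desired, observed):
    """Return the actions needed to move observed state toward
    desired state: the core loop of every Kubernetes Operator."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

# Hypothetical desired state: one Prometheus instance per project team.
desired = {"team-a-prom": {"replicas": 2}, "team-b-prom": {"replicas": 1}}
observed = {"team-a-prom": {"replicas": 1}}
actions = reconcile(desired, observed)
```

Because the loop is driven entirely by declared state, each project team can own its own Prometheus resource and the Operator converges the cluster to match, which is the DevOps gap the article describes being filled.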

On The Road
Cloud Foundry Summit North America // APRIL 03, 2019 // PHILADELPHIA, PA @ PENNSYLVANIA CONVENTION CENTER


Join Cloud Foundry technical and community leaders to discuss how tools, workflows and an inclusive community can make it easier for those who are building the future. Register now!
The New Stack Makers podcast is available on: Pocket Casts, Stitcher, Apple Podcasts, Overcast, Spotify and TuneIn.

Technologists building and managing new stack architectures join us for short conversations out on the tech conference circuit. These are the people defining how applications are developed and managed at scale.
Free Serverless Ebook

Experts and visionaries in distributed systems believe serverless technologies are the next evolution of application infrastructure beyond microservices. Leading-edge companies have already embraced serverless and have dramatically reduced operational overhead and streamlined the DevOps cycle, while increasing scalability and resiliency. Still, there are many challenges to serverless adoption, such as operational control, complexity and monitoring.

The New Stack’s Guide to Serverless Technologies will help practitioners and business managers place these pros and cons into perspective by providing original research, context and insight around this quickly evolving technology. 

Download The Ebook
We are grateful for the support of our ebook sponsors:

Copyright © 2019 The New Stack, All rights reserved.

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list