There is an important distinction between machine learning and reinforcement learning, a subset of ML that has some unique characteristics.
The New Stack Update

ISSUE 257: AI Needs a Little "Reinforcement Learning"

Talk Talk Talk

“We asked the question: ‘What if every Jira ticket (feature) is a git branch so every branch can be in a separate environment?’”

Add It Up
Public Cloud Usage at Enterprises

The numbers seem too good to be true. Google Cloud Platform is used by 49% of enterprise respondents surveyed for Flexera’s “2021 State of the Cloud Report,” up from 20% in the 2019 study. Oracle Cloud Infrastructure similarly skyrocketed from 16% to 32%. However, Amazon Web Services (AWS) and Microsoft Azure are still atop the public cloud pack.

All the major cloud providers saw increased adoption, and most are also seeing increased spending. That does not mean they are actually the customer’s first choice for their specific multicloud architecture. More than three-quarters of enterprises surveyed use AWS and Microsoft Azure, and the customers usually run significant workloads with them.

In contrast, fewer than half of those using VMware on AWS, IBM Public Cloud, and Oracle clouds say they use the vendor for significant workloads. Google Cloud is in the middle. It gets more than a niche of workloads, yet still doesn’t capture as much as the two market leaders.

What's Happening

This episode of The New Stack Makers podcast series with Okta explores database and authentication requirements for securing mobile applications.

MongoDB Senior Product Manager for Mobile Ian Ward and Okta Senior Security Architect Aaron Parecki are guests for this podcast, which was hosted by Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at Okta, an identity and access management company.

Okta Series - Mobile Security Dev, a Database and Authentication POV

AI Needs a Little "Reinforcement Learning"

In a recent InfoQ podcast interview, Phil Winder, CEO of Winder Research, elucidated an important distinction between machine learning and reinforcement learning, a subset of ML that has some unique characteristics. Whereas a plain-vanilla ML model makes its best guess at the right answer from the historical data it was trained on, a reinforcement learning agent instead learns by trial and error, interacting with its environment and adjusting its behavior to maximize a long-term reward.

Winder offers a great example: a robot trying to walk through a maze. Using standard ML models, the robot may become stuck in a dead-end pathway that ends very close to the exit. The ML model drives the robot to get as close as possible to the exit, which is actually harmful in this case. "You get trapped in these dead ends that are almost optimal but not quite optimal," Winder said. A reinforcement learning bot, by contrast, working within the larger context of the whole maze, would be smart enough to backtrack and find the exit.
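The maze example can be sketched with tabular Q-learning, the simplest form of reinforcement learning. This is an illustrative toy, not Winder's implementation: the maze layout, rewards and hyperparameters below are all assumed for the demo. A greedy "move toward the exit" policy would press against the wall next to the goal; because every move costs a little, the Q-learner discovers that the long detour pays off.

```python
import random

random.seed(0)

# Toy maze (layout assumed for this demo): '#' = wall, '.' = open cell.
# The start sits one wall away from the goal; the only route loops down
# and around, so a purely greedy "get closer" policy stalls at the wall.
MAZE = ["#####",
        "#S#G#",
        "#.#.#",
        "#...#",
        "#####"]
START, GOAL = (1, 1), (1, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; bumping into a wall leaves the state unchanged."""
    r, c = state[0] + action[0], state[1] + action[1]
    if MAZE[r][c] == "#":
        r, c = state
    done = (r, c) == GOAL
    return (r, c), (1.0 if done else -0.01), done  # small cost per move

# Tabular Q-learning with an epsilon-greedy exploration policy.
Q = {}
alpha, gamma, eps = 0.5, 0.9, 0.2
q = lambda s, a: Q.get((s, a), 0.0)

for _ in range(500):  # training episodes
    s, done = START, False
    for _ in range(100):
        if done:
            break
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: q(s, a))
        s2, reward, done = step(s, a)
        # Nudge Q(s, a) toward reward + discounted best future value.
        best_next = max(q(s2, b) for b in ACTIONS)
        Q[(s, a)] = q(s, a) + alpha * (reward + gamma * best_next - q(s, a))
        s = s2

# Greedy rollout: the learned policy takes the long way around to the exit.
s, path = START, [START]
for _ in range(20):
    s, _, done = step(s, max(ACTIONS, key=lambda a: q(s, a)))
    path.append(s)
    if done:
        break
print(path)
```

The per-move penalty is what makes backtracking rational: a policy that lingers near the walled-off goal accumulates cost forever, while the detour reaches the +1 reward in six moves.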

In a sense, the field of AI as a whole, bereft of a larger context, is caught in one of these dead ends. AI models today are rife with biases. One need only look at Microsoft’s failed Tay experiment, the infamous Twitter bot of a few years back that was easily manipulated by people talking to it in racist and sexist ways.

One of the guidelines of “Fair AI” research, a movement that seeks to eliminate bias in AI systems, is to incorporate a larger context into these models. Researchers should account for how users will react to the results of an AI model, and fold those findings back into the model itself, argue Microsoft AI researcher Danah Boyd and her colleagues in their well-known 2019 Association for Computing Machinery paper, "Fairness and Abstraction in Sociotechnical Systems".

"Certain assumptions will hold in some social contexts but not others," the researchers assert. And knowing these social contexts is key.

This is a lesson that Google, for one, may not have wanted to hear. While we do not know definitively why Google AI chief Jeff Dean fired AI ethics researcher Timnit Gebru last December, the paper at the center of the controversy — “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” — argues that Google’s practice of indiscriminately collecting masses of data from its users results in racial, sexual and other biases polluting the models it then uses for search and other services.

“Large datasets based on texts from the internet overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalized populations,” Gebru and her co-authors write. Without a way of placing all this information within a larger understanding, Google becomes a bot stuck within its own cultural maze. 

Why Disaster Happens at the Edges: An Introduction to Queue Theory

“When it comes to IT performance, amateurs look at averages. Professionals look at distributions,” advises Avishai Ish-Shalom, a developer advocate at ScyllaDB, in a post that offers insights on choosing the right metrics for evaluating the success of your systems.
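The point about averages versus distributions is easy to demonstrate. The sketch below simulates a latency distribution with a heavy tail (the numbers are invented for illustration): the mean looks comfortable, while the 99th percentile, where queueing disasters live, tells a very different story.

```python
import random

random.seed(42)

# Simulated request latencies in ms (numbers invented for illustration):
# 99% of requests are fast, 1% land in a heavy queueing tail.
latencies = [random.expovariate(1 / 10) for _ in range(9900)] \
          + [random.expovariate(1 / 500) for _ in range(100)]

def percentile(data, p):
    """p-th percentile via the nearest-rank method."""
    s = sorted(data)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

mean = sum(latencies) / len(latencies)
p50, p99 = percentile(latencies, 50), percentile(latencies, 99)
# The average looks healthy; the tail, where user pain concentrates, does not.
print(f"mean={mean:.1f}ms  p50={p50:.1f}ms  p99={p99:.1f}ms")
```

A dashboard showing only the mean would hide the tail entirely, which is exactly the failure mode the article warns about.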

Microsoft Open Sources the Power Fx Language for Customizing Logic in Low-Code Apps

Microsoft has formally released Power Fx, an open source language for "low code" programming that’s based on Excel formulas. It is a strongly typed, declarative, functional language in which developers can also use imperative logic and state management when they need to. Microsoft is hoping that, in the future, more programs will be written by the business experts who know what the software needs to do, rather than by developers working from requirements documents.

Maiot: Bridging the Path to ML Production

The Munich-based startup Maiot has released ZenML, an extensible, Python-based open source tool for creating machine learning pipelines. It’s designed to address problems such as versioning data, code, configuration and models; reproducing experiments across environments; establishing a reliable link between training and deployment; and tracking the metadata and artifacts that each run produces.
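To make those problems concrete, here is a minimal sketch of what a pipeline tool of this kind tracks. The class and method names are hypothetical, invented for illustration, and are not ZenML's actual API: the idea is that a run's configuration and every step's output artifact get fingerprinted, so an experiment can be reproduced and compared across environments.

```python
import hashlib
import json

class Pipeline:
    """Hypothetical sketch of a versioned ML pipeline (not ZenML's API)."""

    def __init__(self, config):
        self.config = config
        # Hash the config so any change in parameters yields a new version.
        self.metadata = {"config_hash": hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()).hexdigest()[:12]}
        self.steps = []

    def step(self, fn):
        """Decorator registering a function as a pipeline step."""
        self.steps.append(fn)
        return fn

    def run(self, data):
        artifact = data
        for fn in self.steps:
            artifact = fn(artifact, self.config)
            # Fingerprint each intermediate artifact so a run can be
            # reproduced and compared across environments.
            self.metadata[fn.__name__] = hashlib.sha256(
                repr(artifact).encode()).hexdigest()[:12]
        return artifact, self.metadata

pipe = Pipeline({"threshold": 0.5})

@pipe.step
def clean(rows, cfg):
    return [r for r in rows if r is not None]

@pipe.step
def score(rows, cfg):
    return [r for r in rows if r >= cfg["threshold"]]

result, meta = pipe.run([0.9, None, 0.2, 0.7])
print(result)  # rows surviving cleaning and thresholding
```

Tying the config hash and per-step artifact hashes together in one metadata record is what gives the "reliable link between training and deployment" the blurb describes.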

On The Road
March 23-25 // Virtual
SoloCon will bring together experts to speak about their use of enterprise and open source technologies. Some of the topics covered include Service Mesh Management, WebAssembly, and Cloud Native API Management. Register Now!
The New Stack Makers podcast is available on: Pocket Casts — Stitcher — Apple Podcasts — Overcast — Spotify — TuneIn

Technologists building and managing new stack architectures join us for short conversations out on the tech conference circuit. These are the people defining how applications are developed and managed at scale.
Best of DevSecOps: Trends in Cloud Native Security Practices

This is the first in a new series of anthologies that assemble some of our best articles on a trending subject, paired with our editors’ insightful analysis to frame the bigger picture. These exclusive ebooks help developers, architects, operators and management go in-depth, quickly, on hot topics in at-scale development and management.

In this ebook, we explore how security practices are now being integrated into the development process, as well as the build pipeline and runtime operations of cloud native applications. You’ll learn more about:

  • How DevSecOps enables faster deployment cycles.
  • Why DevSecOps is necessary for cloud native architectures.
  • The challenges and benefits of DevSecOps practices.
  • The new role of developers and operators in security.
  • How to measure DevSecOps success.
  • Tools and best practices for adoption.
  • Emerging trends to pay attention to.
Download Ebook
We are grateful for the support of our ebook sponsor:

Copyright © 2021 The New Stack, All rights reserved.

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list

Email Marketing Powered by Mailchimp