AI Needs a Little "Reinforcement Learning"
In a recent InfoQ podcast interview, Phil Winder, CEO of Winder Research, elucidated an important distinction between machine learning and reinforcement learning, a subset of ML with some unique characteristics. Whereas a plain-vanilla ML model makes its best guess at the right answer based on the historical data it has at hand, a reinforcement learning system instead learns by trial and error, adjusting its behavior based on feedback from its environment in pursuit of a longer-term goal.
Winder offers a great example: a robot trying to walk through a maze. Using a standard ML model, the robot may become stuck in a dead-end pathway that ends very close to the exit. The ML model drives the robot to get as close as possible to the exit, which is actually harmful in this case. "You get trapped in these dead ends that are almost optimal but not quite optimal," Winder said. A reinforcement learning bot, working within a larger context, would be smart enough to backtrack and find the exit.
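To make the contrast concrete, here is a minimal sketch, not drawn from the interview itself, of how the two approaches can diverge on a toy maze: a bot that greedily minimizes its distance to the exit parks itself in a dead end, while a tabular Q-learning bot, rewarded only for actually reaching the exit, learns to take the longer way around. The maze layout, rewards and hyperparameters below are illustrative assumptions.

```python
import random

random.seed(0)  # reproducible runs

# Toy maze: '#' walls, 'S' start, 'E' exit. The top corridor dead-ends
# two cells from the exit -- "almost optimal but not quite optimal".
MAZE = [
    "########",
    "#....#E#",
    "#.####.#",
    "#.####.#",
    "#.####.#",
    "#S.....#",
    "########",
]
ROWS, COLS = len(MAZE), len(MAZE[0])
START = next((r, c) for r in range(ROWS) for c in range(COLS) if MAZE[r][c] == "S")
EXIT = next((r, c) for r in range(ROWS) for c in range(COLS) if MAZE[r][c] == "E")
ACTIONS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left


def step(state, action):
    """Move into the neighboring cell if it is open; otherwise stay put."""
    r, c = state[0] + action[0], state[1] + action[1]
    return (r, c) if MAZE[r][c] != "#" else state


def greedy_walk(max_steps=50):
    """A 'get as close to the exit as possible' bot: always take the move that
    minimizes Manhattan distance to the exit. It parks at the dead end because
    every move away from it looks worse."""
    state, path = START, [START]
    for _ in range(max_steps):
        state = min((step(state, a) for a in ACTIONS),
                    key=lambda s: abs(s[0] - EXIT[0]) + abs(s[1] - EXIT[1]))
        path.append(state)
        if state == EXIT:
            break
    return path


def q_learning(episodes=500, alpha=0.5, gamma=0.95, epsilon=0.2):
    """Tabular Q-learning: each step costs -1 and reaching the exit pays +10,
    so the learned values reflect whole trajectories, not single moves."""
    q = {}
    for _ in range(episodes):
        state = START
        for _ in range(200):
            if random.random() < epsilon:  # explore
                a = random.randrange(len(ACTIONS))
            else:                          # exploit current estimates
                a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
            nxt = step(state, ACTIONS[a])
            reward = 10.0 if nxt == EXIT else -1.0
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            target = reward + (0.0 if nxt == EXIT else gamma * best_next)
            q[(state, a)] = q.get((state, a), 0.0) + alpha * (target - q.get((state, a), 0.0))
            state = nxt
            if state == EXIT:
                break
    return q


def follow_q(q, max_steps=50):
    """Roll out the learned policy greedily from the start."""
    state, path = START, [START]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
        state = step(state, ACTIONS[a])
        path.append(state)
        if state == EXIT:
            break
    return path


if __name__ == "__main__":
    greedy = greedy_walk()
    print("distance-greedy bot reaches exit:", greedy[-1] == EXIT)  # stuck in the dead end
    learned = follow_q(q_learning())
    print("Q-learning bot reaches exit:", learned[-1] == EXIT,
          "in", len(learned) - 1, "moves")
```

The difference comes down to what is being optimized: the greedy bot scores each individual move by how close it gets to the exit, while the Q-learning bot is scored on the eventual outcome of a whole trajectory, so moving away from the exit to backtrack out of the dead end can still be the highest-value choice.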
In a sense, the field of AI as a whole, bereft of a larger context, is caught in one of these dead ends. AI models today are rife with biases. One need only look at Microsoft’s failed Tay experiment, the infamous Twitter bot of a few years back that was easily influenced by people talking to it in racist and sexist ways.
One of the guidelines of “Fair AI” research, a movement that seeks to eliminate bias in AI systems, is to incorporate a larger context into these models. Researchers should account for how users will react to the results of an AI model and fold those findings back into the model itself, argue Microsoft AI researcher Danah Boyd and her colleagues in a well-known 2019 Association for Computing Machinery paper, "Fairness and Abstraction in Sociotechnical Systems".
"Certain assumptions will hold in some social contexts but not others," the researchers assert. And knowing these social contexts is key.
This is a lesson that Google, for one, may not have wanted to hear. While we do not definitively know why Google AI chief Jeff Dean fired AI ethics researcher Timnit Gebru last December, the paper at the center of the controversy — “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” — argues that Google’s practice of indiscriminately collecting masses of data from its users results in racial, sexual and other biases polluting the models it then uses for search and other services.
“Large datasets based on texts from the internet overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalized populations,” Gebru and her co-authors write. Without a way to place all this information within a larger understanding, Google becomes a bot stuck in its own cultural maze.