The Machine-Learning Dark Horse
One of the problems that machine learning poses to the IT pundit community is that it is very difficult to predict how much it will change the field. Unlike a single technology such as gRPC, which makes a very clear value proposition (lower network latency for microservices), AI has the power to disrupt entire fields, rendering traditional measurements of “value” obsolete.
For example, who could have ever guessed that AI would change the field of semiconductor fabrication? But last week, The New Stack Science Correspondent Kimberly Mok explained how a team of researchers from the Massachusetts Institute of Technology (MIT), Russia’s Skolkovo Institute of Science and Technology, and Singapore’s Nanyang Technological University are showing that it is indeed possible to push semiconductor materials to their limits — by using artificial intelligence to help predict and control these small-scale modifications.
This approach, Mok pointed out, could dramatically streamline the silicon design process, “providing a more efficient and accurate method of determining the precise amount of strain and the ideal physical configuration, thus reducing the number of complex calculations that are needed. The team believes that a tool such as this could help experts discover new ways to ‘tune’ existing materials for future innovations in microelectronics, optoelectronics, photonics, and energy technologies.”
One story we are covering closely at TNS is how Kubernetes and related cloud native technologies can expedite the machine learning life cycle. Machine learning involves an entire cycle of technologies that are still very early in terms of productization: Data must be harvested and cleansed, models must be tested, the most useful models must be pressed into production, and a feedback loop of some sort must ensure the models can be updated. This week, Mary Branscombe takes a look at some of the architectures being built to support this cycle.
“Managing the complexity of these pipelines is getting harder, especially when you’re trying to use real-time data and update models frequently,” she writes. “There are dozens of different tools, libraries and frameworks for machine learning, and every data scientist has their own particular set that they like to work with, and they all integrate differently with data stores and the platforms machine learning models run in.”
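The life cycle described above — cleanse the data, test candidate models, promote the most useful one — can be sketched in a few lines. This is a minimal illustrative sketch, not any real pipeline framework; the stage names and the toy slope-only “model” are assumptions made up for the example.

```python
def cleanse(raw):
    """Harvest/cleanse stage: drop records with missing values."""
    return [r for r in raw if None not in r]

def evaluate(model, data):
    """Score a toy model (a slope m for y = m * x) by mean squared error."""
    return sum((y - model * x) ** 2 for x, y in data) / len(data)

def lifecycle(raw, candidates):
    """One pass of the cycle: cleanse, test each candidate, promote the best.

    In production this would loop, with monitoring feeding new data
    back in so the promoted model can be retrained and updated.
    """
    data = cleanse(raw)
    return min(candidates, key=lambda m: evaluate(m, data))

raw = [(1, 2), (2, 4), (None, 5), (3, 6)]
best = lifecycle(raw, candidates=[1, 2, 3])
print(best)  # the slope-2 model fits y = 2x exactly
```

Each stage here is a plain function, but in the architectures Branscombe describes, each would be a separate tool or service — which is exactly where the integration complexity she quotes comes from.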