By Paul Devlin (University of Dallas; STEM-Trek Blogger)
I started my day with the “Machine Learning in HPC Environments” workshop. The workshop moderator briefly recounted Alan Turing’s proposal of intelligent computers and how the term “machine learning” was coined in reference to teaching a computer to play games. ML developers still work hard to make computers good at games, but the moderator named some important changes that have occurred in the field since then. He pointed to better algorithms, big data, and more computing power as the primary drivers of the dramatic improvement in machine learning software in recent years. After all these huge advances, how can machine learning progress even further? By the end of the workshop, I was convinced that the answer is training parallelization.
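To give a sense of what training parallelization means in practice, here is a minimal sketch (my own illustration, not from the workshop) of the data-parallel idea: each worker computes a gradient on its own shard of a batch, and the averaged gradients drive one synchronized update of the shared model.

```python
def gradient(w, shard):
    # Gradient of mean squared error for a 1-D model y = w * x,
    # computed on one worker's shard of (x, y) pairs.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def parallel_step(w, batch, n_workers, lr=0.01):
    # Split the batch into one shard per worker.
    shards = [batch[i::n_workers] for i in range(n_workers)]
    # Each worker computes a local gradient (simulated sequentially here;
    # in a real HPC setting these run concurrently across nodes).
    grads = [gradient(w, s) for s in shards]
    # Average the local gradients and apply one synchronized update.
    return w - lr * sum(grads) / len(grads)

# Fit y = 3x from a few samples, using 4 simulated workers.
data = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = parallel_step(w, data, n_workers=4)
print(round(w, 2))  # converges toward 3.0
```

Real frameworks distribute the shards across GPUs or nodes and synchronize the gradient average with collective communication, but the arithmetic is the same as in this toy loop.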
The first keynote talk was by Dr. Azalia Mirhoseini from Google Brain. She discussed a novel architecture for deep neural networks …