Google’s What-If Tool And The Future Of Explainable AI
The rise of deep learning has been defined by a shift away from transparent, human-written code toward sealed black boxes whose creators have little understanding of how or even why they yield the results they do. Concerns over bias, brittleness and flawed representations have fueled growing interest in “explainable AI,” in which frameworks interrogate a model’s internal workings to shed light on precisely what it has learned about the world and help its developers nudge it toward a fairer and more faithful internal representation. As companies like Google roll out a growing stable of explainable AI tools like its What-If Tool, perhaps a more transparent and understandable deep learning future can help address the limitations that have slowed the field’s deployment.
Since the dawn of the computing revolution, the underlying programming that guided those mechanical thinking machines was provided by humans through transparent and visible instruction sets. While the complexity of today’s software can produce myriad interaction effects that yield behaviors outside the scope of a programmer’s intent, at the end of the day it is a human who fully expresses and understands the choices their software is designed to make and who can modify it to address changing conditions.
In contrast, deep learning systems outsource the codification process, handing the design work off to algorithms that return a sealed black box that, from the outside, meets the design requirements, but whose internal workings are largely unknown. Such systems can unwittingly encode unexpected bias, since even the most carefully curated data can still carry traces of demographic and other traits that can be inferred from combinations of other variables. They are also extremely brittle, since the contours of their encoded worldview are not visible, meaning a system that performs with human-like fluency can abruptly and unexpectedly degrade into gibberish when a single word it had incorrectly learned as a key variable is changed. The internal world representations these systems learn can run afoul of even the most carefully curated data, as machines discover the predictive power of spurious artifacts in their training data that their human creators overlook.
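The spurious-artifact failure mode described above can be sketched in a few lines. Below is a minimal, hypothetical illustration (synthetic data, scikit-learn logistic regression as a stand-in for any learned model): one feature is a weak genuine signal, a second is an artifact that happens to track the label almost perfectly in the training set but is pure noise at test time. The model leans on the artifact and its accuracy collapses once that correlation breaks.

```python
# Minimal sketch of a model learning a spurious training artifact.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)

true_signal = labels + rng.normal(0, 1.0, n)   # weak genuine predictor
artifact = labels + rng.normal(0, 0.1, n)      # near-perfect artifact of this dataset
X_train = np.column_stack([true_signal, artifact])

model = LogisticRegression().fit(X_train, labels)

# At test time the artifact no longer tracks the label: it is pure noise.
test_labels = rng.integers(0, 2, n)
X_test = np.column_stack([test_labels + rng.normal(0, 1.0, n),
                          rng.normal(0, 0.1, n)])

train_acc = model.score(X_train, labels)
test_acc = model.score(X_test, test_labels)
print(f"train accuracy: {train_acc:.2f}")  # near-perfect, thanks to the artifact
print(f"test accuracy:  {test_acc:.2f}")   # degrades sharply once the artifact breaks
```

Inspecting the fitted coefficients shows the artifact dominating the genuine signal, which is exactly the kind of hidden dependency that tools like the What-If Tool are meant to surface by letting developers perturb inputs and watch predictions shift.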