Teamwork succeeds when it is built on mutual trust and respect, and that becomes complicated when the team pairs humans with autonomous systems. Humans build trust by observing teammates and gaining insight into their skills, experience, and reliability. Autonomous systems, unfortunately, are not yet capable of giving real-time feedback as conditions change. The machines lack something humans rely upon: awareness. A human can discern competence from incompetence; a machine cannot.
To aid the growth of human-machine partnerships, the Defense Advanced Research Projects Agency (DARPA) is launching the Competency-Aware Machine Learning (CAML) program. CAML strives to produce machine learning systems capable of continuously assessing their own performance in time-critical, dynamic situations and communicating those assessments to their human counterparts.
Jiangying Zhou, a program manager in DARPA’s Defense Sciences Office, says, “If the machine can say, ‘I do well in these conditions, but I don’t have a lot of experience in those conditions,’ that will allow a better human-machine teaming. The partner can then make a more informed choice.” Improving these dynamics could make human-machine partnerships more efficient and effective.
On the flip side, Zhou also recognizes the limits of state-of-the-art autonomous systems, which cannot communicate or assess their own competency in changing situations. She says, “Under what conditions do you let the machine do its job? Under what conditions should you put supervision on it? Which assets, or combination of assets, are best for your task? These are the kinds of questions CAML systems would be able to answer.”
Using autonomous car technology as a hypothetical example, Zhou posed a situation in which a rider must choose between two self-driving cars for a nighttime excursion. With a CAML system, one car might communicate that, at night and in the rain, it can distinguish between a person and an inanimate object with 90 percent accuracy and has completed this task more than 1,000 times. The second car might communicate that it can distinguish between a person and an inanimate object with 99 percent accuracy, but has performed the task fewer than 100 times. With this information, the rider could make a much more informed decision.
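One way to see why the experience count matters alongside the raw accuracy is to attach a confidence interval to each car's reported success rate. The sketch below uses the Wilson score interval, a standard statistical tool that is our own illustration rather than anything specified by CAML; the 90-percent/1,000-run and 99-percent/sub-100-run figures come from the article, and the assumption that the second car's record is exactly 99 successes in 100 trials is hypothetical.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for an observed success rate."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / trials + z**2 / (4 * trials**2)
    )
    return center - margin, center + margin

# Car A: 90% accuracy over 1,000 runs (figures from the article)
lo_a, hi_a = wilson_interval(successes=900, trials=1000)

# Car B: 99% accuracy over fewer than 100 runs
# (hypothetically, 99 successes in exactly 100 trials)
lo_b, hi_b = wilson_interval(successes=99, trials=100)

print(f"Car A: 95% CI ({lo_a:.3f}, {hi_a:.3f}), width {hi_a - lo_a:.3f}")
print(f"Car B: 95% CI ({lo_b:.3f}, {hi_b:.3f}), width {hi_b - lo_b:.3f}")
```

Under these assumptions, the second car's interval is noticeably wider than the first's, capturing in numbers what a CAML-style self-report conveys in words: a higher headline accuracy backed by far less experience carries more uncertainty.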