Abstract Summary
Machine vision systems are increasingly used for smart city applications such as infrastructure condition monitoring and vehicle compliance detection. While facilitating fast decision-making at scale, these systems can easily be repurposed for unintended uses that are potentially harmful to stakeholders. To restrict the usage of such systems, it is important to draw a clear boundary around what they are capable of doing and to safeguard them for intended uses only. The challenge, however, is twofold: 1) state-of-the-art machine vision systems are black boxes whose behaviors are unintelligible to humans, and 2) it is often unclear what a system should know, which is essential for limiting its usage. This project aims to develop human-in-the-loop methods and tools for understanding the capabilities of machine vision systems and safeguarding their usage. We consider humans both as computational agents who interpret machine behaviors and as domain experts and stakeholders who specify requirements for what a system should know. We also consider computational methods vital tools for helping humans reason about what a system knows beyond what it should know. As the outcome of the project, we envision a rejection mechanism that safely rejects the output of a machine vision system when the system leverages input information beyond its intended capability.
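As a purely illustrative sketch and not the proposal's actual design, such a rejection mechanism can be pictured as a wrapper that abstains from returning a prediction when an input appears to fall outside the system's intended scope or when the model is insufficiently confident. All names below (guarded_predict, in_scope_score, the thresholds, and the toy stand-ins) are hypothetical.

```python
# Hypothetical illustration of a reject-option wrapper: withhold the model's
# output when an auxiliary "in-scope" score or the prediction confidence is low.

from dataclasses import dataclass
from typing import Callable, Optional, Sequence, Tuple


@dataclass
class GuardedOutput:
    label: Optional[str]   # None when the prediction is rejected
    rejected: bool
    reason: str


def guarded_predict(
    predict: Callable[[Sequence[float]], Tuple[str, float]],
    in_scope_score: Callable[[Sequence[float]], float],
    x: Sequence[float],
    scope_threshold: float = 0.5,
    confidence_threshold: float = 0.8,
) -> GuardedOutput:
    """Reject the output if the input seems out of scope or the model is unsure."""
    if in_scope_score(x) < scope_threshold:
        return GuardedOutput(None, True, "input outside intended capability")
    label, confidence = predict(x)
    if confidence < confidence_threshold:
        return GuardedOutput(None, True, "low prediction confidence")
    return GuardedOutput(label, False, "accepted")


if __name__ == "__main__":
    # Toy stand-ins for a vision model and an out-of-scope detector.
    toy_model = lambda x: ("pothole", 0.92 if sum(x) > 1 else 0.4)
    toy_scope = lambda x: 0.9 if len(x) == 3 else 0.1

    print(guarded_predict(toy_model, toy_scope, [0.5, 0.4, 0.3]))  # accepted
    print(guarded_predict(toy_model, toy_scope, [0.1, 0.1]))       # rejected: out of scope
```

In this sketch the in-scope score stands in for whatever human-specified capability requirements and computational checks the project develops; the point is only that rejection is decided before the model's output is released.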