A novel optoelectronic sensor capable of automatically compensating for sudden variations in light exposure is demonstrated, laying the groundwork for a new generation of edge-computing cameras that carry out computer vision tasks in a more energy- and computation-efficient way.
Artificial intelligence (AI) has advanced significantly over the past two decades, gaining traction in both industry and academia. Face recognition on Facebook or the iPhone, self-driving cars, and image recognition software are examples of AI applications that have become part of daily life. Computer vision, the subfield of AI concerned with enabling machines to gain a high-level understanding of images, such as pattern or object recognition, has consequently acquired particular importance. Convolutional neural networks (CNNs) have attracted a great deal of interest owing to their exceptional performance in image classification. Yet because conventional computer hardware is poorly suited to self-learning and massively parallel computation, even the most advanced software cannot presently endow machines with human cognitive skills, and several problems in computer vision remain unresolved.

One such challenge is object recognition under varying illumination conditions. In particular, changes in illumination can degrade CNN-based image segmentation, leading to erroneous object detection. Although this issue can be mitigated by using a larger CNN training set, the enormous computational and energy resources required to run a CNN continuously for always-on applications, such as surveillance or self-navigation, pose a significant challenge for battery-powered mobile systems. To address this long-standing issue, this thesis demonstrates a novel optoelectronic sensor that automatically compensates for sudden variations in light exposure, without the need for sophisticated object-detection software. With this method, effective fault-tolerant object detection may be developed with little training data, low energy consumption, and low computational cost.
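The effect of an exposure change, and what compensating for it amounts to, can be sketched with a toy example. This is purely illustrative and not the hardware mechanism described in this thesis: it models a sudden exposure change as a global gain on pixel intensities and shows that a simple per-image normalization (a common software-side mitigation) cancels such a gain exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.8, size=(8, 8))  # toy "scene" intensities
overexposed = image * 1.8                   # sudden global gain in exposure

def normalize(img):
    """Zero-mean, unit-variance normalization of pixel intensities."""
    return (img - img.mean()) / img.std()

# The raw images differ substantially...
raw_error = np.abs(image - overexposed).mean()

# ...but normalization removes the global gain entirely, since scaling
# every pixel by a constant scales the mean and standard deviation by
# the same constant.
norm_error = np.abs(normalize(image) - normalize(overexposed)).mean()

print(raw_error > 0.1)                 # raw images disagree
print(np.isclose(norm_error, 0.0))     # normalized images agree
```

A CNN fed the normalized representation would see the same input before and after the exposure change; the sensor proposed here aims to achieve an analogous invariance directly in hardware, avoiding the cost of running such preprocessing and larger training sets in software.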
Another prominent issue in computer vision is occlusion, which occurs when an object's key features momentarily vanish behind another body, making detection difficult for the computer. While the human brain can infer the hidden portions of a blocked object, computers lack these scene-interpretation skills. Typically, cloud computing with convolutional neural networks is the preferred way to manage such scenarios; however, cloud computing should be minimized in mobile applications where energy consumption and computational cost are critical. In this regard, a novel computer vision sensor is proposed that can effectively detect and track occluded objects at the hardware level without relying heavily on occlusion-management software. The mechanism underlying these smart optoelectronic sensors is discussed in detail in this thesis, laying the groundwork for a new generation of edge-computing cameras that carry out computer vision applications in a more energy- and computation-efficient way.