
CCN Brown Bag Series

Wednesday, November 15, 2017
  • Location: Wilson Hall • 111 21st Ave S • Nashville, TN 37240
  • Room: 115

Hojin Jang

Department of Psychology

Vanderbilt University

Comparison between humans and machines in object recognition under noisy conditions

In recent years, convolutional neural networks (CNNs) have drawn great attention for their remarkable performance on various visual cognitive tasks. It has even been reported that CNNs can now surpass human-level recognition performance. However, few studies have directly compared the recognition performance of humans and machines under noisy conditions, so it is unclear whether CNNs meet expectations in those settings. Notably, humans have the advantage of additional resources in the brain, such as top-down attention, that can suppress noise. In this talk, I will address three questions: (1) Which performs better in object recognition under noisy conditions, humans or machines? (2) How can we improve the robustness of machines to visual noise? and (3) Can machines make human-like decisions? We initially found that human object recognition performance was more robust to both Gaussian and Fourier noise than that of CNNs. Interestingly, CNNs were heavily impaired by Gaussian noise, while humans had greater difficulty with spatially structured Fourier noise. This discrepancy provides evidence that humans and state-of-the-art CNNs differ qualitatively in how they deal with noise. Additionally, we found that the robustness of CNNs can be significantly improved by simply adding noise variation to the input during the training phase, which suggests that noise invariance can be achieved through learning.
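The abstract does not specify the training setup, but the final point, adding noise variation to inputs during training, is a standard data-augmentation idea. A minimal sketch of Gaussian noise augmentation on an image batch (the function name, noise levels, and batch shape here are illustrative assumptions, not details from the talk) might look like:

```python
import numpy as np

def add_gaussian_noise(images, sigma, rng):
    """Add zero-mean Gaussian pixel noise to a batch of images in [0, 1].

    A common augmentation step: varying `sigma` across training batches
    exposes the network to a range of noise strengths.
    """
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixel values in valid range

rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32, 3))        # toy batch of 8 RGB images in [0, 1]
sigmas = [0.0, 0.05, 0.1, 0.2]            # illustrative noise levels
augmented = [add_gaussian_noise(batch, s, rng) for s in sigmas]
```

In practice each training batch would be perturbed with a randomly chosen noise level before being fed to the network, so the model never sees only clean inputs.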