Question: does the brain's way of learning to see have anything in common with how we train our networks?

Data augmentation is crucial for training our networks on relatively little data. But what about animal brains? Naturally, animals learn from video, while moving, while shaking at the micro-scale. What if we fixed their bodies to avoid the shaking? What if we fixed their bodies and showed them only still pictures, never anything moving? Would they still be able to learn to see? How much longer would it take?
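For concreteness, here is a minimal sketch of the kind of augmentation pipeline meant above (assuming PyTorch/torchvision; the specific transforms and parameters are illustrative choices, not from this post). Random crops and jitter play a role loosely analogous to the micro-scale shaking and movement an animal sees.

```python
# Minimal sketch, assuming torchvision is installed; transform choices are illustrative.
from torchvision import transforms

# Augmentations loosely analogous to what an animal experiences while looking around:
# small reframings (micro-movements), mirroring (viewpoint changes),
# and lighting variation (moving through a scene).
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # slight reframing / "shaking"
    transforms.RandomHorizontalFlip(),                      # viewpoint variation
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # lighting variation
    transforms.ToTensor(),
])

# Each epoch the network sees a different random variant of the same image,
# which is how a small dataset is stretched into many training views.
```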
We cannot measure individual neuron activity, but maybe we can understand how animal brains learn by presenting them with much more limited training data: still pictures instead of video, and possibly only a limited class of pictures. We could pair the classes with positive or negative reinforcement (food, electric shocks) so that correctly classified pictures elicit a response from the brain strong enough to be measured.
