A new AI-based visual system model developed by researchers at the HUN-REN Wigner Research Centre for Physics not only represents a step forward in modelling brain function but could also make machine vision systems more reliable and precise, the HUN-REN Hungarian Research Network said on Tuesday.


The human brain is a network of densely interconnected regions linked by bidirectional connections, but the nature and role of these opposing connections remain far from clear, the researchers told MTI.

“When we see something, our brain processes information at multiple levels: from simple shapes to more complex concepts,” they said. Most current AI image-recognition systems, such as those that identify a dog in a phone photo, operate unidirectionally: information flows only from the bottom up. The human brain, however, works bidirectionally: the response of neurons at any given processing level is shaped not only by what has been processed at earlier stages but also by what happens at later ones.

“This means the brain always considers both the environment and the context: not just what we see, but what it means, for example, whether the dog we see is a friend or a foe, approaching or retreating. The neural code is determined not only by what has occurred before a given processing stage but also by what will happen in the next,” they said.
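The difference between one-way and bidirectional processing can be illustrated with a toy sketch in Python. This is not the Wigner model; the two-level network, the tied top-down weights, and the update rule are all assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-level network: low-level features feed a high-level representation.
W_up = rng.normal(size=(3, 5))  # bottom-up weights (low -> high)
W_down = W_up.T                 # top-down weights (high -> low), tied for simplicity

x = rng.normal(size=5)          # noisy "sensory" input

# Unidirectional pass: a single bottom-up sweep, no top-down context.
low = x.copy()
high = np.tanh(W_up @ low)

# Bidirectional pass: iterate, letting the high level's prediction
# feed back and refine the low-level representation.
for _ in range(20):
    high = np.tanh(W_up @ low)            # bottom-up: update the high-level code
    prediction = W_down @ high            # top-down: predict the low-level input
    low = low + 0.1 * (prediction - low)  # nudge the low level toward the prediction
```

In the bidirectional loop the low-level representation is no longer a fixed copy of the input: it is reshaped by what the higher level expects, which is the intuition behind "context" influencing early processing.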
The model developed by the HUN-REN Wigner researchers mimics this bidirectional flow of information, creating an AI system that not only sees but also interprets images, much as the human brain does. This approach could help uncover the neural processes of information processing more precisely and enable the development of more reliable and adaptable machine vision systems, they added. Traditional models cannot adapt flexibly to varied demands; for that, deep generative models are needed, the statement said.
The team’s findings have been published in the journal Nature Communications.
Source: XpatLoop