Neural networks could reveal how the brain understands time
EU-funded researchers have developed a framework for understanding how we record memories by creating a model that simulates the behaviour of single neurons across different timescales. The research could deepen our understanding of how human memory works and lead to advances in the burgeoning field of artificial neural networks, as well as in areas as far-reaching as genetics and ecology.
Neural networks have recently achieved human-level performance in areas such as speech and image recognition. However, we have a poor understanding of how these networks work and how best to train them to produce the desired results. A greater understanding could improve artificial neural networks, engineering systems inspired by the brain's structure. This technology has wide-reaching applications in areas as diverse as developing self-driving cars, predicting heart attacks and forecasting weather patterns.
Furthermore, we currently know very little about how the human brain understands time; building on our understanding in this area could help us treat illnesses such as dementia, which currently affects 1 in 6 people over the age of 80.
In terms of biophysics, a huge number of timescales constantly interact. An action potential of a neuron occurs on a millisecond timescale, while calcium dynamics occurring in the brain take seconds, and protein expression can take minutes. The consolidation of a memory, meanwhile, can take a day.
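The interplay of timescales described above can be illustrated with a toy simulation. The sketch below is purely illustrative and is not the project's actual model: it couples three leaky variables whose time constants loosely mirror the scales mentioned (spiking in milliseconds, calcium in seconds, protein expression in minutes), with each fast variable driving the next, slower one.

```python
# Illustrative toy model (an assumption for exposition, not the project's
# published equations): three coupled leaky variables with very different
# time constants, integrated with a simple Euler scheme.

dt = 0.001                                              # 1 ms step
tau = {"spike": 0.01, "calcium": 1.0, "protein": 60.0}  # seconds

def step(state, drive):
    """One Euler step: each variable relaxes toward its input at its
    own timescale and feeds the next, slower variable."""
    s, c, p = state
    s += dt / tau["spike"] * (drive - s)    # fast electrical activity
    c += dt / tau["calcium"] * (s - c)      # slower calcium dynamics
    p += dt / tau["protein"] * (c - p)      # slowest protein expression
    return (s, c, p)

state = (0.0, 0.0, 0.0)
for _ in range(10_000):                     # simulate 10 s of constant drive
    state = step(state, drive=1.0)

# After 10 s, the fast variable has long since equilibrated,
# while the slowest is still far from its steady state.
print(state)
```

Even in this crude sketch, a constant input produces behaviour that depends on which timescale you look at, which is the difficulty the project set out to formalise.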
"Likewise, if you think of behaviour, syllables can take milliseconds, a full sentence can take seconds, an interview will take several minutes and interaction via email can go on for many months. You have to integrate that information over several timescales," explains project coordinator Omri Barak of Technion in Israel.
The EU-funded Multiple Timescales project aimed to fill some of the gaps in our knowledge by investigating, on a very theoretical level, how neural systems interpret the concept of time.
From single neuron to network
To date, timescales have generally been regarded as constant in the context of research. The Multiple Timescales project changed this focus by recognising them as dynamic. Mathematically, timescales were treated as a dynamic variable in the project, resulting in its key finding: a model that provides a good approximation of single-neuron activity over prolonged timescales.
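To make the idea of a timescale as a dynamic variable concrete, here is a minimal sketch, under assumptions of my own choosing rather than the published model: a leaky integrator whose time constant is itself a state variable that drifts slowly with sustained activity, so a single simple unit responds differently over short and long stretches of input.

```python
# Hypothetical sketch: instead of fixing a neuron's time constant,
# treat it as a slowly evolving state variable. The specific update
# rules are illustrative assumptions, not the project's equations.

dt = 0.001  # 1 ms step

def simulate(inputs, tau0=0.05, tau_adapt=5.0):
    """Leaky integrator whose time constant slowly grows with
    sustained activity (a crude form of adaptation)."""
    v, tau = 0.0, tau0
    for x in inputs:
        v += dt / tau * (x - v)                              # fast membrane dynamics
        tau += dt / tau_adapt * (tau0 * (1 + abs(v)) - tau)  # tau drifts slowly
    return v, tau

inputs = [1.0] * 20_000            # 20 s of constant drive
v_final, tau_final = simulate(inputs)
print(v_final, tau_final)          # v near 1; tau has roughly doubled
```

The point of the sketch is the structure, not the numbers: once the timescale is a variable rather than a parameter, the same unit can integrate information over both brief and prolonged epochs.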
Although the work carried out during the Multiple Timescales project is basic science, undertaken with no target application and therefore unlikely to lead directly to new technologies, Barak says it takes us one step closer to cracking the complex problem of understanding human memory.
The project results have been presented at international conferences and were published in the peer-reviewed Journal of Neuroscience as well as Physical Review Letters.
Barak's work has led to further research on the inner workings of neural networks, for example on how these networks can be trained to have memory.
Researchers are now working on memory in machine learning, adaptation in cancer metastasis, and methods to analyse networks that arise in genetics and ecology. The approach involves developing mathematical tools to understand their dynamics.
The model arising from the Multiple Timescales project is proving to be the basis for research in a number of scientific fields. "This can go anywhere," says Barak. "I am seeing the same things happening both in the machine learning that is in your cell phone and in genetic networks and in your brain." The project findings are proving relevant to many different areas and are taking us in several different directions, demonstrating the value of investment in basic science.