
Summer School on Theoretical Neuroscience

August 4th, 2015 / in Research News / by Helen Wright

The following is a guest post from Christos Papadimitriou, the C. Lester Hogan Professor in the Computer Science Division of the EECS Department at the University of California, Berkeley.

Every summer, Berkeley’s Redwood Center for Theoretical Neuroscience organizes a ten-day summer course on Neuroscience, bringing to Berkeley several dozen young researchers (graduate students and postdocs) from all walks of science with a serious interest in learning about Neuroscience, and especially about techniques for mining and modeling neuroscience data. This year, the Simons Institute and the Mathematical Sciences Research Institute were, for the first time, co-organizers of the course, in order to attract more computer scientists and mathematicians to this important field. The program included one day of lectures by Vitaly Feldman and myself on the theory of computation, learning theory, and computational models of the brain by Valiant and, more recently, by Vempala and myself.

The Computing Community Consortium (CCC), which has recently been very active in promoting the emerging research interface between Computation and Brain Science, generously agreed to fund the CS component of the summer course.

We advertised the course to CS departments, and from a field of about a dozen applicants we selected four CS graduate students, who attended the course: Chihua Ma (UI Chicago), Yunjie Liu (UC Davis and Lawrence Berkeley Labs), Yu Liu (UC Davis), and Antonio Moretti (Columbia).

Below are their contributions, describing highlights of the summer course.

Chihua Ma:

I am a computer science PhD student in the Electronic Visualization Laboratory at the University of Illinois at Chicago. My research interests mainly focus on data visualization and human-computer interaction. I am currently collaborating with neuroscientists on the visualization of dynamic mouse brain networks, and I was therefore interested in learning more about neuroscience data and the methods used to analyze it. I learnt a lot from the neuroscience summer school in Berkeley. I was most impressed by the evening talk given by Prof. Jack Gallant and the lectures taught by Prof. Sonja Gruen.

Prof. Gallant gave a talk on modeling fMRI data to discover how the brain represents information about the world, as well as about its own mental states. We know that each cortical area represents information implicit in its input. Neuroscientists should first identify cortical areas of interest, and then determine what specific information is mapped across each area. Prof. Gallant argued that the biggest challenge in understanding brain computation is not finding a more effective encoding model or applying more powerful computations, but data measurement, and that is something I had not realized before. Once the data are collected, his group uses statistical tools and machine learning approaches to fit computational models to the brain data. Prof. Gallant showed us his amazing results, including his technique for reconstructing the images seen by a subject. His group has also developed a web-based interactive visualization called the brain viewer. I am quite interested in this visualization, since it is relevant to my current visualization project, which combines spatial and non-spatial structures. After his talk, Prof. Gallant told me he thought visualization would be a very important field in the future of Neuroscience.
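
For readers curious what “fitting computational models to the brain data” can look like in practice, here is a minimal sketch of a voxel-wise encoding model in Python. This is not the Gallant lab’s actual pipeline: the synthetic data, the feature and voxel counts, and the choice of ridge regression are all illustrative assumptions.

```python
# Minimal sketch of a voxel-wise encoding model: predict each voxel's
# response from stimulus features with regularized linear regression.
# Synthetic data and ridge regression are illustrative assumptions,
# not the Gallant lab's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 500, 50, 10

X = rng.standard_normal((n_stimuli, n_features))      # stimulus features
true_w = rng.standard_normal((n_features, n_voxels))  # hidden "tuning" weights
Y = X @ true_w + 0.5 * rng.standard_normal((n_stimuli, n_voxels))

# Fit on the first 400 stimuli; evaluate predictions on the held-out 100.
model = Ridge(alpha=10.0).fit(X[:400], Y[:400])
pred = model.predict(X[400:])

# Per-voxel prediction accuracy: correlation of predicted vs. observed.
for v in range(n_voxels):
    r = np.corrcoef(pred[:, v], Y[400:, v])[0, 1]
    print(f"voxel {v}: r = {r:.2f}")
```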

Prof. Gruen lectured on correlation analysis of parallel spike trains. From her class, I got a general idea of why the analysis of parallel spike trains is important, and learnt some basic concepts about spike-train statistics, pairwise comparison of spike trains, and multi-neuron spike patterns. Now I understand that neuroscientists need to record and analyze the activity of multiple neurons simultaneously to understand how the concerted activity of ensembles of neurons is related to behavior and cognition. I was especially impressed by an intersection-matrix representation designed by her team to visualize the detected spatio-temporal patterns. After class, I discussed with Prof. Gruen how important computer science is in studying parallel spike trains. She pointed out that identifying spatio-temporal patterns is very hard, since what to search for, and where to search for it, are both far from obvious. Prof. Gruen considers this a great challenge for both neuroscientists and computer scientists, and she hopes computer scientists will help in addressing it.
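
As a toy illustration of pairwise spike-train comparison (far simpler than the methods Prof. Gruen described, and not taken from her lectures), one can bin two spike trains and correlate the binned counts; the rates, duration, and 50 ms bin width below are arbitrary assumptions.

```python
# Toy pairwise comparison of two spike trains: bin spike times into
# counts and compute the Pearson correlation of the binned counts.
# Rates, duration, and bin width are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
duration, rate = 10.0, 20.0  # seconds, spikes per second

# Two Poisson-like spike trains sharing a common set of spike times,
# so they are correlated by construction.
n_half = int(0.5 * rate * duration)
common = rng.uniform(0, duration, size=n_half)
train_a = np.sort(np.concatenate([common, rng.uniform(0, duration, size=n_half)]))
train_b = np.sort(np.concatenate([common, rng.uniform(0, duration, size=n_half)]))

bins = np.arange(0.0, duration + 0.05, 0.05)  # 50 ms bins
counts_a, _ = np.histogram(train_a, bins)
counts_b, _ = np.histogram(train_b, bins)

print(f"binned spike-count correlation: {np.corrcoef(counts_a, counts_b)[0, 1]:.2f}")
```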

Yunjie Liu:

Intelligence is a computational problem, as Jeff Hawkins says in his book “On Intelligence”. Neuroscientists are interested in experimenting on and collecting data from areas of the brain and analyzing individual neuron spiking, hoping that neuron behavior can guide us to how neural circuits and the entire brain work. Despite many fruitful explorations of single- and multi-neuron spike-train analysis, of the linear and nonlinear transformation functions of neurons, and of the mountains of neuroscience data accessible nowadays, there are few productive theories of how the brain functions as a whole. Of course, the brain is made of a network of neurons, but what makes a network of neurons brain-like? ‘Neurons that fire together wire together’ is Hebb’s maxim. Perhaps capturing and modeling the behavior of a population of neurons could be the first step toward theories of whole-brain function. The brain has roughly 85 billion neurons, whose connectivity is as complex as one could imagine. The dynamics of information propagation between them are quite a mystery, almost magic, and so are the spatial and temporal scales of neuronal interconnections.
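
Hebb’s maxim has a standard textbook formalization: the weight between two neurons grows in proportion to their joint activity. The sketch below is my own minimal illustration, with an arbitrary learning rate and random sparse activity, not a model from the course; note that a pure Hebbian rule grows weights without bound, which is why practical models add normalization or decay.

```python
# Minimal Hebbian learning sketch: connections between co-active
# neurons are strengthened ("fire together, wire together").
# Network size, learning rate, and activity statistics are
# illustrative assumptions, not parameters from the course.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_steps, lr = 20, 1000, 0.01

W = np.zeros((n_neurons, n_neurons))
for _ in range(n_steps):
    x = (rng.random(n_neurons) < 0.1).astype(float)  # sparse binary activity
    W += lr * np.outer(x, x)  # strengthen weights between co-active pairs
np.fill_diagonal(W, 0.0)      # no self-connections

print("strongest learned connection:", W.max())
```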

The Earth’s climate is another complicated system that scientists are still working hard to understand. But nowadays researchers rely more and more on climate modeling to understand the complex feedback mechanisms between individual components and the behavior of the whole system. One cannot perturb the real-world climate system in order to test hypotheses, but doing so in a climate model is comparatively easy and quick. Would a conceptual neural-circuit model be a similarly helpful tool for understanding how the brain works?

Prof. Shea-Brown’s lectures on population coding and neural circuit models were quite interesting to me. In the population coding model he discussed, one of the key factors is the correlation between neurons: how, and on what scale, they correlate. Such correlations are mostly stimulus-driven: firing patterns and variations in correlation depend on the dynamics of the stimulus. But where do neurons group together, when do they react together, and how strong are their dependencies? From the information-propagation perspective, the activation of a downstream neuron depends on the correlation of its upstream neurons: low correlation usually cancels out fluctuations, which inhibits the downstream neuron, while high correlation acts in the opposite way. At the scale of the entire network, neurons are highly non-randomly connected; the network can instead be thought of as a stack of numerous small motifs (small connected units). This regular stacking pattern makes the whole network’s connections predictable. However, connectivity is not the whole story: connection strength also plays an important role in the correlation between neurons and in the propagation of signals to downstream neurons. Both the connections and their strengths can change over the course of the response to a dynamic stimulus, which makes the neural network even more magical. But what reasonable theory can model this, and do we even understand, from experimental data, what is going on? A computational model of neural circuits would, in my opinion, definitely help us understand the complexity of the network, and would, hopefully, stimulate a series of theories on how the brain computes.
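
The point about upstream correlation can be made concrete with a toy simulation: a threshold unit summing many binary inputs crosses its threshold far more often when the inputs are correlated, because independent fluctuations cancel while shared ones do not. The mixing scheme and all parameters below are my own illustrative assumptions, not Prof. Shea-Brown’s model.

```python
# Toy illustration: a downstream threshold unit fires more often when
# its upstream inputs are correlated. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_inputs, n_trials, threshold = 100, 10_000, 70

def downstream_rate(mix):
    # Each input copies a shared binary source with probability `mix`,
    # otherwise takes an independent private value; a larger `mix`
    # induces stronger correlation between the inputs.
    shared = rng.random(n_trials) < 0.5
    inputs = np.zeros((n_trials, n_inputs))
    for i in range(n_inputs):
        use_shared = rng.random(n_trials) < mix
        private = rng.random(n_trials) < 0.5
        inputs[:, i] = np.where(use_shared, shared, private)
    # The downstream unit fires when the summed input exceeds threshold.
    return np.mean(inputs.sum(axis=1) > threshold)

for mix in (0.0, 0.3, 0.9):
    print(f"shared-input fraction {mix}: downstream firing rate = {downstream_rate(mix):.3f}")
```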

If asked what the most important challenge in neuroscience is right now, I would vote for the need for neural-circuit models that can reasonably capture the connectivity of, and the correlations among, a population of neurons! If evolution is an optimization process, the brain must be a very finely tuned model, and the model must have started from somewhere simple.

Yu Liu:

I am currently a final-year Ph.D. candidate in the Networks Lab, Department of Computer Science, UC Davis. I was motivated to learn the key techniques and methods used in modern brain science and neural networks, and to apply them in my field to the design of our next generation of wireless networks. I am very happy with what I got out of this summer program, and here is what impressed me most in the lecture on “Image statistics” given by Professor Odelia Schwartz.

Sensory systems aim to form an efficient code by reducing the redundancies and statistical dependencies of their input. Starting from this argument, Dr. Schwartz discussed the problem of fitting a receptive field model to experimental data. When estimating a statistical model, one needs to go back and check that the model estimates match the statistical assumptions. She first discussed bottom-up scene statistics, efficient coding schemes, and the relation of linear transforms to visual filters. One could do PCA on visual scene data, but ICA (independent component analysis) gives much sparser representations, with components resembling the features detected by neurons in the early visual system. She showed us how information theory informs the study of visual images. She then discussed generative models, which are largely applied to understanding more complex visual representations and contextual effects, and she addressed nonlinearities such as complex cells and divisive normalization in a richer way. It is quite impressive how Dr. Schwartz uses a broad range of tools to study sensory systems at the neural level and ultimately to understand visual perception.
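
The PCA-versus-ICA contrast can be demonstrated with a small experiment. The sketch below uses synthetic sparse (Laplace-distributed) sources mixed linearly, a stand-in I chose for the natural image patches used in real analyses, and measures sparseness by excess kurtosis: ICA recovers the heavy-tailed sources, while PCA components are closer to Gaussian.

```python
# Sketch contrasting PCA and ICA on linearly mixed sparse sources.
# The synthetic Laplace sources stand in for natural image patches;
# excess kurtosis serves as a simple proxy for sparseness.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(4)
n_samples, n_sources = 5000, 8

S = rng.laplace(size=(n_samples, n_sources))    # sparse independent sources
A = rng.standard_normal((n_sources, n_sources)) # random mixing matrix
X = S @ A.T                                     # observed linear mixtures

pca_comp = PCA(n_components=n_sources).fit_transform(X)
ica_comp = FastICA(n_components=n_sources, random_state=0).fit_transform(X)

# Higher excess kurtosis indicates heavier tails, i.e. sparser components.
print("mean kurtosis, PCA components:", float(kurtosis(pca_comp).mean()))
print("mean kurtosis, ICA components:", float(kurtosis(ica_comp).mean()))
```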

Antonio Moretti:

I’m a first-year PhD student in the Computer Science department at Columbia University, interested in machine learning. I’m also affiliated with the Neural Engineering lab at the City College of New York (CCNY). As someone without much neuroscience background, I found my time at the Redwood Center at Berkeley to be a good introduction to some of the challenges of analyzing spike trains and to the techniques used to study how neurons represent information. I appreciated how both presenters and students drew on expertise from many different disciplines.

I think of the quote, often attributed to Dijkstra, that ‘computer science has no more to do with computers than astronomy has to do with telescopes or surgery has to do with knives.’ My time at the Redwood Center has given me a deeper appreciation for neuroscience as a field that both drives and is driven by the study of computation, and one that naturally has many parallels with other disciplines. I was particularly excited by Professor Jose Carmena’s talk on neuroprosthetics and sensorimotor learning. This is one of the coolest and most substantive applications of statistical machine learning, and something I hope to explore in depth during my time as a graduate student.
