Introduction: Understanding the relationship between brain activity and behavior is a long-standing challenge in neuroscience, especially in soft, constantly deforming animals such as roundworms and jellyfish. To meet that challenge, researchers have developed AI tools that automate the tracking and identification of neurons in recordings of behaving animals.
Summary: Understanding how the brain’s activity corresponds to behavior requires the monitoring of individual neurons as they function—a daunting task, especially when the subject is a constantly moving roundworm or a deforming jellyfish. Researchers have created three advanced AI tools to overcome the “alignment and annotation” challenges in this field.
These tools—BrainAlignNet, AutoCellLabeler, and CellDiscoveryNet—can identify and track cells with up to 99.6% accuracy, even as the animals flex and shift. The advance eliminates months of manual labeling, enabling rapid, automated analysis and offering a new approach for decoding the nervous systems of moving animals.
Key Facts
- The “Wiggle” Problem: Tracking neurons in live animals is complicated due to the constant relative movement of various body parts, which alters the positions of brain cells.
- BrainAlignNet: This tool efficiently tracks cells throughout lengthy video series with a 99.6% accuracy rate, operating 600 times faster than earlier methods.
- AutoCellLabeler: This tool identifies specific cell types (e.g., the “NSM” neuron) with 98% accuracy, using images of worms carrying the multicolor NeuroPAL labeling system.
- CellDiscoveryNet: This innovative tool identifies and clusters cell types among different animals without requiring human training or guidance, achieving expert-level performance.
- Broad Application: Originally developed for the roundworm C. elegans, these tools have also been successfully adapted for tracking the more complex nervous system of the C. hemisphaerica jellyfish.
Source: Picower Institute at MIT
Understanding the connection between behavior and brain cell activity is a critical objective in neuroscience. Often, researchers opt for simple, transparent lab animals because their neurons can be fluorescently observed during various behaviors. However, visual access alone is inadequate.
Effectively tracking the position and identity of each neuron while the animals twist and turn during their intricate movements is a formidable challenge. In a new study published in eLife, MIT neuroscientists introduced three advanced AI-based tools designed to tackle this issue.
“In a live, behaving animal, we can now monitor neurons over time and accurately discern the identities of most neurons. This is crucial for our goal of linking brain activity to behavior,” remarked study senior author Steven Flavell, associate professor at The Picower Institute for Learning and Memory, MIT’s Department of Brain and Cognitive Sciences, and an HHMI Investigator.
The three tools work in unison: “BrainAlignNet” tracks cells across extended video sequences, “AutoCellLabeler” identifies cell types in each image after initial training on human-labeled examples, and “CellDiscoveryNet” identifies cell types autonomously, without any prior training.
The efficiency of these tools has significantly reduced the need for manual labeling in the lab, according to Flavell, suggesting a model for other labs to adopt when dealing with extensive image datasets—be it from human tissues or samples from other species.
“Researchers today are inundated with microscopy data,” Flavell noted. “The automatic identification of all cells in each image has become a pressing challenge for many.”
While Flavell’s team focuses on understanding brain function and behavior in C. elegans, they successfully applied BrainAlignNet to the jellyfish C. hemisphaerica in collaboration with colleague and co-author Brady Weissbourd. Weissbourd emphasized how this tool has greatly assisted in extracting neural activity data from videos of the jellyfish as they behave (albeit while gently restricted under a coverslip).
“They call it a jellyfish for a reason,” Weissbourd, an assistant professor in Biology and Brain and Cognitive Sciences, stated. “Any section can move independently from another. We’ve collected videos, but our key hurdle was understanding how to extract neural activity data because of the unpredictable movement of the neurons. This tool enabled us to align our videos so we could derive neural activity from them.”
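The alignment operation Weissbourd describes—resampling one frame along a displacement field so that each neuron lands at the same coordinates as in a reference frame—can be illustrated with a toy example. This sketch shows generic non-rigid warping, not the paper’s network: the blob “images” and the hand-set displacement field are invented for the demo, whereas BrainAlignNet’s job is to *predict* such a field from a pair of real images.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Toy single-channel "brain image": one bright blob on a dark background
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
fixed = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)

# "Moving" image: the same blob displaced, as if the head had bent
moving = np.exp(-((yy - 36) ** 2 + (xx - 28) ** 2) / 50.0)

# A hand-set displacement field (in the real pipeline, the network
# would predict dy, dx for every pixel from the image pair)
dy = np.full((h, w), 4.0)   # sample 4 pixels down ...
dx = np.full((h, w), -4.0)  # ... and 4 pixels left

# Warp the moving image by resampling it at the displaced coordinates
warped = map_coordinates(moving, [yy + dy, xx + dx], order=1)

# After warping, the moving image should closely match the fixed one
err_before = np.abs(fixed - moving).mean()
err_after = np.abs(fixed - warped).mean()
```

Once frames are registered this way, a fluorescence trace for each neuron can be read out at a fixed location, which is what makes downstream activity analysis possible.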
Bottlenecks Begone
Flavell’s lab faced a similar bottleneck in 2022 during critical studies of brainwide activity and the effects of serotonin on behavior. Researchers needed extensive training and up to five hours to manually label the neurons in each video recorded of the worms.
This was despite the use of a sophisticated four-color-channel barcoding system called NeuroPAL, which was initially developed at Columbia University. Faced with the immense time requirements and high cost estimates for outsourcing the task, lab members were feeling overwhelmed.
Yet, by the following week, Adam Atanas, the lead author and a former graduate student in Flavell’s lab, presented the first version of AutoCellLabeler.
Each of these tools builds upon existing neural network frameworks, which Atanas and co-authors optimized and refined to specifically tackle the alignment and annotation challenges. While some tools require training data, CellDiscoveryNet operates independently without needing such guidance.
Importantly, Flavell emphasized that the researchers did not have to direct the neural networks to focus on specific parameters (like cell colors, shapes, or positions) for them to function effectively. The networks could autonomously learn which image features led to successful task completion, such as aligning cells over time or determining cell identity.
Each tool addresses the challenges of “alignment and annotation” through unique methodologies, but they have all been fine-tuned to achieve high accuracy, the researchers report.
- BrainAlignNet efficiently resolves the alignment issue, asking, “Is the cell that was positioned here in this image now located over here in this subsequent image?” It operates 600 times faster than the lab’s previous method while aligning images with single-pixel-level precision, yielding 99.6% tracking accuracy.
- AutoCellLabeler identifies each cell type in a given image, questioning, “Is this the neuron ‘NSM’ in this image?” This tool requires some initial training from human-annotated data but still performs well with incomplete labeling, achieving 98% accuracy with NeuroPAL data.
- CellDiscoveryNet can align and categorize fluorescently labeled cell types across various animals, asking, “Is this neuron in worm A the same cell type as this neuron in worm B?” This tool operates without supervision or training, matching the performance of skilled human annotators.
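The cross-animal matching problem CellDiscoveryNet addresses can be sketched loosely in a few lines. This is not the paper’s method—the network learns its own image features—but it illustrates the core idea: if each cell type carries a distinctive multi-spectral fingerprint (here, invented 4-channel “barcodes” in the spirit of NeuroPAL), then an optimal one-to-one assignment between two animals recovers which cell in worm B corresponds to which cell in worm A.

```python
from itertools import product

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Hypothetical multi-spectral barcodes: 20 cell types, each with a
# distinctive 4-channel fluorescence fingerprint (values are invented)
n_cells = 20
codes = np.array(list(product([0.0, 0.5, 1.0], repeat=4)))[:n_cells]

# Animal B shows the same cell types in shuffled order, plus imaging noise
perm = rng.permutation(n_cells)
animal_b = codes[perm] + rng.normal(scale=0.03, size=(n_cells, 4))

# Cost matrix: distance between every (animal A, animal B) pair of cells
cost = np.linalg.norm(codes[:, None, :] - animal_b[None, :, :], axis=-1)

# Optimal one-to-one assignment (Hungarian algorithm)
row, col = linear_sum_assignment(cost)

# col[i] is the animal-B cell matched to animal-A cell i; a perfect
# match undoes the shuffle exactly
accuracy = (perm[col] == np.arange(n_cells)).mean()
```

The hard part, which this toy skips, is producing comparable fingerprints from raw images of differently posed animals in the first place—that is what the deep network learns to do without human labels.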
Despite these advancements, Flavell and Weissbourd acknowledged there is still work to be done. Weissbourd is focused on labeling all the cells in the jellyfish; the present study covered just one cell type, which makes up about 10% of the total. He is also developing a microscope capable of imaging jellyfish as they swim freely.
Alongside Atanas, Flavell, and Weissbourd, the study included contributions from authors Alicia Kun-Yang Lu, Brian Goodell, Jungsoo Kim, Saba Baskoylu, Di Kang, Talya Kramer, Eric Bueno, Flossie Wan, and Karen Cunningham.
Funding: This research was supported by funding from various organizations, including the National Institutes of Health, the National Science Foundation, the McKnight Foundation, The Alfred P. Sloan Foundation, The Howard Hughes Medical Institute, and the Freedom Together Foundation.
Key Questions Answered:

Q: Why is it so hard to track neurons in a living, moving animal?

A: Picture trying to keep track of a single grape in a bowl of Jell-O while someone shakes it—that’s akin to what scientists experience when attempting to follow a neuron in a moving worm. Previously, a single video could take a human expert five hours to label manually. This new AI accomplishes the task in seconds.

Q: Do these tools only work for worms and jellyfish?

A: Although these specific tools were designed for transparent lab animals like worms and jellyfish (where every neuron is visible), the underlying AI framework can be adapted for any extensive series of microscopic images, including samples from human tissues.

Q: Can the AI identify cell types without any human training?

A: Yes. One of the tools, CellDiscoveryNet, can autonomously group and identify cell types across varied animals without any human involvement or training, effectively unveiling the organization of the nervous system.
Editorial Notes:
- This article was edited by a Neuroscience News editor.
- Journal paper reviewed in full.
- Additional context added by staff.
About this AI and neuroscience research news
Author: David Orenstein
Source: Picower Institute at MIT
Contact: David Orenstein – Picower Institute at MIT
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Deep neural networks to register and annotate cells in moving and deforming nervous systems” by Adam A. Atanas, Alicia Kun-Yang Lu, Brian Goodell, Jungsoo Kim, Saba N. Baskoylu, Di Kang, Talya S. Kramer, Eric Bueno, Flossie K. Wan, Karen L. Cunningham, Brady Weissbourd, and Steven W. Flavell. eLife
DOI:10.7554/eLife.108159.2
Abstract
Deep neural networks to register and annotate cells in moving and deforming nervous systems
Aligning and annotating the heterogeneous cell types that comprise complex cellular tissues remains a significant hurdle in biomedical imaging data analysis.
This study introduces a series of deep neural networks designed for automatic non-rigid registration and cell identification, focusing on freely moving and deforming invertebrate nervous systems.
A semi-supervised learning approach was utilized to train a Caenorhabditis elegans registration network (BrainAlignNet) capable of aligning pairs of images of the bending C. elegans head with single-pixel-level accuracy. Integrating this network into an image analysis pipeline enables neuron tracking over time with a 99.6% accuracy rate. Remarkably, this network can also be adapted for aligning neurons in the jellyfish Clytia hemisphaerica, which has a significantly different body plan and movement dynamics.
A separate network (AutoCellLabeler) was trained to identify over 100 neuronal cell types in the C. elegans head based on multi-spectral fluorescence from genetic markers. This network labels more than 100 different cell types per animal with an impressive 98% accuracy, surpassing individual human labeler performance by consolidating knowledge from manually annotated datasets.
Finally, a third network (CellDiscoveryNet) was developed to perform unsupervised discovery of over 100 cell types in the C. elegans nervous system; by analyzing multi-spectral imaging data from various animals, it can automatically identify and annotate cell types without any human labels. The performance of CellDiscoveryNet closely matched that of trained human annotators.
These tools are poised to be valuable for a wide array of biological applications and should be adaptable for numerous contexts requiring the alignment and annotation of dense heterogeneous cell types within complex tissues.