figs added

This commit is contained in:
ackman678
2022-04-22 12:05:55 -07:00
parent c072c77f91
commit c7e3bd7067
2 changed files with 177 additions and 87 deletions


* [ ] 9. Distributions of component shape, vascular vs hyperellipsoid neural blobs, small domains, possible types of cellular localization of calcium src
* [ ] 10. Then leads to: Where the calcium flux comes from, multi spots, small comp in opp hemisphere from the larger singular src blobs, axon traj or max prob of tissue src origination
## Results
* [ ] Check when Fig. S6 is referred to in text

main.md

# Data-driven filtration and segmentation of mesoscale neural dynamics
**Authors and Affiliated Institutions:**
Sydney C. Weiser¹, Brian R. Mullen¹, Desiderio Ascencio², & James B. Ackman¹
these authors contributed equally to this work
¹Department of Molecular, Cell, and Developmental Biology, University of California Santa Cruz, Santa Cruz, CA, USA
²Department of Psychology, University of California Santa Cruz, Santa Cruz, CA, USA
Correspondence and requests for materials should be addressed to: james.ackman@gmail.com
## Abstract
In order to understand how information flows across cerebral networks we need to be able to record neuronal activity across the cortical hemispheres from awake behaving animals. One method recently developed in mice uses calcium imaging data to access information flow. However, analyzing video recordings of calcium imaging comes with challenges, including optical and movement-associated blood vessel artifacts and limited references for time series extraction. Here we present a data-driven workflow that isolates artifacts from calcium activity patterns, and segments independent functional units across the cortical surface using Independent Component Analysis (ICA). ICA utilizes the statistical interdependence of pixel activation to completely unmix signals from background noise, given sufficient spatial and temporal samples. We characterize each component using a set of extracted spatial and temporal metrics and develop the tools for building a supervised learning classifier that automatically separates neural activity and artifact components. We demonstrate that the performance of the machine classifier matches human identification of signal components in novel data sets. We also analyze and compare control data recorded in glial cell reporter and non-fluorescent mouse lines, validating human and machine identification of functional component class. We additionally utilize isolated signal components to produce segmentations of the cortical surface, unique to each individual's functional patterning. Time series extraction from these maps maximally represents the underlying signal in a highly compressed format. This workflow of data-driven video decomposition and machine classification of signal sources will aid robust and scalable mapping of complex cerebral dynamics. These improved techniques for data pre-processing, spatial segmentation, and time series extraction result in optimal signals for further analysis and model development.
## Introduction
Understanding cerebral dynamics at multiple scales is important for exploring how environmental and genetic influences give rise to altered neural connectivity patterns linked to behavioral phenotypes 8,9.
Optical techniques have long been used to monitor the functional dynamics in sets of neuronal elements ranging from isolated crustacean nerve fibers1 to entire regions of mammalian cerebral cortex 2,3. Imaging calcium flux with calcium sensors 4,5 allows neural activity monitoring across the entire neocortex with high enough spatiotemporal resolution to identify sub-areal networks of the neocortex 6,7. These techniques have the potential to map group function at unprecedented resolution and scale across the neocortical sheet in awake behaving mice; however identifying neural signals from calcium imaging sessions is challenging due to numerous confounding signal sources.
Further, we explore the resolution-dependent effect of signal extraction on ICA
To record neural activity patterns in the cortex of awake behaving adult mice, we transcranially imaged fluorescence from a mouse line that expresses the genetically encoded calcium indicator GCaMP6s in all neurons under the control of the Snap25 promoter 25. We expose and illuminate the cranium with blue wavelength light and capture emitted green light with an sCMOS camera at high spatial resolution (2160x2560 pixels, 5.5 megapixels; 6.9 µm/pixel). To observe the spatiotemporal properties of the recorded activity patterns, we crop the video to only neural tissue, and examine the change in fluorescence over the mean fluorescence (∆F/F) over time (Fig. 1A-B). In order to identify eigenvectors associated with artifacts and hemodynamic responses, similar data were recorded and processed in three sets of age-matched control mice: cx3cr1 GFP (microglia; mGFP), Aldh1l1 GFP (astrocyte; aGFP), and non-transgenic C57BL/6 (Bl6) mice.
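The ∆F/F computation described above can be sketched in a few lines. The array shape, the per-pixel-mean baseline choice, and all variable names here are illustrative assumptions rather than the authors' actual pipeline.

```python
import numpy as np

# Hypothetical stand-in for a cropped fluorescence movie: (frames, height, width).
# The real recording is 2160x2560 pixels; a tiny array keeps the sketch fast.
rng = np.random.default_rng(0)
raw = 100.0 + rng.standard_normal((50, 8, 8))

baseline = raw.mean(axis=0)          # per-pixel mean fluorescence F
dff = (raw - baseline) / baseline    # change over the mean: dF/F
```

With this baseline choice, the temporal mean of `dff` is zero at every pixel by construction, which is convenient for the mean-subtracted ICA step that follows.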
### ICA separates signal sources from high resolution data
A spatial ICA decomposition on a video, wherein the global mean was subtracted, produces a series of spatial components and a mixing matrix, which represents each component's influence at each frame in the video (Fig. 1C). The components are sorted by temporal variance and flipped so that they all represent positive spatial effects (see Methods). The independent components can be sorted into three major categories based on their spatiotemporal properties: neural components, artifact components, and noise components (not shown). Group analysis of these components is presented later in this manuscript.
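A minimal sketch of such a spatial ICA decomposition, using scikit-learn's FastICA on synthetic data; the component count, the variance sort, and the sum-based sign-flip rule are assumptions standing in for the paper's Methods, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical demeaned dF/F movie flattened to (frames, pixels); real data
# would be far larger.
rng = np.random.default_rng(0)
movie = rng.standard_normal((300, 400))
movie -= movie.mean(axis=0)

ica = FastICA(n_components=10, random_state=0, max_iter=1000)
# Spatial ICA: pixels act as samples, so the unmixed sources are spatial
# maps and the mixing matrix holds one time course per component.
spatial_maps = ica.fit_transform(movie.T)   # (pixels, components)
time_courses = ica.mixing_                  # (frames, components)

# Sort components by temporal variance, descending.
order = np.argsort(time_courses.var(axis=0))[::-1]
spatial_maps = spatial_maps[:, order]
time_courses = time_courses[:, order]

# Flip signs so each spatial map represents a positive effect (a simple
# proxy for the flip described in the Methods).
signs = np.where(spatial_maps.sum(axis=0) < 0, -1.0, 1.0)
spatial_maps *= signs
time_courses *= signs   # flip both factors so the reconstruction is unchanged
```

Because each sign flip is applied to both the spatial map and its time course, the product that reconstructs the movie is unchanged.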
Neural components represent a distinct area of cortical tissue, which we refer to as its cortical domain. The spatial morphology of these neural components can vary in both spatial extent and eccentricity. Occasionally neural components can also contain multiple domains, which have similar enough activation patterns to be identified as a single neural component. In the examples in Fig. 1C, the second neural component appears to represent a higher order visual network, with multiple domains on the left hemisphere, and a small mirrored domain on the right hemisphere.
<figure><img src="figs/methods-figure1.png" width="400px"><figcaption>
**Figure 1** Transcranial calcium imaging video data is separated into its underlying signal and artifact components, and can be rebuilt from only signal components for artifact filtration. A) Recording schematic and fluorescence image of transcranial calcium imaging preparation, cropped to cortical regions of interest. B) Sample video montage of raw video frames after dF/F calculation. C) ICA video decomposition workflow. A demeaned dF/F movie is decomposed into a series of statistically independent components that are either neural, artifact, or noise associated (not displayed). Each component has an associated time course from the ICA mixing matrix. Neural components can be rebuilt into a filtered movie (rICA). Alternatively, artifact components can be rebuilt into an artifact movie. Circular panels show higher-resolution spatial structure in the rightmost example components.
</figcaption></figure>
Artifact components can take many forms, including those from blood vessels, movement, and optical distortions on the imaging surface. The left two artifact examples (Fig. 1C) likely represent hemodynamics from the superior sagittal sinus vein, with the bottom artifact likely representing blood flow through the middle cerebral artery 26. A very high resolution map of the vessel patterns can be rebuilt from these components, with branching structures as small as 12 µm in diameter (shown in Fig. 1C). Noise components lack a spatial domain and have little to no temporal structure. Signal and artifact components can be sorted manually in a graphical user interface (Fig. S1) or with a machine learning classifier.
<figure><img src="figs/methods-figureS1.png" width="400px"><figcaption>
**Figure S1** A Tkinter-based graphical user interface (GUI) for browsing independent component analysis results. A) 15 independent components, ordered 60-74 by variance. Components displayed in grey are selected as artifact, either manually or using a machine learning classifier. A click on the display for any given component manually toggles its classification as signal or artifact associated. Components colored in the cool/warm colormap are signal associated; components colored in the black/white colormap are artifact associated. Buttons on the bottom panel control GUI movement through the dataset. The text panel at the bottom displays the index of the signal/noise cutoff. B) The component viewer displays additional temporal metrics about any given component. The top controls allow movement through the dataset by manual scrolling with (+/-) buttons, up/down keys, or by typing a desired component in the text box. The PC timecourse displays the mixing matrix timecourse extracted by ICA for the given component. The wavelet power spectrum is displayed in the bottom right, and an integrated wavelet or Fourier representation is available on the bottom left. 0.95 significance as estimated by the AR(1) red-noise null hypothesis is displayed as a dot-dash line. C) The domain map correlation page shows the Pearson cross-correlation value between a selected seed domain and every other domain detected on the cortical surface. The seed domain can be changed through the arrow keys, the (+/-) buttons, or by clicking on a different domain on the displayed domain map. D) The component region assignment page allows manual region assignment for each domain. After a region is selected from the menu on the right, each domain clicked on the domain map is assigned to that region.
</figcaption></figure>
Video data can be reconstructed using any combination of these components. In particular, a filtered video can be constructed by excluding all artifact components. The artifact movie can also be reconstructed to verify that desired signal was not removed with the artifact filtration (Fig. S2 video).
<figure><img src="figs/methods-figureS2.png" width="400px"><figcaption>
**Figure S2 (Video)** ICA filtration removes artifacts for superior neural signal unmixing. Original video (left) is decomposed into artifact components and neural signal. The filtered artifact movie (center) can be rebuilt to visualize artifacts that were isolated and removed during the filtration process. The rebuilt neural signal (right) depicts just the filtered neural signal. A 0.5 Hz filtered mean is re-added to both the filtered artifacts and neural signal (27). Video is a real-time 1-minute excerpt. Values displayed are in dF/F.
</figcaption></figure>
### High resolution spontaneous activity improves noise separation and increasing data length results in a stable number of signal components
Signal components can be separated from noise components based on their spatial structure, log temporal variances, or lag-1 autocorrelations. Signal components have discrete spatial structure and a high lag-1 autocorrelation. Conversely, noise components have signal dispersed across the cortex and a low lag-1 autocorrelation. The lag-1 autocorrelation metrics from these two groups are distinct enough to separate the populations by lag-1 autocorrelation alone (Fig. 2A).
To automate this sorting process, a two-peak kernel density estimator (KDE) was fit to the histogram of lag-1 autocorrelation data. The KDE distribution is an easy way to summarize the two major peaks, as well as the minimum between them, defined as the noise cutoff. The locations of these peaks and the minimum between them are highly stable across our 8 P21 test recordings. We found the non-noise peak (top) at an autocorrelation of 0.94 ± 0.01, and a noise peak (bottom) at 0.13 ± 0.01. The central cutoff minimum was slightly more variable, with an autocorrelation value of 0.61 ± 0.05. A high degree of separation between these peaks (dpp = 0.82 ± 0.01; p < 0.001) suggests that signal and noise sources were completely separated, and thus all signal sources were distinctly identified.
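The lag-1 autocorrelation sorting and KDE-based cutoff can be sketched as follows, on synthetic stand-ins for signal and noise time courses; the evaluation grid and the interior-minimum search are assumptions, not the authors' exact fit.

```python
import numpy as np
from scipy.stats import gaussian_kde

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D time course."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Hypothetical mixed population: smooth random walks stand in for signal
# time courses (lag-1 near 1), white noise for noise components (near 0).
rng = np.random.default_rng(1)
walks = np.cumsum(rng.standard_normal((60, 2000)), axis=1)
white = rng.standard_normal((60, 2000))
ac = np.array([lag1_autocorr(t) for t in np.vstack([walks, white])])

# Fit a KDE over the autocorrelation values; the deepest interior minimum
# between the two peaks serves as the signal/noise cutoff.
grid = np.linspace(-0.2, 1.05, 500)
density = gaussian_kde(ac)(grid)
is_min = (density[1:-1] < density[:-2]) & (density[1:-1] < density[2:])
minima = np.where(is_min)[0] + 1
cutoff = grid[minima[np.argmin(density[minima])]]
n_signal = int((ac > cutoff).sum())   # components retained as signal
```

On well-separated populations the valley between the two KDE peaks is deep, so the cutoff cleanly partitions the components, mirroring the stable cutoff reported across recordings.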
To test how ICA component separation is affected by spatiotemporal resolution and video duration, we altered properties of the input video and observed its effects on the quality of signal separation through lag-1 autocorrelation distributions. Reducing the spatial resolution resulted in a steady decrease in peak separation, until the dual peaked structure collapsed at a resolution of 41 µm pixel width (Fig. 2B).
<figure><img src="figs/methods-figure2.png" width="400px"><figcaption>
**Figure 2** ICA decomposition quality is sensitive to recording duration, spatial resolution, and temporal resolution. A) Distributions for lag-1 autocorrelation (black) and temporal variance (purple) are displayed for components 1-1200. A dotted line represents the cutoff determined from the distribution in the right panel. In the right panel, a horizontal histogram of the lag-1 autocorrelation with a two-peaked kernel density estimator (KDE) fit reveals a two-peaked histogram, summarized by a barbell line. Group data for each peak, as well as the central cutoff value, are summarized by the boxplots on the right (n=16 videos; from 8 different animals). B) Two-peaked KDE fits of horizontal histogram distributions under various spatial downsampling conditions, with barbell summary lines on the right. After spatial resolution decreases beyond 41 µm pixel width (px), this two-peak structure collapses, and an x denotes the primary histogram peak. C) Two-peaked KDE fits of horizontal histogram distributions under various temporal downsampling conditions, with barbell summary lines on the right. D) Component stabilization for different-length video subsets of six 20-minute video samples (n=6 videos from 3 different animals). Individual thin lines show a polynomial fit to signal or artifact components under each time condition. Thick lines denote the curve fit of the mean number of components in each category across these six experiments. The group distribution of components at 20 minutes is summarized by the boxplot on the right (n=16 videos; from 8 different animals).
</figcaption></figure>
Increasing the sampling rate above 10 Hz showed little to no effect on the peak-to-peak distance (Fig. 2C; ∆pp < 0.01), and a slight decrease in the autocorrelation of the primary peak (∆p1 = 0.03). However, temporal downsampling below 10 Hz resulted in a shift of the signal and noise peaks (∆p1 = 0.06) and a reduction in the peak-to-peak distance (∆pp = 0.02). This result agrees with previous analyses that found 10 Hz to be a sufficient sampling frequency for measuring population calcium dynamics 20. Together, these findings suggest that the separation quality of our captured dynamics is highly sensitive to spatial resolution, and less sensitive to temporal resolution. We considered collecting spatial samples at higher than our current resolution of 6.9 µm/px, but computing decompositions on datasets this large would push the limits of available computing.
To determine the ideal duration of video collected, we calculated the number of significant signal and noise components for various video durations (Fig. 2D). We found that for ICA decompositions on activity patterns from a P21 mouse, the number of signal and artifact components leveled off after 20 minutes. Population analyses showed that this number was highly similar among P21 mice (n signal components: 244±25.7; n artifact components: 87.2±20.7; N=3 mice, 2 subsequent recordings each).
### Spatiotemporal metrics can be derived from each component to assess the classification of each signal source
Using the ICA decompositions from 20-minute videos, we inspected each set of experimental components from both controls and GCaMP6-expressing mice to classify each as neural or artifact (Fig. 3A). Artifacts were further distinguished as vascular or other for descriptive purposes. Neural components typically have a globular spatial representation with highly dynamic properties. Vascular artifact components can be easily identified visually by their vascular-like spatial representation. Other artifact components commonly seen are movement or preparation artifacts; these typically have a diffuse spatial representation with smaller or sparse temporal activations. We manually scored each component in the dataset as an artifact (vascular or other) or a neural component (Fig. 3B). Across all the GCaMP experiments, an average of 73.5 ± 5.9% of the components were identified as neuronal, while the remaining 26.5 ± 6.3% were artifact (vascular: 8.7 ± 2.7%; other: 17.6 ± 7.1%). GCaMP mice had substantially higher numbers of neural components than the controls: four times as many as the GFP mouse lines and six times the number in Bl6 mice (mean number of neural components GCaMP: 235, mGFP: 62, aGFP: 54, Bl6: 39).
<figure><img src="figs/methods-figure3.png" width="400px"><figcaption>
**Figure 3** Class identity cannot be established by any individual extracted feature. A) Examples of independent components of neural (n) signal, vascular (v) artifacts, and other (o) artifacts. Components are defined by both the spatial representation (eigenvector) and its temporal fluctuations. Circular windows magnify key portions of the eigenvector. Eigenvector values represented by colormap from blue to red. Temporal representation is in relative intensity (black time course under the eigenvector); only 1 minute of the full 20 minutes is shown. B) A comparison of the number of neural signal (GCaMP: dark blue; controls: light blue) and artifact components (vascular: red; other: orange) with each animal shown (GCaMP components: N=12 animals, n=3851; mGFP components: N=3, n=484; aGFP components: N=3, n=442; WT components: N=3, n=229). C) Examples of binarization of the eigenvector. Histogram shows the full distribution of eigenvector values. The dynamic threshold method to generate binarized masks was used to identify the high eigenvector signal pixels (yellow) against the gaussian background (blue). Windowed spatial representation shows binarization on the key portions of the eigenvector. D) Examples of neural and artifact wavelet analysis shown in the signal power-to-noise ratio (PNR) plots. A 95% red-noise cutoff was used to create the signal-to-noise ratio (black dashed lines). E) Histograms of example spatial metrics derived from GCaMP eigenvector values, F) morphometrics from the shape of the binarized primary region, G) temporal metrics derived from relative temporal intensities, H) frequency metrics derived from the PNR.
</figcaption></figure>
We extracted spatial and morphological metrics of the neural and artifact components to characterize spatial feature differences (Fig. 3C). We can pull general spatial intensity metrics like global minimum and maximum from each component's spatial eigenvector. Further, the largest eigenvector values correspond to the regions that have the most dynamic change after the video rebuild. Given that the shapes of the high-intensity pixel regions are what humans use to identify classification, we chose a dynamic thresholding technique to binarize the eigenvector. The histogram of intensity values of a neural eigenvector shows a large population of pixels centered around zero with a single long tail. We identified all pixels unique to the long tail by excluding all values that lie within range of the shorter gaussian tail. From these binarized masks, morphometrics of each component's primary region can then be quantified, such as the axis lengths or eccentricity of the shape.
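A minimal sketch of this dynamic thresholding and morphometric extraction, assuming the long tail is positive and using scikit-image region properties; the synthetic eigenvector and the mirror-of-the-short-tail threshold rule are illustrative assumptions.

```python
import numpy as np
from skimage.measure import label, regionprops

# Hypothetical eigenvector: gaussian background plus one bright domain, so
# the value histogram is zero-centered with a single long positive tail.
rng = np.random.default_rng(2)
eig = rng.normal(0.0, 0.05, size=(64, 64))
eig[20:32, 20:32] += 1.0

# Dynamic threshold: keep only values beyond the reach of the shorter tail,
# i.e. larger than the most extreme opposite-signed (background) value.
threshold = abs(eig.min())           # assumes the long tail is positive
mask = eig > threshold

# Morphometrics of the largest connected region via scikit-image regionprops.
primary = max(regionprops(label(mask)), key=lambda r: r.area)
features = {
    "area": primary.area,
    "eccentricity": primary.eccentricity,
    "major_axis_length": primary.major_axis_length,
}
```

The largest labeled region plays the role of the component's primary domain; `regionprops` exposes the axis lengths and eccentricity mentioned in the text directly.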
<figure><img src="figs/methods-figureS3.png" width="400px"><figcaption>
**Figure S3** Morlet wavelet transform can be used to create a signal-to-noise ratio that indicates frequencies with a high likelihood of signal. A) Example neural time series, 90 sec of data recorded at 10 Hz, from the temporal portion of a component. B) A Morlet wavelet (ω₀ = 4) was used for the wavelet transform. C) The power spectra of the wavelet transform (colorbar, purple to yellow) and the global spectral analysis (black, right). The 95% quantile is shown as a dashed line on the global spectral analysis; reformatting the frequency spacing produces D. E) A signal-to-noise power ratio is calculated by dividing the power spectra by the 95% quantile. All values above 1 (dashed line) indicate a high probability of signal; anything below 1 would most likely be considered noise (colorbar, green to red). Reformatting the frequency spacing produces F.
</figcaption></figure>
We characterized the temporal dynamics of each component by extracting features from the corresponding temporal fluctuations and their frequency analysis (Fig. 3D). The relative intensity fluctuations allow us to pull out temporal features of each component, like the standard deviation and global maxima/minima of each component's contribution. We performed wavelet analysis on these time series to characterize only highly significant frequencies (Fig. S3). We calculated a power signal-to-noise ratio (PNR) against the 95% quantile of red noise defined by the autocorrelation value of each time series. With this ratio, significant frequencies result in a value above 1.
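A sketch of a red-noise PNR under the AR(1) null. As a simplification, a Fourier periodogram stands in for the paper's Morlet wavelet spectrum, and the scaling and chi-squared significance form are assumptions based on standard red-noise significance testing, not the authors' implementation.

```python
import numpy as np
from scipy.stats import chi2

def power_noise_ratio(ts, fs=10.0, quantile=0.95):
    """Periodogram power over an AR(1) red-noise significance level.

    Values above 1 mark frequencies unlikely under red noise alone.
    """
    ts = ts - ts.mean()
    n = len(ts)
    power = np.abs(np.fft.rfft(ts)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # AR(1) coefficient estimated from the lag-1 autocorrelation.
    alpha = float(np.corrcoef(ts[:-1], ts[1:])[0, 1])
    # Theoretical AR(1) red-noise spectrum, scaled to the data's mean power.
    red = (1 - alpha**2) / (1 + alpha**2 - 2 * alpha * np.cos(2 * np.pi * freqs / fs))
    red *= power.mean() / red.mean()
    # 95% quantile of a chi-squared(2)-distributed spectral estimate.
    return freqs, power / (red * chi2.ppf(quantile, df=2) / 2)

# Usage on a synthetic 1 Hz oscillation in noise, sampled at 10 Hz.
rng = np.random.default_rng(3)
t = np.arange(2000) / 10.0
ts = np.sin(2 * np.pi * 1.0 * t) + 0.3 * rng.standard_normal(t.size)
freqs, pnr = power_noise_ratio(ts)
```

For the synthetic series, the PNR peaks well above 1 at the 1 Hz oscillation and stays near or below 1 elsewhere, which is the behavior the significance cutoff is designed to produce.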
After extracting these metrics, we compared the diverse populations of signal and artifact components, with artifacts separated into vascular and other, for each feature of interest (Fig. 3E-H). A full list of all metrics and their definitions is provided, organized into spatial, morphometric, temporal, and frequency features (Fig. S4). While the data show trends, no single metric alone could predict the classification of artifact and signal components.
<figure><img src="figs/methods-figureS4.png" width="400px"><figcaption>
**Figure S4** Spatial, morphometric, temporal, and frequency features extracted from components. A) Spatial metrics from statistical characteristics of each eigenvector (spatial representation of the component). The histogram of all eigenvector values is shown to the right of the eigenvector. B) Morphometrics collected from the binarized, thresholded mask region of the eigenvector. The largest (primary) domain was used to generate the features for each eigenvector. The majority of the calculated metrics utilize scikit-image region properties. C) Temporal metrics are statistical descriptors from the corresponding row of the mixing matrix for each eigenvector. D) Frequency analysis was done on the mixing matrix row, utilizing the PNR calculated from wavelet analysis (Fig. S3). The longest continuous run of significant frequencies was used to extract each feature.
</figcaption></figure>
Control neural components are not distinct globular regions like those from the GCaMP line (Fig. S5); rather, they show co-activity with vascular units in the centers of their domains. This resulted in thresholded regions more similar to the vascular artifacts seen in GCaMP components. However, we were still able to find example components that had only vascular spatial representation without the surrounding tissue activation. Finally, we found artifacts of the other category in the control data similar to those present in the GCaMP sets of independent components.
<figure><img src="figs/methods-figureS5.png" width="400px"><figcaption>
**Figure S5** Examples of control components, showing artifact components similar to those in GCaMP recordings. A) Control components from 20 minutes of recording from cx3cr1 GFP (microglia; mGFP, left), Aldh1l1 GFP (astrocyte; aGFP, center), and Black 6 (non-transgenic; Bl6, right) mice. Two IC examples from each control group corresponding to hemodynamics/neural activity (top) and artifacts (bottom). Artifacts chosen show a vascular and an other artifact commonly seen in GCaMP recordings. Temporal and spatial representations are described as in Figure 1. B) Examples of control binarization of the eigenvector, showing only the windowed spatial representation on the key portions of the eigenvector. C) Examples of neural and artifact wavelet analysis shown in the signal power-to-noise ratio (PNR) plots.
</figcaption></figure>
### GCaMP mice have strong distinct globular domains that cover the entire cortical surface
To investigate how well these metrics captured features of each component, we explored the coverage of the cortical surface by regions identified with the dynamic thresholding technique. Plotting all the contours from one experiment shows the major footprint of each component, the representative space it occupies within the brain regions (Fig. 4A). The majority of defined brain regions are represented by the GCaMP component footprints, with varying amounts of overlap associated with different cortical areas. Control data resulted in sparsely mapped footprints across the cortex. Further, the mapped centroid locations of all components that had a thresholded region, across all GCaMP experiments, show that the neural components have high densities in the sensory portions of the brain (Fig. 4B).
Thresholded GCaMP neural components have high densities in the olfactory bulbs and posterolateral portions of the cortex, including the visual, auditory, and somatosensory systems. There is less dense localization of centroids along the anteromedial portions of the cortex, including the motor and retrosplenial cortices. Further, in both the GCaMP and control mice, we see the majority of artifact components localize along anatomical brain vasculature. The major venous systems, including the rostral rhinal vein, the superior sagittal sinus, and the transverse sinus, all show high densities of artifact centroid locations 26. The cerebral arteries are less consistent in localizing the primary domain of their respective components. We see that many of the other artifacts align with the sagittal and lambda cranial sutures 27.
<figure><img src="figs/methods-figure4.png" width="400px"><figcaption>
**Figure 4** Spatial thresholding and frequency data reliably produce neural metrics. A) Individual experiment preparation with corresponding spatial footprints by class of component: GCaMP neural (dark blue), control neural (light blue), vascular (red), other (orange). B) All model experiments (N=12) with corresponding centroid locations for each class of component. Histograms show the resulting average distribution of spatial location across the field of view (error bars are standard deviation between experiments). C) Individual experiment (same as A), where components are sorted by temporal variance. PNR mapped to each component and organized by component classification. D) Main frequencies seen in each component class for each experimental condition. Dotted lines represent the mean within each animal, with the gray band corresponding to that animal's standard deviation. The colored line corresponds to the grand mean across experiments. E) Relative position of the types of components between experiments and transgenic model, shown as the average and standard deviation between experiments. F) The percentage of components with footprints and frequency data above the noise cutoff, separated by component type and experimental condition.
</figcaption></figure>
We investigated the effects of wavelet analysis on feature generation by sorting each component within an experiment by its temporal standard deviation. We then ordered each class of components based on variance and displayed a grayscale heatmap of the significant frequencies in each component across experimental conditions (Fig. 4C). Taking the average of the global wavelet spectra across each experiment highlighted the prominent frequencies seen in each classification (Fig. 4D). Prominent GCaMP frequencies are between 0.3 and 3.5 Hz, whereas control dynamics are typically seen between 0 and 1 Hz. Vascular components tend to have the same frequencies as their neural counterparts, while other components typically have faster frequencies (above 3 Hz), most likely due to motion during the recordings.
We looked at the overall distribution of the classes of components with respect to their relative variance (Fig. 4E). We found significant shifts in the distribution of component types based on variance. Neural components were found with high variance, but tapered off significantly near the noise floor. Vascular components had their highest percentage at high variance, followed by a constant low probability throughout the rest of the relative variance range. The other components had the highest probability close to the noise floor, with low probability throughout the rest of the relative variance range.
We found that wavelet analysis between all GCaMP experiments showed 95.8 ± 3.8%
Overall, this indicated that all metrics can be generated for the vast majority of neural signals and vascular artifacts. However, on average, 40% of the other artifacts lack morphometrics for their components and 13% lack significant frequencies. Further, this also shows changes in the relative spatial and temporal variance distributions of these components. The neural and vascular components align with known and predicted anatomy and were found primarily in mapped locations of increased variance. The majority of other components have less variance, and therefore less contribution to the original dataset; the ones that had a footprint were typically found along cranial sutures. All control data had fewer footprint and frequency metrics (Fig. 4A,C).
### Spatial metrics best separate neural components from artifacts
To build a classifier, we identified metrics that distinguish between neural and artifact components. The correlation of component class with each metric value, and the respective t-statistic between classes, identified which features are most useful for classifying each component (Fig. 5A, top). For this process, we randomly selected seven animals from our twelve-experiment dataset for determining features and training the classifier. The remaining five were used as the novel dataset for validating machine learning performance.
We trained the random forest classifier with all metrics to identify the importance of each feature (Fig. 5A, middle). The features with the greatest t-statistic magnitudes had the most importance for proper classification. In particular, spatial and morphological metrics were found to have the highest relative importance for component classification. The final list of 10 feature metrics utilized in the machine learning process is shown (Fig. 5A, bottom).
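Ranking features by random-forest importance can be sketched as follows, on a hypothetical metric table; the column roles and class labels are invented for illustration, not drawn from the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical component-metric table: the first two columns mimic informative
# spatial/morphological metrics, the last three are uninformative noise.
rng = np.random.default_rng(4)
n = 600
labels = rng.integers(0, 2, n)                       # 1 = neural, 0 = artifact
X = np.hstack([
    labels[:, None] + 0.5 * rng.standard_normal((n, 2)),   # informative
    rng.standard_normal((n, 3)),                           # uninformative
])

rfc = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
ranked = np.argsort(rfc.feature_importances_)[::-1]  # most important first
```

The `feature_importances_` vector sums to 1, so the ranking directly yields a shortlist of the most discriminative metrics, analogous to the 10-feature list used for the final classifier.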
<figure><img src="figs/methods-figure5.png" width="400px"><figcaption>
**Figure 5** Spatial and morphological metrics are most important to classify components at 97% accuracy on novel experimental data. A) Correlation and t-statistic between artifact and neural components for each feature (N=7, n=2190). Spatial (circles), morphological (triangles), temporal (diamonds), and frequency (squares) metrics plotted. Cutoff values that guided the selection process are dotted lines, with rejected values in gray; closed points are components that meet requirements. The relative importance metric from the Random Forest classifier is plotted against each metric by class. Selected metrics are shown in the list within each type of feature, sorted by greatest t-statistic magnitude. B) The dataset was parsed into an ML modeling dataset (N=7, n=2190), used to establish the machine learning pipeline, and a novel dataset (N=5, n=1661) of full experiments that did not influence the classifier. Modeling data was given a stratified 70/30 split based on classification. 1000 iterations of training the machine learning classifier on selected metrics and validating the machine classification against human classifications. C) Performance of the ML training, using subsets of the ML modeling dataset. 1000 iterations resulted in accuracy, precision, and recall boxplots. D) 1000 iterations of training on the full ML building dataset were performed and performance was assessed on the novel dataset. F) SVD projection of metric data with human classification mapping (top) and the confidence of the ML classifier (bottom). E) Performance of the full classifier on each of the novel datasets, with animals plotted separately showing the distribution across the 1000 trained classifiers. G) Approximate locations of false negatives and false positives from the novel datasets.
</figcaption></figure>
### Machine learning performs as well as human classification
We used the common approach of withholding a portion of the data from the learning algorithm to validate the efficacy of the machine learning classifier and establish hyperparameters (Fig. 5B). Stratified sorting was used to ensure an equal ratio of artifact and neural signal in each subset. We sampled and trained the random forest classifier 1000 times to characterize the distribution of results. We found that it performed well by all metrics assessed: mean accuracy of 97.1%, mean precision of 98.4%, and mean recall of 97.6% (Fig. 5C).
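The stratified train/validate loop described above can be sketched as follows. The feature matrix, labels, and iteration count are illustrative stand-ins for the component metric table, not the study's actual data, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-in for the component metric table:
# 300 components x 10 selected metrics, binary labels (1 = neural, 0 = artifact).
X = rng.normal(size=(300, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

accs, precs, recs = [], [], []
for seed in range(20):  # the paper uses 1000 iterations; 20 here for brevity
    # Stratified 70/30 split keeps the neural/artifact ratio equal in each subset.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    accs.append(accuracy_score(y_te, pred))
    precs.append(precision_score(y_te, pred))
    recs.append(recall_score(y_te, pred))

print(f"accuracy {np.mean(accs):.3f}, precision {np.mean(precs):.3f}, "
      f"recall {np.mean(recs):.3f}")
```

Averaging accuracy, precision, and recall over the repeated splits gives the distributions summarized in Fig. 5C.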
After establishing the efficacy of the classifier, we set out to assess the full classifier based on all data points from the machine learning dataset. We projected all features onto the first two components of a singular value decomposition (SVD), mapping both the human classification and the mean classifier confidence over 1000 iterations (Fig. 5F). As expected, we saw distinct neural and artifact clusters in feature space. Interestingly, the two different types of artifacts also separated into distinct portions of the projected feature space. The confidence of the classifier showed very few components between the extremes, illustrated by the top binned confidence value distributions for each human classification (Fig. 5F). We found 71.2 ± 0.2% of components were binned in highly confident values for neural signals (left-most bin), and 22.7 ± 0.2% were binned in highly confident values for artifacts (right-most bin; 7.0 ± 0.2% for vascular; 15.7 ± 0.1% for other). This indicates that the classifier exhibits reliable confidence at the decision boundaries (Fig. S6B).
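The 2-D embedding used for this projection amounts to projecting the z-scored feature matrix onto its first two right singular vectors. A minimal sketch with a random stand-in matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))          # placeholder: components x metrics

# Z-score each metric, then project onto the first two right singular vectors.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xz, full_matrices=False)
proj = Xz @ Vt[:2].T                    # (n_components, 2) plot coordinates
```

`proj` supplies the x/y coordinates onto which human labels or classifier confidence can be color-mapped.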
<figure><img src="figs/methods-figureS6.png" width="400px"><figcaption>
**Figure S6** Strong machine learning performance with multiple classification algorithms. A) Receiver operating characteristic (ROC) plots for each classifying algorithm. The voting classifier was composed of the 4 other algorithms. The Random Forest Classifier (RFC; \*) was used in all analyses in this paper. B) Histogram of human classification with the percent in each confidence bin (same data as Figure 5F, bottom). A log scale was used to highlight the low percentage points. C) Histogram of human classification with the percent in each binned novel classification. Classification bins are based on the percent of the 1000 trials in which each classification occurred correctly. True positive (TP), false positive (FP), false negative (FN), true negative (TN), human (h), machine (m).
</figcaption></figure>
To assess the efficacy of this classifier, we then tested 1000 iterations on novel data: completely new experiments that were not involved in training the classifier (Fig. 5D). We plotted the resulting 1000 iterations of each experiment separately (Fig. 5E). Notably, we found that the overall results were about the same as for the subset classifier: mean accuracy of 96.9%, mean precision of 98.0%, and mean recall of 97.6%. From the histogram of classification frequency, we found results similar to the confidence of the classifier (Fig. S6C). Among all components, 69.7 ± 2.0% were confidently classified by machine learning as neural signal (left-most bin), while 25.8 ± 1.4% were confidently classified as artifact (right-most bin; 7.5 ± 0.4% for vascular; 18.2 ± 1.6% for other). The remaining 5% were mis-classified. We investigated the locations of the components that were consistently false positives or false negatives (Fig. 5G). The majority of these components were either on the edge of the region of interest for the cortical hemispheres or within the olfactory bulb.
### Global mean needs a high-pass filter to account for removed artifacts before re-addition
Removal of artifact components ensures that neural signals are the dominant signal after rebuilding with identified neural components; however, reconstruction of ICA data requires re-addition of the global mean. Thus, we examined the influence of removing artifact components on the global mean and how filtration of the global mean should be handled. For example, vascular artifacts associated with the superior sagittal sinus contribute to the global mean and increase the range of signals recorded during periods of motion (Fig. S7). Assessment of the global mean from GFP control experiments showed pronounced signal in these slower frequency oscillations, suggesting the use of a high-pass filter. Indeed, we found that application of a high-pass filter with a 0.5 Hz cutoff minimizes these types of global slow oscillations (Fig. S8). This type of filtration should not be applied to each component individually, as there are regional networks reliant on these slower oscillations [20]. Removal of these low frequencies from the global mean gave improved identification of the cortical patch signal sources that contribute to neural activation (Fig. S9 video).
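A 0.5 Hz high-pass filter of the kind applied to the global mean could be implemented as below. The Butterworth order, 10 Hz frame rate, and toy trace are assumptions for illustration (SciPy assumed), not the pipeline's exact filter design.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10.0      # assumed imaging frame rate (Hz)
cutoff = 0.5   # high-pass cutoff (Hz), as in the text
b, a = butter(N=4, Wn=cutoff / (fs / 2), btype="highpass")

# Toy global mean: a slow 0.1 Hz drift plus a faster 2 Hz component.
t = np.arange(0, 60, 1 / fs)
mean_trace = np.sin(2 * np.pi * 0.1 * t) + 0.2 * np.sin(2 * np.pi * 2 * t)

# Zero-phase filtering removes the slow drift without shifting the fast signal.
filtered = filtfilt(b, a, mean_trace)
```

Zero-phase filtering (`filtfilt`) avoids introducing a temporal lag before the mean is re-added to the rebuilt neural components.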
<figure><img src="figs/methods-figureS7.png" width="400px"><figcaption>
**Figure S7** Vascular and other artifacts are more correlated to movement than neural components. A) All neural (blue), vascular (red), and other (orange) components and their correlation to the motion vector from each animal. B) Spatial location and corresponding correlation (green to pink) of each component to motion, based on their respective classification and genetic background. Neural: left column; vascular: center column; other: right column. Top row: GCaMP; second row: mGFP; third row: aGFP; last row: Bl6.
</figcaption></figure>
<figure><img src="figs/methods-figureS8.png" width="400px"><figcaption>
**Figure S8** Mean filtration to minimize global slow oscillations seen in GFP control data. A) 30 s examples of the global mean that was subtracted and stored at the initiation of the pipeline, before the decomposition into eigenvectors, for GCaMP, mGFP, aGFP, and Bl6. B) Global wavelet spectrum (top) and its corresponding power-to-noise ratio (PNR; bottom) for GCaMP (N=4), mGFP (N=3), aGFP (N=3), and Bl6 (N=3); red indicates the frequencies omitted by our applied high-pass filter. C) High-pass filtration results for the same 30 s as in A.
</figcaption></figure>
<figure><img src="figs/methods-figureS9.png" width="400px"><figcaption>
**Figure S9 (Video)** Readdition of the global mean with high-pass filter better represents neural signal. Rebuilt video from all components (left) or with only neural components (center, right) with mean re-addition from the original global mean (left, center) or the global mean with 0.5 Hz high pass filtration (right). All movies are on the same scale of change in fluorescence over mean fluorescence (rainbow colorbar).
</figcaption></figure>
### Domain maps optimize time course extraction from underlying data
In addition to their applications for filtering, the components are also a rich source of information about the spatial distribution of signal within the cortex. Components across the cortex show a wide diversity of spatial characteristics, and each represents a detected independent unit of signal. We use the spatial domain footprints of each signal component to create a data-driven domain map of the cortical surface by taking a maximum projection through each component layer (Fig. 6A). For analysis, 8 maps were created, with an average of 230 ± 14 detected activation domains (Fig. S10).
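The maximum-projection step can be sketched as follows; the component stack, image size, and blur sigma are placeholders for illustration, not the pipeline's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
# Placeholder stack of signal eigenvectors: (n_components, height, width).
comps = rng.normal(size=(5, 64, 64))

# Lightly blur each component, then assign every pixel to the component
# with the largest blurred weight, i.e. a maximum projection through the stack.
blurred = np.stack([gaussian_filter(c, sigma=2) for c in comps])
domain_map = np.argmax(blurred, axis=0)
```

The resulting integer-labeled `domain_map` partitions the cortical surface into one domain per winning component.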
<figure><img src="figs/methods-figure6.png" width="400px"><figcaption>
**Figure 6** Time series extracted from domain maps outperform time series generated from other downsampling methods. A) Schematic of domain map creation. A maximum projection is taken through each blurred signal component to form a domain map. Mean time courses are extracted from different domains generated from a domain map. B) Example of a mosaic movie frame rebuilt with respect to each downsampling technique. The non-downsampled filtered movie is represented on the left with subsequent downsampling based on domain, grid or voronoi maps. C) Percent total signal of the filtered video represented by extracted time courses. Percent of overall video signal captured in domain maps was calculated for each animal (green circle; N=8), and compared to signal content from a domain map generated from a separate video from the same animal (green triangle). Percent total signal represented by time courses extracted from grid (blue square) or randomly generated (blue diamond) maps were compared as controls. In the right panel, the percent signal relative to the domain map percent signal was summarized in a box plot. D) Variation between time courses extracted with each map method was then quantified as a sum signal variation for each experiment. In the right panel, the sum signal variation for each comparison map relative to the optimized domain map sum signal variation was summarized in a box plot.
</figcaption></figure>
At full resolution, there are approximately 1.5 million pixels along the surface of the cortex, an impractical number of sources for most network analyses, which work best on 10-300 time series [28]. As such, data-driven domain maps are an optimal method for extracting time courses from the cortical surface. Time series were extracted by averaging the filtered movie under each domain. This results in a series of 230 ± 14 time series per video recording, representing a 6,500-fold reduction in size (Fig. 6A, bottom).
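Averaging the filtered movie under each domain label reduces the pixel array to one trace per domain. A sketch with placeholder shapes (the real movies are far larger):

```python
import numpy as np

rng = np.random.default_rng(3)
movie = rng.normal(size=(100, 64, 64))          # (frames, height, width)
domain_map = rng.integers(0, 8, size=(64, 64))  # 8 placeholder domains

flat = movie.reshape(movie.shape[0], -1)
labels = domain_map.ravel()
# Mean signal under each domain at every frame: (frames, n_domains).
traces = np.stack(
    [flat[:, labels == d].mean(axis=1) for d in range(8)], axis=1)
```

Each column of `traces` is one extracted time series, so a full-resolution movie collapses to a small matrix of domain traces.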
To test how well the full filtered video was represented in these time series, we rebuilt mosaic movies, where each domain is represented by its mean extracted signal at any given time point (Fig. 6B, Fig. S11 video). To numerically test whether this method of time course extraction was superior to alternate methods, we also compared mosaic movies rebuilt with either grid or voronoi maps. Comparing the borders of the large higher-order visual activation shows visually that the data appear more distorted in the voronoi and grid mosaics.
<figure><img src="figs/methods-figureS10.png" width="400px"><figcaption>
**Figure S10** Individual domain maps from animals used in this study. A) Domain maps generated from Snap25 GCaMP6s recordings from littermates (a-c) and from subsequent recordings (#). B) Domain maps generated from the three different control lines.
</figcaption></figure>
The residuals between the mosaic movies and the filtered movies were compared to the total spatial variation in the filtered movie to quantify the amount of total signal represented by the extracted time courses (Fig. 6C, left). In nearly every experiment, the optimized domain map performed better than any other time course extraction method, and accounted for 68 ± 1.2% of the total spatial signal in the filtered video (n=8). Domain maps generated from different videos of the same animal performed nearly as well as the optimized domain maps created from the video being compared (Fig. 6C, right). These maps performed significantly better than the grid maps (p = 0.01), and much better than the voronoi maps (p < 0.001).
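The percent-signal comparison can be expressed as one minus the ratio of mosaic residual variance to the movie's total spatial variance. A sketch, with a hypothetical function name and illustrative shapes:

```python
import numpy as np

def percent_signal(movie, traces, domain_map):
    """Fraction of the filtered movie's spatial variance captured by a
    mosaic movie rebuilt from per-domain mean traces.

    movie: (frames, h, w); traces: (frames, n_domains);
    domain_map: (h, w) integer labels indexing the trace columns.
    """
    flat = movie.reshape(movie.shape[0], -1)
    mosaic = traces[:, domain_map.ravel()]   # broadcast each trace to its pixels
    resid = flat - mosaic
    total = np.sum((flat - flat.mean(axis=1, keepdims=True)) ** 2)
    return 1.0 - np.sum(resid ** 2) / total
```

A movie that is exactly piecewise-constant over the domains returns 1.0; pixel-level variation not captured by the domain means lowers the fraction, which is the quantity compared across domain, grid, and voronoi maps.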
Saving these extracted time courses and all associated metadata results in a file size of 100 MB, representing an additional 60-fold compression compared to saving the full ICA-compressed dataset. One potential benefit of accounting for the underlying regions of the brain while extracting time courses is reducing the number of times that an extracted mean signal is diluted by signal from a neighboring region. Properly restricting time series extraction to statistically independent units should enhance the dynamic range between extracted time series.
<figure><img src="figs/methods-figureS11.png" width="400px"><figcaption>
**Figure S11 (Video)** The mosaic movie represents the neural signal captured from time series extracted from domains across the cortical surface. Signal from the filtered video (left) is segmented by the data-driven domain map (left overlay). Average time series from these segmented domains can be visualized as a mosaic movie, where each domain is represented by its averaged time series. The filtered video contains 1.77 megapixels representing the cortical signal, while the mosaic movie contains only 300 unique time series to describe the same signal with a 5900x compression rate. Video is a real-time 1-minute excerpt. Values displayed are in dF/F.
</figcaption></figure>
To test whether time courses extracted with domain maps better capture the full range of variation across the cortical surface, we compared the total variation between time courses rebuilt under domain maps from the same video, the same animal, or control grid and voronoi maps (Fig. 6D, left). When normalized to the performance of the optimized domain map, domain maps from the same animal again had similar performance, but grid and voronoi maps performed significantly worse (p < 0.001; Fig. 6D, right). There is a 15% reduction in signal variation in grid or voronoi maps compared to domain-map-extracted time courses.
### Animal specific domain maps can be regionalized based on reference maps and domain features
Domains were then manually sorted into regions, with decisions informed by network analysis, metric comparisons, and reference maps (Fig. 7A-D). Domains did not exhibit uniform spatial characteristics across the neocortex. Different detected regions differed in spatial characteristics such as area (ANOVA F=139, p < 0.001) and eccentricity (ANOVA F=60.0, p < 0.001). Generally, higher order and motor regions (R, V+, Ss, Mm, Ml) had larger domains than primary sensory areas (V1, A, Sc, Sb, S) (p < 0.001), and also exhibited higher eccentricity (p < 0.001).
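The region-wise comparison of spatial characteristics is a one-way ANOVA across groups of domain measurements. A sketch with made-up area values (SciPy assumed); the numbers are purely illustrative, not the study's measurements.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
# Made-up domain areas (pixels) grouped by region, for illustration only.
areas = {
    "V1": rng.normal(300, 40, size=50),   # primary sensory: smaller domains
    "V+": rng.normal(420, 60, size=50),   # higher order: larger domains
    "Mm": rng.normal(450, 70, size=50),   # motor: larger domains
}
F, p = f_oneway(*areas.values())
print(f"F = {F:.1f}, p = {p:.2g}")
```

A significant F statistic would then be followed by post-hoc pairwise t-tests, as in the region analysis above.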
To test the reproducibility of these maps, a series of comparisons was performed. Pairs of maps were overlaid on top of each other (Fig. 7F-H), and every domain was compared to its nearest domain in the comparison map. The Jaccard overlap was calculated for each of these domain pairs and quantified for each pair of map comparisons. As a null comparison, randomly generated voronoi maps were also compared.
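The Jaccard overlap used for these domain-pair comparisons is simply intersection over union of two boolean masks:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index (intersection / union) between two boolean domain masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

# Two overlapping rectangular domains on a small toy grid.
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # rows 0-1
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # rows 1-2
```

Here `jaccard(a, b)` is 4/12, since the masks share one of three occupied rows; identical masks score 1 and disjoint masks score 0.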
<figure><img src="figs/methods-figure7.png" width="400px"><figcaption>
**Figure 7** Domain maps are created from ICA components and are unique to each recording, but highly similar among individual animals. Data-guided methods are used to assign domains to cortical regions. A) Hierarchical clustering based on Pearson's correlation produces a set of 13 regions across the cortical surface. B) Domains colored by various calculated spatial and temporal metrics to aid region assignment. Region area is calculated as a percent of the total cortical surface. Region extent ranges from 0 to 1 and measures the relative area of a domain to its bounding box. Temporal standard deviation is calculated from the extracted time series, and frequency range size is calculated from wavelet significance. C) The Allen Brain Atlas map [31] is additionally used for anatomical reference. D) The final manually assigned regions, with associated labels. E) Domain area and eccentricity by region. Population analysis of the distribution of spatial characteristics of individual domains within defined regions across multiple recordings (n=16 videos; from 8 different animals). F) Example overlay of one domain map on another from the same animal. Individual domain or region overlap is calculated using the Jaccard index (intersection / union). Population analysis of the Jaccard index for domain (G) and region (H) overlap comparisons. Maps are generated from a different recording of the same animal, a littermate, a non-littermate, or a randomly generated voronoi map. Significance is calculated using a two-way ANOVA, followed by post-hoc t-test analysis with Holm-Sidak correction. Retrosplenial: R; Higher order visual: V+; Auditory: A; Somatosensory Secondary: Ss; Somatosensory Core: Sc; Somatosensory Barrel: Sb; Somatosensory other: S; Motor medial: Mm; Motor lateral: Ml; Olfactory: O.
</figcaption></figure>
Maps generated from different recordings of the same animal were found to be highly overlapping, and hence more similar (Fig. 7G, top; p < 0.001). There was no significant difference between littermate and non-littermate comparisons. Non-littermate map comparisons were significantly more similar to each other than to voronoi maps (p < 0.001).
We additionally quantified whether detected regions were similar across map comparisons. We again found that comparisons between maps from the same animal were highly similar (Fig. 7H, bottom; p < 0.001); no difference was found between littermates and non-littermates, and comparisons between different animals were significantly more similar than a comparison between a region map and a randomly generated voronoi map (p < 0.001). In summary, regions and domains are similar between recordings, whether from the same or different animals, compared to a null map distribution.
## Discussion
Wide-field calcium imaging has grown in popularity in the last decade due to advances in genetically encoded calcium indicators; however, the methods used to isolate neural signal sources are underdeveloped [29]. Here we use an ICA-based algorithm that overcomes many of these limitations. Eigendecomposition algorithms have been essential for understanding signals across neuroscience. Another recent eigendecomposition pipeline has been developed to explore functional activity across wide-field imaging of the cortex, but it is limited by the use of a reference map and was not able to separate artifact signals from neural activations [29]. The methods presented here achieve similar exploratory results with artifact removal, allowing researchers to explore datasets of any age, treatment, genotype, or strain that would be impeded by the use of a reference map.
Maximal segmentation of the cortex was achieved after 20 minutes of spontaneous activity.
Functional imaging in unanesthetized, behaving animals gives insight into the nature of physiological processes; however, nontrivial challenges arise during such sessions from intermixed sets of time-varying signals. The methods presented here address the most common issues in analyzing large wide-field mesoscale datasets, including filtration of vessel artifacts, spatial mapping, and optimized time series analysis. This work demonstrates that signal components having maximal statistical independence, captured in sufficiently sampled monochromatic calcium flux videos, exhibit a combination of spatiotemporal features that allow machine classification of signal type. Implementation of automated machine classifiers for neural signals is practical given densely captured arrays of spatially and temporally variant data gathered from individual subjects. With these tools, neuroscientists can easily collect and analyze high-quality neural dynamics across the cortical surface, allowing the investigation of complex networks at unprecedented scale.
<!--
—---------------------------------------- END OF DISCUSSION—----------------------------
Potential paragraphs that could be worked in:
Using optimized signal extraction results in a higher dynamic range of extracted time series, and thus would produce correlation maps and network analyses with enhanced separation between neighboring regions. Stimulation experiments or anatomical post-processing of brain tissue could provide insights on how well the sorted domain regions correspond with known functional and anatomical units of the brain, and may provide a new framework for unbiased exploration of novel functional domains and interactions. If these maps align well to anatomical regions of interest, they can additionally be used as a reference for in-vivo cortical mapping, with potential applications in live feedback or targeted injection experiments.
An additional benefit is the highly compressed data format. The original video can be rebuilt with relatively little loss of information from a reduced set of ICA components, for a 10x reduction in file size. This results in a representation of a high-resolution video that is much more manageable to work with on a local computer.
-->
---
## Figures
<figure><img src="figs/figure1.png" width="500px"><figcaption>
Figure 1: Transcranial calcium imaging video data is separated into its underlying signal and artifact components, and can be rebuilt from only signal components for artifact filtration. A) Recording schematic and fluorescence image of transcranial calcium imaging preparation, cropped to cortical regions of interest. B) Sample video montage of raw video frames after dF/F calculation. C) ICA video decomposition workflow. A demeaned dF/F movie is decomposed into a series of statistically independent components that are either neural, artifact, or noise associated (not displayed). Each component has an associated time course from the ICA mixing matrix. Neural components can be rebuilt into a filtered movie (rICA). Alternatively, artifact components can be rebuilt into an artifact movie. Circular panels show higher resolution spatial structure in example in the rightmost components.
</figcaption></figure>
<figure><img src="figs/figure2.png" width="500px"><figcaption>
Figure 2: ICA decomposition quality is sensitive to recording duration and to spatial and temporal resolution. A) Distributions of lag-1 autocorrelation (black) and temporal variance (purple) are displayed for components 1-1200. A dotted line represents the cutoff determined from the distribution in the right panel. In the right panel, a horizontal histogram of the lag-1 autocorrelation with a two-peaked kernel density estimator (KDE) fit reveals a two-peaked histogram, summarized by a barbell line. Group data for each peak, as well as the central cutoff value, are summarized by the boxplots on the right (n=16 videos; from 8 different animals). B) Two-peaked KDE fits of horizontal histogram distributions under various spatial downsampling conditions, with barbell summary lines on the right. After spatial resolution decreases beyond 41 µm pixel width (px), this two-peak structure collapses, and an x denotes the primary histogram peak. C) Two-peaked KDE fits of horizontal histogram distributions under various temporal downsampling conditions, with barbell summary lines on the right. D) Component stabilization for different-length video subsets of six 20-minute video samples (n=6 videos from 3 different animals). Individual thin lines show polynomial fits to signal or artifact components under each time condition. Thick lines denote the curve fit of the mean number of components in each category across these six experiments. The group distribution of components at 20 minutes is summarized by the boxplot on the right (n=16 videos; from 8 different animals).
</figcaption></figure>
Figure 3: Class identity cannot be established by any individual extracted feature. A) Examples of independent components of neural (n) signal, vascular (v) artifacts, and other (o) artifacts. Components are defined by both the spatial representation (eigenvector) and its temporal fluctuations. Circular windows magnify key portions of the eigenvector. Eigenvector values are represented by a colormap from blue to red. The temporal representation is in relative intensity (black time course under the eigenvector); only 1 minute of the full 20 minutes is shown. B) A comparison of the number of neural signal (GCaMP: dark blue; controls: light blue) and artifact components (vascular: red; other: orange), with each animal shown (GCaMP components: N=12 animals, n=3851; mGFP components: N=3, n=484; aGFP components: N=3, n=442; WT components: N=3, n=229). C) Examples of binarization of the eigenvector. The histogram shows the full distribution of eigenvector values. The dynamic threshold method to generate binarized masks was used to identify the high-eigenvector signal pixels (yellow) against the Gaussian background (blue). The windowed spatial representation shows binarization of the key portions of the eigenvector. D) Examples of neural and artifact wavelet analysis, shown as power-to-noise ratio (PNR) plots. A 95% red-noise cutoff was used to separate signal from noise (black dashed lines). E) Histograms of example spatial metrics derived from GCaMP eigenvector values, F) morphometrics from the shape of the binarized primary region, G) temporal metrics derived from relative temporal intensities, H) frequency metrics derived from the PNR.
Figure 4: Spatial thresholding and frequency data reliably produce neural metrics. A) Individual experiment preparation with corresponding spatial footprints by class of component: GCaMP neural (dark blue), control neural (light blue), vascular (red), other (orange). B) All model experiments (N=12) with corresponding centroid locations for each class of metrics. Histograms show the resulting average distribution of spatial location across the field of view (error bars are standard deviation between experiments). C) Individual experiment (same as A), where components are sorted by temporal variance. PNR mapped to each component and organized by component classification. D) Main frequencies seen in each component class under each experimental condition. Dotted lines represent the mean within each animal, with the gray around the mean corresponding to that animal's standard deviation. The colored line corresponds to the grand mean across experiments. E) Relative position of the types of components between experiments and transgenic models, shown as the average and standard deviation between experiments. F) The percent of components with footprints and frequency data above the noise cutoff, separated by component type and experimental condition.
<figure><img src="figs/gui.png" width="500px"><figcaption>
Figure S1: A Tkinter-based graphical user interface (GUI) for browsing independent component analysis results. A) 15 independent components, ordered 60-74 by variance. Components displayed in grey are selected as artifact, either manually or using a machine learning classifier. A click on the display for any given component manually toggles its classification as either signal or artifact associated. Components colored in the cool/warm colormap are signal associated. Components colored in the black/white colormap are artifact associated. Buttons on the bottom panel control GUI movement through the dataset. The text panel at the bottom displays the index of the signal/noise cutoff. B) The component viewer displays additional temporal metrics about any given component. The top controls allow movement through the dataset by manual scrolling with (+/-) buttons, up/down keys, or by typing a desired component in the text box. The PC timecourse displays the mixing matrix timecourse extracted by ICA for the given component. The wavelet power spectrum is displayed in the bottom right, and an integrated wavelet or Fourier representation is available on the bottom left. 0.95 significance, as estimated by the AR(1) red-noise null hypothesis, is displayed as a dot-dash line. C) The domain map correlation page shows the Pearson cross-correlation value between a selected seed domain and every other domain detected on the cortical surface. The seed domain can be changed through the arrow keys, the (+/-) buttons, or by clicking on a different domain on the displayed domain map. D) The component region assignment page allows manual region assignment for each domain. After a region is selected from the menu on the right, each domain clicked on the domain map is assigned to that region.
</figcaption></figure>
Figure S2 (Video): ICA filtration removes artifacts for superior neural signal unmixing. The original video (left) is decomposed into artifact components and neural signal. The filtered artifact movie (center) can be rebuilt to visualize artifacts that were isolated and removed during the filtration process. The rebuilt neural signal (right) depicts only the filtered neural signal. The 0.5 Hz filtered mean is re-added to both the filtered artifacts and the neural signal (27). Video is a real-time 1-minute excerpt. Values displayed are in dF/F.
Figure S3: Morlet wavelet transform can be used to create a signal-to-noise ratio that indicates frequencies with high likelihood of signal. A) Example neural time series, 90 sec of data recorded at 10 Hz, reported in the temporal portion of a component. B) A Morlet wavelet (ω₀ = 4) was used for the wavelet transform. C) The power spectra of the wavelet transform (colorbar, purple to yellow) and the global spectral analysis (black, right). The 95% quantile is shown as a dashed line on the global spectral analysis. Reformatting the frequency spacing produces (E). D) A signal-to-noise power ratio is calculated by dividing the power spectra by the 95% quantile. All values above 1 (dashed line) indicate a high probability of signal; anything below 1 would most likely be considered noise (colorbar, green to red). Reformatting the frequency spacing produces (F).
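The power-to-noise ratio described above can be sketched end to end: Morlet wavelet power computed by direct convolution, divided per frequency by the 95% quantile of AR(1) red-noise surrogate spectra. This is a minimal stand-in, not the paper's implementation; the function names, the number of surrogates, and the variance-matching step are all assumptions.

```python
import numpy as np

def morlet_power(ts, fs, freqs, w=4.0):
    """Wavelet power via direct convolution with complex Morlet kernels."""
    power = np.empty((len(freqs), len(ts)))
    for i, f in enumerate(freqs):
        sigma = w / (2 * np.pi * f)                      # temporal width of the kernel
        tk = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        kernel = np.exp(2j * np.pi * f * tk - tk**2 / (2 * sigma**2))
        kernel /= np.sqrt((np.abs(kernel) ** 2).sum())   # unit-energy normalization
        power[i] = np.abs(np.convolve(ts, kernel, mode="same")) ** 2
    return power

def red_noise_pnr(ts, fs, freqs, w=4.0, n_surr=100, q=0.95, seed=0):
    """Mean wavelet power divided, per frequency, by the 95% quantile
    of AR(1) red-noise surrogate global spectra (PNR > 1 suggests signal)."""
    power = morlet_power(ts, fs, freqs, w)
    x = ts - ts.mean()
    alpha = np.corrcoef(x[:-1], x[1:])[0, 1]             # lag-1 autocorrelation -> AR(1) coefficient
    rng = np.random.default_rng(seed)
    null = np.empty((n_surr, len(freqs)))
    for i in range(n_surr):
        s = np.zeros(len(x))
        for t in range(1, len(x)):
            s[t] = alpha * s[t - 1] + rng.standard_normal()
        s *= x.std() / s.std()                           # match overall variance to the data
        null[i] = morlet_power(s, fs, freqs, w).mean(axis=1)
    thresh = np.quantile(null, q, axis=0)                # 95% red-noise quantile per frequency
    return power.mean(axis=1) / thresh
```

Feeding in a noisy sinusoid, the PNR should exceed 1 only near the sinusoid's frequency, mirroring panel D.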
Figure S4: Spatial, morphometric, temporal, and frequency features extracted from components. A) Spatial metrics from statistical characteristics of each eigenvector (spatial representation of the component). The histogram of all eigenvector values is shown to the right of the eigenvector. B) Morphometrics collected from the binarized, thresholded masked region of the eigenvector. The largest (primary) domain was used to generate the features for each eigenvector. The majority of the metrics are calculated using scikit-image region properties. C) Temporal metrics are statistical descriptors from the corresponding row of the mixing matrix for each eigenvector. D) Frequency analysis was done on the mixing matrix row, utilizing the PNR calculated from wavelet analysis (Figure S3). The longest of all continuous frequencies was used to extract each feature.
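As a sketch of the morphometric step in panel B, the snippet below thresholds an eigenvector, keeps the largest (primary) domain, and derives three representative features (area, extent, eccentricity). The pipeline itself uses scikit-image region properties for a larger feature set; this numpy/scipy version, the z-score threshold of 2, and the function name are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def primary_domain_features(eigenvector, z_thresh=2.0):
    """A few morphometrics of the largest suprathreshold domain."""
    z = (eigenvector - eigenvector.mean()) / eigenvector.std()
    mask = z > z_thresh                                  # binarized thresholded region
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    primary = labels == (int(np.argmax(sizes)) + 1)      # largest (primary) domain
    rows, cols = np.nonzero(primary)
    bbox_area = (np.ptp(rows) + 1) * (np.ptp(cols) + 1)
    # eccentricity from eigenvalues of the pixel-coordinate covariance
    lam = np.sort(np.linalg.eigvalsh(np.cov(np.vstack([rows, cols]))))
    ecc = float(np.sqrt(1 - lam[0] / lam[1])) if lam[1] > 0 else 0.0
    return {"area": int(primary.sum()),
            "extent": primary.sum() / bbox_area,         # area / bounding-box area
            "eccentricity": ecc}
```

For a rectangular domain the extent is 1.0 by construction, and eccentricity grows as the rectangle becomes more elongated.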
Figure S5: Examples of control components, which yield artifact components similar to those in GCaMP recordings. A) Control components from 20 minutes of recording from cx3cr1 GFP (microglia; mGFP, left), aldh1 GFP (astrocyte; aGFP, center), and Black 6 (non-transgenic; Bl6, right) mice. Two IC examples from each control group corresponding to hemodynamics/neural activity (top) and artifacts (bottom). Artifacts chosen show a vascular and an other artifact commonly seen in GCaMP recordings. Data are presented with the same temporal and spatial representations as in Figure 1. B) Examples of control binarization of the eigenvector, showing only the windowed spatial representation of the key portions of the eigenvector. C) Examples of neural and artifact wavelet analysis shown as signal power-to-noise ratio (PNR) plots.
Figure S6: Strong machine learning performance across multiple classification algorithms. A) Receiver operating characteristic (ROC) plots for each classification algorithm used. The voting classifier was composed of the 4 other algorithms. The Random Forest Classifier (RFC; *) was used in all analyses in this paper. B) Histogram of human classification with the percent in each confidence bin (same data as Figure 5F, bottom). A log scale was used to highlight the low-percentage points. C) Histogram of human classification with the percent in each novel-classification bin. Classification bins are based on the percent of the 1000 trials in which each classification occurred correctly. True positive (TP), false positive (FP), false negative (FN), true negative (TN), human (h), machine (m)
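The comparison in panel A can be sketched with scikit-learn: individual classifiers plus a soft-voting ensemble, scored by ROC AUC. This is a toy reconstruction on synthetic features (the paper's voting classifier combines four algorithms; here only two are shown, and the dataset parameters are placeholders):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the component features of Fig. S4
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rfc = RandomForestClassifier(n_estimators=100, random_state=0)
vote = VotingClassifier(
    estimators=[("rfc", rfc), ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")  # soft voting averages the class probabilities

for name, clf in [("RFC", rfc), ("Voting", vote)]:
    clf.fit(Xtr, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    print(f"{name} ROC AUC: {auc:.3f}")
```

`roc_curve` from the same module yields the full ROC traces plotted in the figure.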
Figure S7: Vascular and other artifacts are more correlated to movement than neural components. A) All neural (blue), vascular (red), and other (orange) components and their correlations to the motion vector for each animal. B) Spatial location and corresponding correlation (green to pink) of each component to motion, grouped by classification and genetic background. Neural: left column; vascular: center column; other: right column. Top row: GCaMP; second row: mGFP; third row: aGFP; last row: Bl6
Figure S8: Mean filtration to minimize global slow oscillations seen in GFP control data. A) 30 sec examples of the global mean that was subtracted and stored at the start of the pipeline, before decomposition into eigenvectors, for GCaMP, mGFP, aGFP, and Bl6. B) Global wavelet spectrum (top) and its corresponding power-to-noise ratio (PNR; bottom) for GCaMP (N=4), mGFP (N=3), aGFP (N=3), and Bl6 (N=3); red indicates the frequencies removed by our applied high-pass filter. C) High-pass filtration results for the same 30 sec as in A.
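One plausible implementation of the 0.5 Hz high-pass filtration of the global mean time series is a zero-phase Butterworth filter; the paper does not specify the filter design, so the filter family and order below are assumptions:

```python
import numpy as np
from scipy import signal

def highpass(ts, fs, cutoff=0.5, order=4):
    """Zero-phase Butterworth high-pass filter of a 1-D time series."""
    sos = signal.butter(order, cutoff, btype="highpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, ts)  # forward-backward pass avoids phase lag
```

Applied to a slow drift plus a faster oscillation, the filter removes the drift while leaving the faster component essentially untouched, as in panel C.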
Figure S9 (Video): Re-addition of the global mean with high-pass filtering better represents neural signal. Rebuilt video from all components (left) or only neural components (center, right), with mean re-addition from the original global mean (left, center) or the global mean after 0.5 Hz high-pass filtration (right). All movies are on the same scale of change in fluorescence over mean fluorescence (rainbow colorbar).
Figure S10: Individual domain maps from animals used in this study. A) Domain maps generated from Snap25 GCaMP6s recordings from littermates (a-c) and from subsequent recordings (#). B) Domain maps generated from the three different control lines.
Figure S11 (Video): The mosaic movie represents the neural signal captured from time series extracted from domains across the cortical surface. Signal from the filtered video (left) is segmented by the data-driven domain map (left, overlay). Average time series from these segmented domains can be visualized as a mosaic movie, in which each domain is represented by its averaged time series. The filtered video contains 1.77 megapixels representing the cortical signal, while the mosaic movie contains only 300 unique time series describing the same signal, a 5,900-fold compression. Video is a real-time 1-minute excerpt. Values displayed are in dF/F.
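The mosaic construction described above reduces to replacing every pixel of a labeled domain with the domain's mean time series. A minimal sketch, assuming a `(time, y, x)` movie and an integer domain map with 0 as background (names and conventions are illustrative):

```python
import numpy as np

def mosaic_movie(movie, domain_map):
    """Replace each labeled domain's pixels with the domain's mean time series."""
    out = np.zeros_like(movie, dtype=float)
    for label in np.unique(domain_map):
        if label == 0:
            continue                                   # 0 = unlabeled background
        mask = domain_map == label
        ts = movie[:, mask].mean(axis=1)               # average time series of the domain
        out[:, mask] = ts[:, None]                     # broadcast back over the domain
    return out
```

The compression follows directly: only one time series per domain needs to be stored to reproduce the mosaic.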
Figure S12: Comparison of spatial and temporal information content through compression and filtering. A) The original spatial information captured as quantified by a mean subtracted absolute value projected spatially (left) or temporally (right). B) The difference in information between the original input data and the rebuilt ICA projection, excluding noise components beyond the 25% saved in the processed file. The difference movie is projected spatially or temporally to visualize where information was lost in compression. C) Information removed by artifact filter. The artifact movie is rebuilt and projected spatially or temporally to visualize where information was modified by the ICA-based artifact filter.
## Methods
Compression residuals are calculated while saving the ICA decomposition results. The original movie is rebuilt from the reduced ICA results, and residuals are calculated by taking the absolute value of the difference between the two videos (Fig. S12). The spatial and temporal projections of this absolute difference movie are saved as the spatial and temporal residuals of the decomposition and stored as metadata with each ICA decomposition.
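This residual calculation can be sketched in a few lines, assuming `(time, y, x)` movies; the mean projection used here is one reasonable reading of "projection" and is an assumption, as is the function name:

```python
import numpy as np

def compression_residuals(original, rebuilt):
    """Spatial and temporal projections of |original - rebuilt|."""
    diff = np.abs(original - rebuilt)      # absolute difference movie (time, y, x)
    spatial = diff.mean(axis=0)            # per-pixel residual map
    temporal = diff.mean(axis=(1, 2))      # per-frame residual trace
    return spatial, temporal
```

The two projections localize where (spatially) and when (temporally) information was lost in compression.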
<figure><img src="figs/methods-figureS12.png" width="400px"><figcaption>
**Figure S12** Comparison of spatial and temporal information content through compression and filtering. A) The original spatial information captured as quantified by a mean subtracted absolute value projected spatially (left) or temporally (right). B) The difference in information between the original input data and the rebuilt ICA projection, excluding noise components beyond the 25% saved in the processed file. The difference movie is projected spatially or temporally to visualize where information was lost in compression. C) Information removed by artifact filter. The artifact movie is rebuilt and projected spatially or temporally to visualize where information was modified by the ICA-based artifact filter.
</figcaption></figure>
### Domain residuals and domain signal analyses
To quantify the amount of signal present in the original movie that was not captured by the domain map, residuals were calculated by subtracting the mosaic movie (representing the time series of each spatial domain) from the original movie. The absolute value was then applied so that all values represented a positive difference, and the residuals were summed to create a single value. The mean time series was not re-added to either the original movie or the mosaic movie, since this can be easily summarized as a separate temporal metric. To represent the amount of variation relative to the original dataset, this value was compared to the summed absolute value of the mean-subtracted original movie.
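The relative residual described above reduces to one ratio. A sketch assuming `(time, y, x)` movies and per-pixel temporal mean subtraction for the reference term (that choice of mean, and the function name, are assumptions):

```python
import numpy as np

def relative_domain_residual(original, mosaic):
    """Summed |original - mosaic| as a fraction of the summed absolute
    value of the mean-subtracted original movie."""
    residual = np.abs(original - mosaic).sum()
    reference = np.abs(original - original.mean(axis=0)).sum()
    return residual / reference
```

A perfect mosaic gives 0, while a mosaic equal to each pixel's temporal mean gives 1, so the ratio reads directly as the fraction of variation the domain map fails to capture.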
**Acknowledgements** The authors acknowledge C. Santo Thomas for maintaining the lab mouse lines, and University of California Santa Cruz's Hummingbird Computational Cluster for support and node maintenance. This work was supported by startup funds from the University of California, Santa Cruz, Division of Physical and Biological Sciences, grants from the National Institutes of Health, USA (NIH T32 GM 133391) to S.C.W. and B.R.M., and by a Hellman Fellows Fund Award to J.B.A. Funding for D.A. was provided by the UCSC Maximizing Access to Research Careers (MARC) program (T32-GM007910) and the UCSC Initiative for Maximizing Student Development (IMSD) (R25-GM058903).
**Contributions** Code for ICA filtering, the exploratory GUI, map creation, and time series extraction and analysis was written by S.C.W. All recordings, metric extractions, mean frequency analysis, feature extraction and analysis, and the machine learning pipeline were performed by B.R.M. Optimization and determination of hyperparameters was done by D.A. J.B.A. oversaw the project and provided feedback on experimental design, results, and paper preparation. The manuscript was prepared by B.R.M. and S.C.W., with input from all authors.
**Competing Interests** The authors declare that they have no competing financial interests.
**Correspondence** Correspondence and requests for materials should be addressed to James B. Ackman (email: james.ackman@gmail.com) or Brian R. Mullen (email: brmullen@ucsc.edu).
## References
1. Cohen, L. B., Keynes, R. D. & Hille, B. Light scattering and birefringence changes during nerve activity. Nature 218, 438–41 (1968).
2. Grinvald, A., Lieke, E., Frostig, R. D., Gilbert, C. D. & Wiesel, T. N. Functional architecture of cortex revealed by optical imaging of intrinsic signals. Nature 324, 361–4 (1986).
3. Grinvald, A. & Hildesheim, R. VSDI: a new era in functional imaging of cortical dynamics. Nat Rev Neurosci 5, 874–85 (2004).