css img magnify

Commit 5242417514 by ackman678
2022-05-02 19:04:58 -07:00
parent 87f9c920b2
2 changed files with 45 additions and 37 deletions

@@ -115,6 +115,7 @@
* [x] Check when Fig. S6 is referred to in text
* [ ] Check the um/px value for the lateral spatial resolution
* [ ] Check colormap dots overlay in figureS7
* [ ] Check figure7 labels
* [ ] Check writing of figure7 legend
* [ ] Check Allen Brain atlas map31 in figure7 legend
* [ ] rw "to test the meaning of these maps"

main.md

@@ -24,7 +24,7 @@ Sydney C. Weiser¹•, Brian R. Mullen¹•, Desiderio Ascencio², & James B. Ac
## Abstract
Demixing neural signals from artifact signals in videos of neuronal calcium flux across the cerebral hemispheres could help map core functional features of cortical organization. Here we demonstrate that the general solution to multichannel source signal separation, independent component analysis, can optimally recover neural signal content in recordings of neuronal cortical calcium dynamics captured at a rate of 1.5×10⁶ pixels per one-hundred-millisecond frame for seventeen minutes. We show that a set of spatial and temporal metrics can be used to build a random forest classifier which separates neural activity and artifact components automatically at human performance. We show how this data produces a functional segmentation of the neocortical sheet, providing a map of 230±14 domains from which extracted time courses maximally represent the underlying signal in each recording. This workflow of data-driven video decomposition and machine classification of signal sources will aid high-quality mapping of complex cerebral dynamics.
@@ -74,11 +74,9 @@ These improved techniques for data pre-processing, spatial segmentation, and tim
Optical techniques have long been used to monitor the functional dynamics in sets of neuronal elements ranging from isolated invertebrate nerve fibers[^Cohen:1968][^Salzberg:1977] to entire regions of mammalian visual cortex in vivo[^Grinvald:1986][^Grinvald:2004][^Ackman:2012]. Imaging of calcium flux with calcium sensors[^Tsien:1989][^Chen:2013a] allows for transcranial neural activity monitoring across the cortical surface of the mouse with high enough spatiotemporal resolution to identify sub-areal networks of the neocortex[^Vanni2014][^Ackman2014c]. These techniques have the potential to map supracellular group function at unprecedented resolution and scale across the neocortical sheet in awake behaving mice; however, identifying neural signals from calcium imaging sessions is challenging due to numerous confounding signal sources.
<!-- Understanding cerebral dynamics at multiple scales is important for exploring how environmental and genetic influences give rise to altered neural connectivity patterns linked to behavioral phenotypes [^Ma:2016][^Kozberg:2016] -->
Wide-field cortical calcium imaging provides a unique combination of spatially and temporally resolved dynamics across the cortical surface, with scale ranging from complex activation patterns in high-order circuits, to discrete activations hundreds of micrometers in diameter, to whole cortical lobe activity patterns[^Vanni2014][^Ackman2014c]. However, it is affected by issues common to optical imaging recordings. Body or facial movements can create large fluctuations in the autofluorescence of the brain and blood vessels, which produce significant artifacts in the data. Vascular artifacts are commonly seen due to vasodynamics and the resulting changes in blood flow to meet the energy demands of surrounding tissue. Fluid exchange between vascular and neural tissue causes cortical hemodynamics, resulting in region-specific changes of optical properties among cerebral lobes[^Ma:2016]. Further, though the skull is fixed to a specific location during the experiment, slight brain movements occur within the cranium, thereby influencing the recordings. Any optical property differences that originate from the experimental preparation may be highlighted in the dataset as signal due to changes in tissue contrast.
Researchers have recorded wide-field calcium dynamics at frame rates ranging from 5-100 Hz[^Ackman2014c][^Murphy2016][^Valley2020]. In addition, spatial resolution varies between different researchers' setups, but is typically in the range of 256×256 to 512×512 pixels (0.06 to 0.2 megapixels) for the entire cortical surface, and is often further spatially reduced for processing[^Ackman2014c][^Murphy2016][^Allen2017]. Selection of resolution is often dependent on the observer's perceived quality of the video data or available computational resources, rather than a quantified comparison of signal content.
It is common to use sensory stimulation to identify specific regions in the neocortex and align a reference map based on the location of these defined regions[^Allen2017][^Vanni2017][^Clancy2019]. Even if these maps are reliable for locating primary sensory areas, they often lack specificity for higher order areas, or even completely lack sub-regional divisions. This is especially true in areas with a high degree of interconnectedness and overlapping functionality, such as motor cortex[^Mountcastle:1997]. Moreover, there is evidence that the shape and location of higher order regions can vary from subject to subject[^Zhuang:2017][^Glasser2016]. Improper map alignment or misinformed regional boundaries can lead to a loss in dynamic range between signals across a regional border. Thus, to extract the most information from a recorded dataset, the level of parcellation must reflect the quality and sources present within the data. A flexible, data-driven method is therefore necessary; it must also respect functional boundaries of the cortex and be sensitive to age, genotype, and individual variation.
@@ -88,6 +86,7 @@ Here we present an ICA-based workflow that isolates and filters artifacts from c
Further, we explore the resolution-dependent effect of signal extraction on ICA quality, and find a quantified increase in ICA signal separation for collecting wide-field calcium imaging at mesoscale resolution. Using neural components, we additionally generate data-driven maps that are specific to functional borders from individual animals. We use these maps to extract time series from functional regions of the cortex, and show that this method for time series extraction produces a reduced set of time series while optimally representing the underlying signal and variation from the original dataset. Together, these methods provide a set of optimized techniques for enhanced filtering, segmentation, and time series extraction for wide-field calcium imaging videos.
<!-- Understanding cerebral dynamics at multiple scales is important for exploring how environmental and genetic influences give rise to altered neural connectivity patterns linked to behavioral phenotypes [^Ma:2016][^Kozberg:2016] -->
## Results
@@ -107,19 +106,20 @@ A spatial ICA decomposition on a video, wherein the global mean was subtracted,
Neural components represent a distinct area of cortical tissue, which we refer to as its cortical domain. The spatial morphology of these neural components can vary in both spatial extent and eccentricity. Occasionally neural components can also contain multiple domains, which have similar enough activation patterns to be identified as a single neural component. In the examples in Fig. 1C, the second neural component appears to represent a higher order visual network, with multiple domains on the left hemisphere, and a small mirrored domain on the right hemisphere.
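As a rough sketch of this decomposition step (not the authors' exact pipeline), spatial ICA can be run with scikit-learn's `FastICA` on a flattened movie: rows are pixels, columns are frames, so the recovered sources are spatial eigenvectors and the mixing matrix holds the per-component time courses. The array sizes, random data, and component count below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical demeaned dF/F movie: (frames, height, width)
t, h, w = 1200, 64, 64
rng = np.random.default_rng(0)
movie = rng.standard_normal((t, h, w)).astype(np.float32)

# Flatten to (pixels, frames) and subtract the per-frame global mean,
# so ICA over the pixel dimension yields spatially independent maps.
X = movie.reshape(t, h * w).T
X = X - X.mean(axis=0, keepdims=True)

ica = FastICA(n_components=20, random_state=0, max_iter=500)
spatial_maps = ica.fit_transform(X)   # (pixels, k) spatial eigenvectors
time_courses = ica.mixing_            # (frames, k) ICA mixing matrix

maps = spatial_maps.T.reshape(-1, h, w)  # one image per component
```

Each row of `maps` can then be inspected as the component's cortical domain, with `time_courses[:, i]` as its activity trace.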
<div class="container"><figure><img src="figs/methods-figure1.png" width="400px"><figcaption>
**Figure 1** Transcranial calcium imaging video data is separated into its underlying signal and artifact components, and can be rebuilt from only signal components for artifact filtration. A) Recording schematic and fluorescence image of the transcranial calcium imaging preparation, cropped to cortical regions of interest. B) Sample video montage of raw video frames after dF/F calculation. C) ICA video decomposition workflow. A demeaned dF/F movie is decomposed into a series of statistically independent components that are either neural, artifact, or noise associated (not displayed). Each component has an associated time course from the ICA mixing matrix. Neural components can be rebuilt into a filtered movie (rICA). Alternatively, artifact components can be rebuilt into an artifact movie. Circular panels show higher-resolution spatial structure in the rightmost example components.
</figcaption></figure></div>
Artifact components can take many forms, including those from blood vessels, movement, and optical distortions on the imaging surface. The left two artifact examples (Fig. 1C) likely represent hemodynamics from the superior sagittal sinus vein, with the bottom artifact likely representing blood flow through the middle cerebral artery[^Xiong2017]. A very high resolution map of the vessel patterns can be rebuilt from these components, with branching structures as small as 12 µm in diameter (shown in Fig. 1C). Noise components lack a spatial domain and have little to no temporal structure. Signal and artifact components can be sorted manually in a graphical user interface (Fig. S1) or with a machine learning classifier.
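The rebuild step (rICA, or the complementary artifact movie) amounts to multiplying only the selected columns of the source and mixing matrices back together. A minimal sketch, assuming a synthetic data matrix and a hypothetical signal/artifact split:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical demeaned movie flattened to (pixels, frames)
p, t = 1024, 600
rng = np.random.default_rng(1)
X = rng.standard_normal((p, t)).astype(np.float32)

ica = FastICA(n_components=10, random_state=0, max_iter=500)
S = ica.fit_transform(X)   # (pixels, k) spatial components
A = ica.mixing_            # (frames, k) time courses

# Suppose components 0-6 were classified as neural, 7-9 as artifact.
signal_idx = np.arange(7)

# Rebuild the artifact-filtered (rICA) movie from signal components only;
# adding ica.mean_ restores the removed per-pixel offsets.
rebuilt = S[:, signal_idx] @ A[:, signal_idx].T + ica.mean_
```

Swapping `signal_idx` for the artifact indices would instead reconstruct the artifact movie, e.g. the high-resolution vessel map described above.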
<div class="container"><figure><img src="figs/methods-figureS1.png" width="400px"><figcaption>
**Figure S1** A Tkinter-based graphical user interface (GUI) for browsing independent component analysis results. A) 15 independent components, ordered 60-74 by variance. Components displayed in grey are selected as artifact either manually or using a machine learning classifier. A click on the display for any given component manually toggles its classification as either signal or artifact associated. Components colored in the cool/warm colormap are signal associated. Components colored in the black/white colormap are artifact associated. Buttons on the bottom panel control GUI movement through the dataset. The text panel at the bottom displays the index of the signal/noise cutoff. B) The component viewer displays additional temporal metrics about any given component. The top controls allow movement through the dataset by manual scrolling with (+/-) buttons, up/down keys, or by typing a desired component in the text box. The PC timecourse displays the mixing matrix timecourse extracted by ICA for the given component. The wavelet power spectrum is displayed in the bottom right, and an integrated wavelet or Fourier representation is available on the bottom left. 0.95 significance, as estimated by the AR(1) red-noise null hypothesis, is displayed as a dot-dash line. C) The domain map correlation page shows the Pearson cross correlation value between a selected seed domain and every other domain detected on the cortical surface. The seed domain can be changed through the arrow keys, the (+/-) buttons, or by clicking on a different domain on the displayed domain map. D) The component region assignment page allows manual region assignment for each domain. After the region is selected from the menu on the right, each domain clicked on the domain map is assigned to that region.
</figcaption></figure></div>
@@ -147,10 +147,15 @@ To automate this sorting process, a two-peak kernel density estimator (KDE) was
To test how ICA component separation is affected by spatiotemporal resolution and video duration, we altered properties of the input video and observed its effects on the quality of signal separation through lag-1 autocorrelation distributions. Reducing the spatial resolution resulted in a steady decrease in peak separation, until the dual-peaked structure collapsed at a resolution of 41 µm pixel width (Fig. 2B).
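The lag-1 autocorrelation cutoff can be sketched as follows: compute lag-1 autocorrelation per component time course, fit a kernel density estimate over the values, and take the valley between the two peaks as the signal/noise cutoff. The synthetic "signal-like" and "noise-like" traces below are illustrative assumptions, not real components.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D time course."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(0)
# Smooth traces (random walks) have lag-1 near 1; white noise near 0.
smooth = np.cumsum(rng.standard_normal((100, 2000)), axis=1)
noise = rng.standard_normal((100, 2000))
ac = np.array([lag1_autocorr(x) for x in np.vstack([smooth, noise])])

# KDE over the lag-1 values; the deepest valley between the two
# peaks serves as the signal/noise cutoff.
grid = np.linspace(-0.2, 1.0, 500)
density = gaussian_kde(ac)(grid)
minima = argrelextrema(density, np.less)[0]
cutoff = grid[minima[np.argmin(density[minima])]]
```

Components with lag-1 autocorrelation above `cutoff` would be kept as candidate signal; the rest are treated as noise.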
<div class="container"><figure>
<img src="figs/methods-figure2.png" width="400px"><figcaption>
**Figure 2** ICA decomposition quality is sensitive to recording duration, spatial and temporal resolution. A) Distributions for lag-1 autocorrelation (black) and temporal variance (purple) are displayed for components 1-1200. A dotted line represents the cutoff determined from the distribution in the right panel. In the right panel, a horizontal histogram of the lag-1 autocorrelation with a two-peaked kernel density estimator (KDE) fit reveals a two-peaked histogram, summarized by a barbell line. Group data for each peak, as well as the central cutoff value, is summarized by the boxplots on the right (n=16 videos; from 8 different animals). B) Two-peaked KDE fits of horizontal histogram distributions under various spatial downsampling conditions, with barbell summary lines on the right. After spatial resolution decreases beyond 41 µm pixel width (px), this two-peak structure collapses, and an x denotes the primary histogram peak. C) Two-peaked KDE fits of horizontal histogram distributions under various temporal downsampling conditions, with barbell summary lines on the right. D) Component stabilization for different length video subsets of six 20-minute video samples (n=6 videos from 3 different animals). Individual thin lines show polynomial fits to signal or artifact components under each time condition. Thick lines denote the curve fit of the mean number of components in each category across these six experiments. The group distribution of components at 20 minutes is summarized by the boxplot on the right (n=16 videos; from 8 different animals).
</figcaption></figure></div>
Increasing the sampling rate above 10 Hz showed little to no effect on the peak-to-peak distance (Fig. 2C; ∆pp < 0.01), and a slight decrease in the autocorrelation of the primary peak (∆p1 = 0.03). However, temporal downsampling below 10 Hz resulted in a shifting of the signal and noise peaks (∆p1 = 0.06), and a reduction in the peak-to-peak distance (∆pp = 0.02). This result agrees with previous analyses that found 10 Hz to be the maximal sampling frequency required for measuring population calcium dynamics[^Vanni2017]. Together, these findings suggest that the separation quality of our captured dynamics is highly sensitive to spatial resolution, and not as sensitive to temporal resolution. We considered collecting spatial samples at higher than our current resolution of 6.9 µm/px, but computing decompositions on datasets this large would push the limits of available computing.
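Downsampling experiments like those above can be implemented as block averaging over space and time. A small sketch (the factors and toy movie are illustrative; the paper does not specify this exact implementation):

```python
import numpy as np

def downsample(movie, s_factor=2, t_factor=2):
    """Block-average a (frames, h, w) movie in space and time."""
    t, h, w = movie.shape
    t2, h2, w2 = t // t_factor, h // s_factor, w // s_factor
    # Trim ragged edges so the array tiles evenly into blocks.
    m = movie[:t2 * t_factor, :h2 * s_factor, :w2 * s_factor]
    m = m.reshape(t2, t_factor, h2, s_factor, w2, s_factor)
    return m.mean(axis=(1, 3, 5))

movie = np.arange(8 * 4 * 4, dtype=float).reshape(8, 4, 4)
small = downsample(movie, s_factor=2, t_factor=2)  # (4, 2, 2)
```

Running ICA on progressively downsampled copies of the same recording then lets the lag-1 autocorrelation peak separation be compared across resolutions.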
@@ -162,20 +167,20 @@ To determine the ideal duration of video collected, we calculated the number of
Using the ICA decomposition from 20 minute duration videos, we inspected each set of experimental components from both controls and GCaMP6 expressing mice to classify each as neural or artifact (Fig. 3A). The artifacts were further distinguished between vascular and other for descriptive purposes. Neural components typically have a globular spatial representation with highly dynamic properties. Vascular artifact components can be easily visually identified by their vascular-like spatial representation. Other artifact components that are commonly seen are movement or preparation artifacts. These typically have a diffuse spatial representation with smaller or sparse temporal activations. We manually scored each component in the dataset as an artifact (vascular or other) or neural component (Fig. 3B). From all the GCaMP experiments, an average of 73.5 ± 5.9% of the components were identified as neuronal, while the remaining 26.5 ± 6.3% were artifact (vascular: 8.7 ± 2.7%; other: 17.6 ± 7.1%). GCaMP mice had substantially higher numbers of neural components compared to the controls, with four times as many as the GFP mouse lines and six times the number in Bl6 mice (mean number of neural components GCaMP: 235, mGFP: 62, aGFP: 54, Bl6: 39).
<div class="container"><figure><img src="figs/methods-figure3.png" width="400px"><figcaption>
**Figure 3** Class identity cannot be established by any individual extracted feature. A) Examples of independent components of neural (n) signal, vascular (v) artifacts, and other (o) artifacts. Components are defined by both the spatial representation (eigenvector) and its temporal fluctuations. Circular windows magnify key portions of the eigenvector. Eigenvector values are represented by a colormap from blue to red. The temporal representation is in relative intensity (black time course under the eigenvector); only 1 minute of the full 20 minutes is shown. B) A comparison of the number of neural signal (GCaMP: dark blue; controls: light blue) and artifact components (vascular: red; other: orange) with each animal shown (GCaMP components: N=12 animals, n=3851; mGFP components: N=3, n=484; aGFP components: N=3, n=442; WT components: N=3, n=229). C) Examples of binarization of the eigenvector. The histogram shows the full distribution of eigenvector values. The dynamic threshold method to generate binarized masks was used to identify the high eigenvector signal pixels (yellow) against the Gaussian background (blue). The windowed spatial representation shows binarization of the key portions of the eigenvector. D) Examples of neural and artifact wavelet analysis shown in the signal power to noise ratio (PNR) plots. A 95% red-noise cutoff was used to create the signal to noise ratio (black dashed lines). E) Histograms of example spatial metrics derived from GCaMP eigenvector values, F) morphometrics from the shape of the binarized primary region, G) temporal metrics derived from relative temporal intensities, H) frequency metrics derived from the PNR.
</figcaption></figure></div>
We extracted spatial and morphological metrics of the neural and artifact components to characterize spatial feature differences (Fig. 3C). We can pull general spatial intensity metrics, like global minimum and maximum, from each component's spatial eigenvector. Further, the largest eigenvector values correspond to the regions that have the most dynamic change after the video rebuild. Given that the shapes of the high pixel intensity values are used by humans to identify their classification, we decided on a dynamic thresholding technique to binarize the eigenvector. When examining the histogram of intensity values of the neural eigenvector, there is a large population of pixels centered around zero with a single long tail. We identified all pixels that were unique to the long tail by excluding all values that lie within the range of the shorter Gaussian tail. From these binarized masks, morphometrics of each primary region of the component can then be quantified, such as the axis lengths or eccentricity of the shape.
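A sketch of this dynamic thresholding, assuming the described eigenvector shape (Gaussian background around zero plus a long positive tail): mirror the extent of the shorter tail onto the long-tailed side, binarize, and measure the largest connected region with scikit-image. The synthetic eigenvector here is an assumption for illustration.

```python
import numpy as np
from skimage import measure

rng = np.random.default_rng(0)
# Hypothetical eigenvector: Gaussian background plus one bright blob
ev = rng.normal(0.0, 1.0, (64, 64))
yy, xx = np.mgrid[:64, :64]
ev += 8.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)

# Dynamic threshold: keep pixels beyond the reach of the shorter
# (negative) Gaussian tail, mirrored onto the positive side.
thresh = np.abs(ev.min())
mask = ev > thresh

# Morphometrics of the largest (primary) connected region
labels = measure.label(mask)
primary = max(measure.regionprops(labels), key=lambda r: r.area)
# e.g. primary.area, primary.eccentricity, primary.major_axis_length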
<div class="container"><figure><img src="figs/methods-figureS3.png" width="400px"><figcaption>
**Figure S3** The Morlet wavelet transform can be used to create a signal to noise ratio that indicates frequencies with a high likelihood of signal. A) Example neural time series, 90 sec of data recorded at 10 Hz, reported in the temporal portion of a component. B) A Morlet wavelet (=4) was used for the wavelet transform. C) The power spectra of the wavelet transform (colorbar, purple to yellow) and the global spectral analysis (black, right). The 95% quantile is shown as dashed lines on the global spectral analysis. Reformatting the frequency spacing produces E. D) A signal to noise power ratio is calculated by dividing the power spectra by the 95% quantile. All values above 1 (dashed line) indicate a high probability of signal; anything below 1 would most likely be considered noise (colorbar, green to red). Reformatting the frequency spacing produces F.
</figcaption></figure></div>
We characterized the temporal dynamics of each component by extracting features from the corresponding temporal fluctuations and their frequency analysis (Fig. 3D). The relative intensity fluctuations allow us to pull out temporal features of each component, like the standard deviation and global maxima/minima of each component's contribution. We performed wavelet analysis on these time series to characterize only highly significant frequencies (Fig. S3). We calculated a power signal to noise ratio (PNR) with the 95% quantile of red noise defined by the autocorrelation value of each time series. With this ratio, significant frequencies resulted in a value above 1.
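The red-noise significance test can be sketched in the Fourier domain (a simplification of the wavelet-based PNR in the text, following the standard AR(1) red-noise background with a chi-square 95% factor): estimate lag-1 autocorrelation, build the theoretical red-noise spectrum, and divide the observed power by the 95% cutoff. The test series and scaling choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def power_to_noise_ratio(x, fs=10.0):
    """Fourier power over the 95% AR(1) red-noise background.
    Ratio > 1 marks frequencies with a high likelihood of signal."""
    x = x - x.mean()
    n = len(x)
    power = np.abs(np.fft.rfft(x)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # lag-1 autocorrelation sets the red-noise background shape
    alpha = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    k = np.arange(len(freqs))
    red = (1 - alpha**2) / (1 + alpha**2 - 2 * alpha * np.cos(2 * np.pi * k / n))
    red = red * x.var()  # scale background to the series variance
    cutoff95 = red * chi2.ppf(0.95, df=2) / 2
    return freqs, power / cutoff95

rng = np.random.default_rng(0)
t = np.arange(2000) / 10.0  # 200 s at 10 Hz
series = np.sin(2 * np.pi * 1.0 * t) + 0.5 * rng.standard_normal(2000)
freqs, pnr = power_to_noise_ratio(series, fs=10.0)
```

For this synthetic trace the PNR should rise well above 1 near the embedded 1 Hz oscillation while staying near or below 1 elsewhere.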
@@ -185,19 +190,19 @@ After extracting these metrics, we then compared the diverse populations of sign
<div class="container"><figure><img src="figs/methods-figureS4.png" width="400px"><figcaption>
**Figure S4** Spatial, morphometric, temporal, and frequency features extracted from components. A) Spatial metrics from statistical characteristics of each eigenvector (the spatial representation of the component). The histogram of all eigenvector values is shown to the right of the eigenvector. B) Morphometrics collected from the binarized, thresholded masked region of the eigenvector. The largest (primary) domain was used to generate the features for each eigenvector. The majority of the metrics calculated utilize scikit-image region properties. C) Temporal metrics are statistical descriptors from the corresponding row of the mixing matrix for each eigenvector. D) Frequency analysis was done on the mixing matrix row, utilizing the PNR calculated from wavelet analysis (Figure S3). The longest of all continuous frequencies was used to extract each feature.
</figcaption></figure></div>
Control neural components are not distinct globular regions like those from the GCaMP line (Fig. S5); rather, they had co-activity with vascular units in the centers of their domains. This resulted in the thresholded regions being more similar to the vascular artifacts seen in GCaMP components. However, we were still able to find example components that only had vascular spatial representation without the surrounding tissue activation. Finally, we found similar artifacts of the other category in the control data that are also present in the GCaMP sets of independent components.
<div class="container"><figure><img src="figs/methods-figureS5.png" width="400px"><figcaption>
**Figure S5** Examples of control components, resulting in artifact components similar to GCaMP recordings. A) Control components from 20 minutes of recording from cx3cr1 GFP (microglia; mGFP, left), aldh1l1 GFP (astrocyte; aGFP, center), and Black 6 (non-transgenic; Bl6, right) mice. Two IC examples from each control group corresponding to hemodynamics/neural activity (top) and artifacts (bottom). The artifacts chosen show a vascular and an other artifact commonly seen in GCaMP recordings. Temporal and spatial representations are described as in Figure 1. B) Examples of control binarization of the eigenvector, showing only the windowed spatial representation of the key portions of the eigenvector. C) Examples of neural and artifact wavelet analysis shown in the signal power to noise ratio (PNR) plots.
</figcaption></figure></div>
@@ -212,10 +217,10 @@ To investigate how well these metrics captured features of each component, we ex
Thresholded GCaMP neural components have high densities in the olfactory bulbs and posterolateral portions of the cortex, including the visual, auditory, and somatosensory systems. There is less dense localization of centroids along the anteromedial portions of the cortex, including the motor and retrosplenial cortices. Further, in both the GCaMP and control mice, we see the majority of artifact components localize along anatomical brain vasculature. The major venous systems, including the rostral rhinal vein, the superior sagittal sinus, and the transverse sinus, all show high densities of artifact centroid locations[^Xiong2017]. The cerebral arteries are less consistent in localizing the primary domain of their respective components. We see that many of the other artifacts align with the sagittal and lambda cranial sutures[^Wei2017].
<div class="container"><figure><img src="figs/methods-figure4.png" width="400px"><figcaption>
**Figure 4** Spatial thresholding and frequency data reliably produce neural metrics. A) Individual experiment preparation with corresponding spatial footprints by class of component: GCaMP neural (dark blue), control neural (light blue), vascular (red), other (orange). B) All model experiments (N=12) with the corresponding centroid location for each class of metrics. Histograms show the resulting average distribution of spatial locations across the field of view (error bars are the standard deviation between experiments). C) Individual experiment (same as A), where components are sorted by temporal variance. PNR mapped to each component and organized by component classification. D) Main frequencies seen in each component class between each experimental condition. Dotted lines represent the mean within each animal, where the gray around the mean corresponds to the standard deviation of that animal. The colored line corresponds to the grand mean across experiments. E) Relative position of the types of components between experiments and transgenic model, shown as the average and standard deviation between experiments. F) The percent of components with footprints and frequency data above the noise cut-off, separated by component type and experimental condition.
</figcaption></figure></div>
We investigated the effects of wavelet analysis on feature generation by sorting each component within an experiment by its temporal standard deviation value. We then ordered each class of components based on variance and displayed a grayscale heatmap of the significant frequencies in each component across experimental conditions (Fig. 4C). Taking the average of the global wavelet spectrum across each experiment highlighted the prominent frequencies seen in each classification (Fig. 4D). Prominent GCaMP frequencies are between 0.3 and 3.5 Hz, whereas control dynamics are typically seen between 0 and 1 Hz. Vascular components tend to have the same frequencies as their neural counterparts, while other components typically have faster frequencies (above 3 Hz), most likely due to motion during the recordings.
@@ -234,10 +239,10 @@ To build a classifier, we identified metrics that distinguish between neural and
We trained the random forest classifier with all metrics to identify the importance of each feature (Fig. 5A, middle). The features with the greatest t-statistic magnitude had the most importance for proper classification. In particular, spatial and morphological metrics were found to have the highest relative importance for component classification. The final list of 10 feature metrics utilized in the machine learning process is shown (Fig. 5A, bottom).
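The relative importance ranking can be obtained directly from scikit-learn's `RandomForestClassifier` via `feature_importances_`. A minimal sketch on synthetic data (the feature matrix, label rule, and tree count are illustrative assumptions, not the paper's dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical component-by-feature matrix: 10 metrics per component
rng = np.random.default_rng(0)
n = 600
X = rng.standard_normal((n, 10))
# Labels driven mostly by feature 0, weakly by feature 1 (toy rule)
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(n)) > 0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Importances sum to 1; sorting ranks metrics for feature selection
order = np.argsort(clf.feature_importances_)[::-1]
```

On this toy data the ranking should place the driving feature first, mirroring how the spatial and morphological metrics dominated in the real feature set.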
<div class="container"><figure><img src="figs/methods-figure5.png" width="400px"><figcaption>
**Figure 5** Spatial and morphological metrics are most important to classify components at 97% on novel experimental data. A) Correlation and t-statistic between artifact and neural components for each feature (N=7, n=2190). Spatial (circles), morphological (triangles), temporal (diamonds), and frequency (squares) metrics plotted. Cutoff values that aided the selection process are shown as dotted lines, rejected values in gray. Closed points are components that meet requirements. Relative importance from the random forest classifier is plotted against each metric by its respective class. Selected metrics are shown in the list within each type of feature, sorted by greatest t-statistic magnitude. B) The dataset was parsed into an ML modeling dataset (N=7, n=2190) used to establish the machine learning pipeline and a novel dataset (N=5, n=1661) of full experiments that did not influence the classifier. Modeling data was stratified in a 70/30 split based on classification. 1000 iterations of training the machine learning classifier on selected metrics and validating the machine classification against human classifications. C) Performance of the ML training, using subsets of the ML modeling dataset. 1000 iterations resulted in accuracy, precision, and recall boxplots. D) 1000 iterations of training on the full ML building dataset were performed, and the novel dataset was assessed on its performance. F) SVD projection of metric data with human classification mapping (top) and the confidence of the ML classifier (bottom). E) Performance of the full classifier on each of the novel datasets, animals plotted separately, showing the distribution of the 1000 different trained classifiers. G) Approximate locations of false negatives and positives from the novel datasets.
</figcaption></figure></div>
@@ -248,10 +253,10 @@ We utilized the common approach of hiding a portion of the data from the learnin
After establishing the efficacy of the classifier, we set out to assess the full classifier based on all data points from the machine learning dataset. We projected all features onto the first two components of a singular value decomposition (SVD), mapping both the human classification and the mean classifier confidence for 1000 iterations (Fig. 5F). As expected, we saw distinct neural and artifact clusters in feature space. Interestingly, the two different types of artifacts also separated into distinct portions of the projected feature space. The confidence of the classifier showed very few components between the extremes, illustrated by the top binned confidence value distributions for each human classification (Fig. 5F). We found 71.2 ± 0.2% of components were binned in highly confident values for neural signals (left-most bin), and 22.7 ± 0.2% were binned in highly confident values for artifacts (rightmost bin) (7.0 ± 0.2% for vascular; 15.7 ± 0.1% for other). This indicates that the classifier exhibits reliable confidence in the decision boundaries (Fig. S6B).
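The 2-D SVD projection of the feature matrix can be sketched with `TruncatedSVD` after standardizing the features; the two synthetic clusters stand in for the neural and artifact groups (an illustrative assumption, not the real metric data):

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature matrix: two classes with offset means
X = np.vstack([rng.normal(0, 1, (300, 10)),   # "neural"
               rng.normal(3, 1, (200, 10))])  # "artifact"
labels = np.array([0] * 300 + [1] * 200)

# Standardize, then project onto the first two SVD components
Z = StandardScaler().fit_transform(X)
proj = TruncatedSVD(n_components=2, random_state=0).fit_transform(Z)
```

Coloring `proj` by human label (or by mean classifier confidence) reproduces the kind of cluster map shown in Fig. 5F.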
<figure><img src="figs/methods-figureS6.png" width="400px"><figcaption>
<div class="container"><figure><img src="figs/methods-figureS6.png" width="400px"><figcaption>
**Figure S6** Strong machine learning performance across multiple classification algorithms. A) Receiver operating characteristic (ROC) plots for each classifying algorithm utilized. The voting classifier was composed of the four other algorithms. The random forest classifier (RFC; \*) was used in all analyses in this paper. B) Histogram of human classification with the percent in each confidence bin (same data as Figure 5F, bottom). A log scale was used to highlight the low-percentage points. C) Histogram of human classification with the percent in each binned novel classification. Classification bins are based on the percent of the 1000 trials in which each classification occurred. True positive (TP), false positive (FP), false negative (FN), true negative (TN), human (h), machine (m).
</figcaption></figure>
</figcaption></figure></div>
To assess the efficacy of this classifier, we then tested the 1000 trained iterations on novel data: completely new experiments that were not involved in training the classifier (Fig. 5D). We plotted the resulting 1000 iterations for each experiment separately (Fig. 5E). Notably, we found that the overall results were comparable to the subset classifier: mean accuracy of 96.9%, mean precision of 98.0%, and mean recall of 97.6%. From the histogram of classification frequency, we found results similar to the confidence of the classifier (Fig. S6C). Among all components, 69.7±2.0% were confidently classified by machine learning as neural signal (left-most bin), while 25.8±1.4% were confidently classified as artifact (right-most bin; 7.5±0.4% vascular; 18.2±1.6% other). The remaining 5% were misclassified. We investigated the locations of the components that were consistently false positives or false negatives (Fig. 5G). The majority of these components were either on the edge of the region of interest for the cortical hemispheres or within the olfactory bulb.
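A minimal sketch of this train-on-modeling, score-on-novel loop, assuming a scikit-learn random forest and binary neural/artifact labels (the helper name, iteration count, and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate_on_novel(train_X, train_y, novel_X, novel_y, n_iter=100, seed=0):
    """Repeatedly fit on the modeling dataset and score each fit on
    experiments never seen during training; returns mean
    (accuracy, precision, recall) across iterations."""
    scores = []
    for i in range(n_iter):
        clf = RandomForestClassifier(n_estimators=100, random_state=seed + i)
        clf.fit(train_X, train_y)
        pred = clf.predict(novel_X)
        scores.append((accuracy_score(novel_y, pred),
                       precision_score(novel_y, pred),
                       recall_score(novel_y, pred)))
    return np.mean(scores, axis=0)
```

Because the novel experiments never touch the training loop, the resulting scores estimate generalization rather than fit quality.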
@@ -261,16 +266,18 @@ To assess the efficacy of this classifier, we then tested 1000 iterations of nov
Removal of artifact components ensures that neural signals are the dominant signal after rebuilding with identified neural components; however, during reconstruction of ICA data, the global mean must be re-added. Thus, we examined the influence of removing artifact components on the global mean and how filtration of the global mean should be handled. For example, vascular artifacts associated with the superior sagittal sinus contribute to the global mean and increase the range of signals recorded during periods of motion (Fig. S7). Assessment of the global mean from GFP control experiments showed pronounced signal in these slower frequency oscillations, suggesting the use of a high-pass filter. Indeed, we found that application of a high-pass filter with a 0.5 Hz cutoff minimizes these types of global slow oscillations (Fig. S8). This type of filtration should not be applied to each component individually, as there are regional networks reliant on these slower oscillations²⁰. Removal of these low frequencies from the global mean improved identification of the cortical patch signal sources that contribute to neural activation (Fig. S9 video).
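The 0.5 Hz high-pass step applied to the global mean might look like the following, assuming a 10 Hz frame rate (the 100 ms frames described in the abstract) and a zero-phase Butterworth filter; the filter order is an assumption:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_global_mean(mean_trace, fs=10.0, cutoff=0.5, order=4):
    """Zero-phase high-pass filter applied only to the global mean trace
    before it is re-added during reconstruction (sketch)."""
    # Normalize the cutoff to the Nyquist frequency for scipy's butter()
    b, a = butter(order, cutoff / (fs / 2.0), btype="highpass")
    return filtfilt(b, a, mean_trace)  # filtfilt avoids phase distortion
```

Applying the filter to the global mean alone, rather than per component, matches the text's caution about regional networks that rely on slow oscillations.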
<figure><img src="figs/methods-figureS7.png" width="400px"><figcaption>
<div class="container"><figure><img src="figs/methods-figureS7.png" width="400px"><figcaption>
**Figure S7** Vascular and other artifacts are more correlated to movement than neural components. A) All neural (blue), vascular (red), and other (orange) components and their correlation to the motion vector for each animal. B) Spatial location and corresponding correlation (green to pink) of each component to motion, based on their respective classification and genetic background. Neural: left column; vascular: center column; other: right column. Top row: GCaMP; second row: mGFP; third row: aGFP; last row: Bl6.
</figcaption></figure>
</figcaption></figure></div>
<figure><img src="figs/methods-figureS8.png" width="400px"><figcaption>
<div class="container"><figure><img src="figs/methods-figureS8.png" width="400px"><figcaption>
**Figure S8** Mean filtration minimizes the global slow oscillations seen in GFP control data. A) 30 s examples of the global mean that was subtracted and stored at the initiation of the pipeline, before the decomposition into eigenvectors, for GCaMP, mGFP, aGFP, and Bl6. B) Global wavelet spectrum (top) and its corresponding power-to-noise ratio (PNR; bottom) for GCaMP (N=4), mGFP (N=3), aGFP (N=3), and Bl6 (N=3); red indicates the frequencies omitted by our applied high-pass filter. C) High-pass filtration results for the same 30 s shown in A.
</figcaption></figure>
</figcaption></figure></div>
---
<figure>
<!-- <img src="figs/methods-figureS9.png" width="400px"> -->
@@ -286,10 +293,10 @@ Removal of artifact components will ensure that neural signals are the dominant
In addition to their applications for filtering, the components are also a rich source of information about spatial distributions of signal within the cortex. Components across the cortex show a wide diversity of spatial characteristics, each representing an independently detected unit of signal. We use the spatial domain footprints of each signal component to create a data-driven domain map of the cortical surface by taking a maximum projection through each component layer (Fig. 6A). For analysis, 8 maps were created, with an average of 230 ± 14 detected activation domains (Fig. S10).
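The maximum projection through the component stack reduces, in essence, to a per-pixel argmax over component weights; this sketch omits the blurring and any thresholding the pipeline may apply:

```python
import numpy as np

def domain_map(components):
    """Assign each pixel to the component with the largest spatial weight,
    i.e. a maximum projection through the component stack.

    components: (n_components, height, width) spatial component images.
    Returns an integer label image of detected activation domains.
    """
    # abs() so that sign-ambiguous ICA components compete on magnitude
    return np.argmax(np.abs(components), axis=0)
```

Each label in the returned image is one activation domain, from which a mean time course can later be extracted.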
<figure><img src="figs/methods-figure6.png" width="400px"><figcaption>
<div class="container"><figure><img src="figs/methods-figure6.png" width="400px"><figcaption>
**Figure 6** Time series extracted from domain maps outperform time series generated from other downsampling methods. A) Schematic of domain map creation. A maximum projection is taken through each blurred signal component to form a domain map. Mean time courses are then extracted from the domains of the map. B) Example of a mosaic movie frame rebuilt with each downsampling technique. The non-downsampled filtered movie is shown on the left, with subsequent downsampling based on domain, grid, or Voronoi maps. C) Percent of the total signal of the filtered video represented by extracted time courses. The percent of overall video signal captured in domain maps was calculated for each animal (green circle; N=8) and compared to the signal content from a domain map generated from a separate video of the same animal (green triangle). Percent of total signal represented by time courses extracted from grid (blue square) or randomly generated (blue diamond) maps were compared as controls. In the right panel, the percent signal relative to the domain map percent signal is summarized in a box plot. D) Variation between time courses extracted with each map method was quantified as a sum signal variation for each experiment. In the right panel, the sum signal variation for each comparison map relative to the optimized domain map sum signal variation is summarized in a box plot.
</figcaption></figure>
</figcaption></figure></div>
@@ -299,10 +306,10 @@ At full resolution, there are approximately 1.5 million pixels along the surface
To test how well the full filtered video was represented in these time series, we rebuilt mosaic movies, in which each domain is represented by its mean extracted signal at any given time point (Fig. 6B, Fig. S11 video). By comparing the borders of the large higher-order visual activation, the data appear visibly more distorted in the Voronoi and grid reconstructions. To numerically test whether this method of time course extraction was superior to alternate methods, we also compared mosaic movies rebuilt with either grid or Voronoi maps.
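Rebuilding a mosaic movie from a label map amounts to broadcasting each domain's mean time course back onto its pixels; a minimal sketch (the function name is hypothetical):

```python
import numpy as np

def mosaic_movie(movie, labels):
    """Rebuild a movie where every pixel of a domain carries that domain's
    mean time course.

    movie:  (T, H, W) filtered video
    labels: (H, W) integer domain map
    """
    T = movie.shape[0]
    flat = movie.reshape(T, -1)
    lab = labels.ravel()
    out = np.empty_like(flat)
    for d in np.unique(lab):
        mask = lab == d
        # Replace each domain's pixels with the domain-mean trace
        out[:, mask] = flat[:, mask].mean(axis=1, keepdims=True)
    return out.reshape(movie.shape)
```

The same routine works for grid or Voronoi label maps, which is what makes the head-to-head comparison in Fig. 6B possible.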
<figure><img src="figs/methods-figureS10.png" width="400px"><figcaption>
<div class="container"><figure><img src="figs/methods-figureS10.png" width="400px"><figcaption>
**Figure S10** Individual domain maps from animals used in this study. A) Domain maps generated from Snap25 GCaMP6s recordings from littermates (a-c) and from subsequent recordings (#). B) Domain maps generated from the three different control lines.
</figcaption></figure>
</figcaption></figure></div>
The residuals between the mosaic movies and the filtered movies were compared to the total spatial variation in the filtered movie to quantify the amount of total signal represented by the extracted time courses (Fig. 6C, left). In nearly every experiment, the optimized domain map performed better than any other time course extraction method, accounting for 68 ± 1.2% of the total spatial signal in the filtered video (n=8). Domain maps generated from different videos of the same animal performed nearly as well as the optimized domain maps created from the compared video itself (Fig. 6C, right). These maps performed significantly better than the grid maps (p = 0.01), and much better than the Voronoi maps (p < 0.001).
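One plausible reading of this percent-signal metric compares the mosaic residual to the total spatial variation of the filtered movie; the exact formula is an assumption, shown here for illustration:

```python
import numpy as np

def percent_signal_captured(filtered, mosaic):
    """Percent of the filtered movie's spatial variation reproduced by the
    domain-mean mosaic: 100 * (1 - |residual| / |spatial variation|).

    filtered, mosaic: (T, H, W) movies of identical shape.
    """
    resid = np.abs(filtered - mosaic).sum()
    # Spatial variation: deviation of each pixel from its frame mean
    total = np.abs(filtered - filtered.mean(axis=(1, 2), keepdims=True)).sum()
    return 100.0 * (1.0 - resid / total)
```

A perfect mosaic scores 100%, while a mosaic that flattens each frame to its mean scores 0%, bracketing the ~68% figure reported above.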
@@ -325,11 +332,11 @@ Domains were then manually sorted into regions, with informed decisions based on
To assess the consistency of these maps, a series of comparisons was performed. Pairs of maps were overlaid on top of each other (Fig. 7F-H), and every domain was compared to its nearest domain in the comparison map. The Jaccard overlap was calculated for each of these domain pairs and quantified for each pair of map comparisons. As a null hypothesis, randomly generated Voronoi maps were also compared.
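The nearest-domain Jaccard comparison can be sketched as follows; centroid distance is assumed as the "nearest domain" criterion, which the text does not specify:

```python
import numpy as np
from scipy import ndimage

def jaccard(mask_a, mask_b):
    """Jaccard index (intersection over union) of two boolean domain masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def nearest_domain_overlap(map_a, map_b):
    """For every domain in map_a, find the map_b domain with the nearest
    centroid and return the Jaccard overlap of each pair (sketch)."""
    labels_a = list(np.unique(map_a))
    labels_b = list(np.unique(map_b))
    cent_a = ndimage.center_of_mass(np.ones_like(map_a), map_a, labels_a)
    cent_b = np.array(ndimage.center_of_mass(np.ones_like(map_b), map_b, labels_b))
    overlaps = []
    for la, ca in zip(labels_a, cent_a):
        # Nearest comparison domain by centroid distance
        lb = labels_b[np.argmin(np.linalg.norm(cent_b - np.array(ca), axis=1))]
        overlaps.append(jaccard(map_a == la, map_b == lb))
    return overlaps
```

Identical maps score 1.0 for every domain, while random Voronoi maps provide the null distribution of overlaps.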
<figure><img src="figs/methods-figure7.png" width="400px"><figcaption>
<div class="container"><figure><img src="figs/methods-figure7.png" width="400px"><figcaption>
**Figure 7** Domain maps are created from ICA components and are unique to each recording, but highly similar among individual animals. Data-guided methods are used to assign domains to cortical regions. A) Hierarchical clustering based on Pearson's correlation produces a set of 13 regions across the cortical surface. B) Domains colored by various calculated spatial and temporal metrics to aid region assignment. Region area is calculated as a percent of the total cortical surface. Region extent ranges from 0 to 1 and measures the area of a domain relative to its bounding box. Temporal standard deviation is calculated from the extracted time series, and frequency range size is calculated from wavelet significance. C) The Allen Brain Atlas map³¹ is additionally used for anatomical reference. D) The final manually assigned regions, with associated labels. E) Domain area and eccentricity by region. Population analysis of the distribution of spatial characteristics in individual domains within defined regions across multiple recordings (n=16 videos; from 8 different animals). F) Example overlay of one domain map on another from the same animal. Individual domain or region overlap is calculated using the Jaccard index (intersection/union). Population analysis of the Jaccard index for domain (G) and region (H) overlap comparisons. Maps are generated from a different recording of the same animal, a littermate, a non-littermate, or a randomly generated Voronoi map. Significance is calculated using a two-way ANOVA, followed by post-hoc t-test analysis with Holm-Sidak correction. Retrosplenial: R; Higher-order visual: V+; Auditory: A; Somatosensory secondary: Ss; Somatosensory core: Sc; Somatosensory barrel: Sb; Somatosensory other: S; Motor medial: Mm; Motor lateral: Ml; Olfactory: O.
</figcaption></figure>
</figcaption></figure></div>
Maps generated from different recordings of the same animal were found to be highly overlapping, and hence more similar (Fig. 7G, top; p < 0.001). There was no significant difference between littermate and non-littermate comparisons. Non-littermate map comparisons were significantly more similar to each other than to Voronoi maps (p < 0.001).
@@ -438,10 +445,10 @@ When comparing maps generated from different animals, the optimal alignment was
Compression residuals are calculated while saving the ICA decomposition results. The original movie is rebuilt from the reduced ICA results, and residuals are calculated by taking the absolute value of the difference between the two videos (Fig. S12). The spatial and temporal projections of this absolute difference movie are saved as the spatial and temporal residuals of the decomposition and stored as metadata with each ICA decomposition.
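A sketch of this residual computation, assuming the movie factors into spatial components, time courses, and a re-added global mean (variable names are illustrative):

```python
import numpy as np

def compression_residuals(original, eig_images, time_courses, mean_trace):
    """Rebuild the movie from reduced ICA factors and project the absolute
    residual spatially and temporally.

    original:     (T, H, W) raw movie
    eig_images:   (k, H, W) retained spatial components
    time_courses: (T, k) matching temporal components
    mean_trace:   (T,) global mean re-added during reconstruction
    """
    T, H, W = original.shape
    rebuilt = time_courses @ eig_images.reshape(len(eig_images), -1)
    rebuilt = rebuilt.reshape(T, H, W) + mean_trace[:, None, None]
    diff = np.abs(original - rebuilt)
    spatial_residual = diff.mean(axis=0)         # (H, W) image of lost detail
    temporal_residual = diff.mean(axis=(1, 2))   # (T,) trace of lost detail
    return spatial_residual, temporal_residual
```

Storing both projections as metadata lets later analyses see where, in space and in time, compression discarded information.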
<figure><img src="figs/methods-figureS12.png" width="400px"><figcaption>
<div class="container"><figure><img src="figs/methods-figureS12.png" width="400px"><figcaption>
**Figure S12** Comparison of spatial and temporal information content through compression and filtering. A) The original spatial information captured, quantified by a mean-subtracted absolute value projected spatially (left) or temporally (right). B) The difference in information between the original input data and the rebuilt ICA projection, excluding noise components beyond the 25% saved in the processed file. The difference movie is projected spatially or temporally to visualize where information was lost in compression. C) Information removed by the artifact filter. The artifact movie is rebuilt and projected spatially or temporally to visualize where information was modified by the ICA-based artifact filter.
</figcaption></figure>
</figcaption></figure></div>
### Domain residuals and domain signal analyses