Neuroscience Review: The Auditory and Visual System (Quick Review Notes)




Key points

The effects of intermodal attention were analyzed using the two bimodal conditions BA and BV. Because the stimuli presented in these conditions were identical, every independent factor was included in this analysis: modality of attention; imaging protocol; auditory stimulus intensity, ear of delivery and frequency; and visual stimulus type.

Percent signal change was calculated relative to the overall mean BOLD response for each voxel. Mean BOLD responses associated with each block were calculated by averaging across both functional images from the sparse imaging sessions, and across images 2–8 of each block from the continuous imaging sessions. Statistical contrasts were used to identify stimulus-dependent activations (SDAs; activations related to unattended stimuli; see Figure 3) and attention-related modulations (ARMs; see Figure 4). SDAs were obtained by subtracting activations in unimodal conditions from activations in bimodal conditions that differed from the unimodal conditions only by the addition of task-irrelevant stimulation in the unattended modality.
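As a concrete illustration of this step, here is a minimal Python sketch of the percent-signal-change computation; the array layout, function name and the `keep` window are illustrative assumptions, not the authors' code.

```python
import numpy as np

def block_percent_signal_change(bold, block_onsets, block_len, keep=slice(1, 8)):
    """Percent signal change relative to each voxel's overall mean BOLD level.

    bold         -- (n_images, n_voxels) array of BOLD time series
    block_onsets -- image index at which each stimulus block begins
    block_len    -- number of images per block
    keep         -- images retained within each block (slice(1, 8) keeps
                    images 2-8 of a block, as for the continuous sessions)
    """
    voxel_mean = bold.mean(axis=0)                   # overall mean per voxel
    psc = 100.0 * (bold - voxel_mean) / voxel_mean   # percent change
    # Mean response per block, averaging only the retained images
    return np.stack([psc[on:on + block_len][keep].mean(axis=0)
                     for on in block_onsets])
```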

These contained identical stimuli, and differed only in the modality attended. A circled cross indicates the occipital pole. The calcarine sulcus is indicated by the yellow arrow, which points away from the foveal toward the peripheral visual field regions. Data are from sessions using sparse image acquisition. Auditory ARMs (red) were found in auditory cortex along the superior temporal plane, with additional foci in the lingual gyrus and cuneus (auditory occipital activations: AOAs).

The color scale shows mean percent signal change. Insets (right): mean occipital activations from sparse and continuous image acquisition sessions. Due to the variable relationship between gyral structure and stereotaxic coordinates in individual subjects [67], we projected the Talairach coordinates from Stenbacka and Vanni to the nearest point on the cortical surface for each individual in the control database of 60 whole-brain T1 scans (white and green dots in Figure 5).

We also measured the 3D Talairach coordinates of AOA maxima in the cuneus and lingual gyrus for both hemispheres. (A) Left: average cortical surface anatomy showing occipital regions (box). AOAs in all 9 subjects, depicted on maps of their individual occipital cortex surface curvature. Bottom right: the activation map from one subject who underwent retinotopic mapping of the horizontal and vertical meridians (green lines) and two eccentric annuli (white and yellow lines).

(B) Cortical surface projections of the Talairach coordinates reported by Stenbacka et al. We used a region of interest (ROI) analysis to evaluate the reliability of AOA generation and to test whether the AOAs were implicated in perceptual analysis of sensory information or in attention-related cognitive processes. ROIs were defined using the data acquired during sparse fMRI acquisition, and their responses were analyzed using the independent data set obtained in sessions using continuous imaging.

ROI voxels were required to meet three criteria, including a minimum percent signal change from baseline. Two ROIs in pericalcarine visual cortex were chosen for analysis: (1) an AOA region, including the clusters in the lingual gyrus and cuneus, and (2) a central vision region in the posterior calcarine sulcus, based on the visual ARM cluster in this area. The effects of intermodal attention were then evaluated within these ROIs.


Separate analyses were also performed using data from either the auditory attention conditions (BA and UA) or the visual attention conditions (BV and UV) alone, in order to compare activations in the presence and absence of stimuli in the unattended modality. We evaluated the hypothesis that AOAs might reflect cognitive operations associated with task switching at block boundaries [56] by analyzing event-related time course regressors modeling the beginning and end of bimodal stimulus blocks, where attention switched from the auditory to the visual modality or vice versa.

Event-related time course regressors were created to model the BOLD response produced when subjects switched between performing the auditory and visual tasks. Task-switching events were modeled as square waves beginning at the conclusion of one block and ending 2 seconds later in the following block.

Switching events were included for the transitions between all temporally adjacent bimodal blocks with different task modalities. These boxcar time courses were convolved with a standard bi-gamma (double-gamma) hemodynamic response function [69]. A fixed-effects t-test assessed the fit between the modeled and observed BOLD time courses for each surface voxel.
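A sketch of how such regressors can be built, assuming SPM-style double-gamma parameters and scan-aligned block boundaries (both assumptions; the paper's exact HRF comes from reference [69]):

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr, duration=32.0):
    # Canonical double-gamma HRF sampled at the TR; the shape parameters
    # (6 and 16) and the undershoot ratio (1/6) are the usual SPM defaults.
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def switch_regressor(n_scans, tr, block_end_scans, switch_dur_s=2.0):
    # Boxcar beginning at each block boundary and lasting switch_dur_s
    # seconds, convolved with the HRF and truncated to the scan length.
    box = np.zeros(n_scans)
    width = max(1, int(round(switch_dur_s / tr)))
    for end in block_end_scans:
        box[end:end + width] = 1.0
    return np.convolve(box, double_gamma_hrf(tr))[:n_scans]
```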

Event-related time course regressors were also used to determine whether AOAs primarily reflected detection of the unpredictable auditory targets. The measured time course of subjects' button press responses associated with auditory target hits was convolved with a hemodynamic response function (HRF) for both the sparse and continuous imaging sessions. These target-related regressors were contrasted with regressors representing the periods during which subjects made no responses. Within auditory attention blocks, response events were modeled as positive square waves spanning a fixed interval prior to each recorded response, and non-response epochs of variable length, spanning the intervals between successive response events, were modeled as negative square waves.

The resulting two boxcar time courses were normalized to have equal energy, summed together, and convolved with the standard HRF. A fixed-effects t-test assessed whether the fitted time course for each surface voxel was non-zero. The analyses described below revealed that AOAs were positively correlated with sustained auditory attention and negatively correlated with activations in central visual areas during auditory attention conditions. However, because subjects switched attention between auditory and visual stimulus blocks, there was no truly activation-independent baseline.

Thus, it is possible that AOAs could reflect relative deactivations of peripheral visual regions due to foveal attention during visual attention blocks [70], [71], rather than activations of peripheral visual regions during auditory attention blocks. Alternatively, if AOAs were part of a cortical network activated during auditory attention, AOAs should be unrelated to activity in central visual field regions of visual cortex but correlated with activations in auditory cortex.

We therefore also tested the hypothesis that there was a positive correlation between responses in the AOA ROI and auditory cortex. In order to find correlation values that were consistent across subjects, we first computed the partial correlations of the AOA ROI with every voxel on the cortical surface during unimodal visual blocks, while partialling out the global fMRI signal (the mean of the entire cortical surface) and the three main head motion correction components.

Second, we calculated the partial correlation under all task conditions between the AOA ROI and an auditory cortex ROI in the same hemisphere defined from sparse data (see Supplemental Figure S1), while partialling out (1) the global signal and head motion parameters, (2) an ROI from both hemispheres defined as all visual ARM voxels in the posterior occipital region, and (3) indicator variables for bimodal vs. unimodal conditions.

The first cortical surface partial correlation examined whether there were significant correlations between the AOAs and the posterior occipital region, while the latter ROI-based partial correlation was designed to test the hypothesis that there were correlations between the AOAs and auditory cortex that could not be explained by visual functional activations or by any of the attention block conditions.
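The logic of partialling can be sketched as ordinary regression of both signals on the confounds, then correlating the residuals; this is a generic illustration, not the authors' implementation:

```python
import numpy as np

def partial_corr(x, y, confounds):
    """Correlation between time series x and y after regressing out the
    confound columns (e.g., global signal, motion parameters, indicator
    variables for task conditions)."""
    C = np.column_stack([confounds, np.ones(len(x))])  # include an intercept
    rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]  # residualize x
    ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]  # residualize y
    return np.corrcoef(rx, ry)[0, 1]
```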

Figure 3 shows SDAs on the average inflated cortical surface. Visual SDAs (blue, cyan) were localized to the foveal region of retinotopic cortex and surrounding parafoveal zones, with additional activations seen in higher visual areas in the temporal and occipital lobes and the intraparietal sulcus. Auditory SDAs were restricted to auditory sensory cortex on Heschl's gyrus and in surrounding regions on the superior temporal plane. There was no evidence of auditory SDAs in occipital cortex.

Figure 4 shows attention-related modulations (ARMs), isolated by contrasting activations from bimodal visual attention blocks with activations from bimodal auditory attention blocks. AOAs had similar amplitudes and distributions in fMRI sessions using continuous and sparse image acquisition (Figure 4, inset) and were observed in every subject (Figure 5). The results from one subject's retinotopic mapping are shown in Figure 5. Figure 6A shows the left hemisphere ARM activation map from the sparse imaging data, in which the ROIs are composed of all activated pixels falling within the outlined regions.

The corresponding map from the continuous imaging data used to analyze the ROIs is shown alongside. In these plots, responses were averaged across corresponding but independently defined ROIs from both hemispheres. (A) Left: ARM activation maps from the sparse imaging data, plotted on the mean curvature map of the left hemisphere. The color scale and statistical thresholds are the same as in Figure 3. All significant voxels circumscribed by the yellow and green lines were designated as the AOA and central vision ROIs, respectively.

Right: activation map from the continuous imaging data set used to analyze the ROIs, illustrated using identical thresholds. (B) Mean percent signal change for the four main task conditions in continuous imaging sessions: bimodal auditory (BA), unimodal auditory (UA), bimodal visual (BV) and unimodal visual (UV). Bars show standard errors of the mean. Figure 7A shows the task switching regressor contrast map for the left hemisphere. There was no evidence of AOAs being associated with attentional transitions at the beginning or end of stimulation blocks.

(A) Task-switching. Event-related time course regressors modeled activations associated with block termination and switching between auditory and visual tasks. Shown is the left hemisphere map from the continuous imaging data. Significant AOA regions (white outlines) overlapped very little with regions activated by task switching (red voxels). (B) Auditory target detection.

Left hemisphere map is shown. AOA regions were not activated by target detection. (C) Inhibition by foveal visual cortex. Mixed-effects z-scores for the average correlation coefficient between the time course of each surface voxel and the mean time course of the AOA ROI, during unimodal visual conditions. Note the absence of significant correlations with central visual field voxels (the region surrounding the circled cross). Thus, AOAs appeared to primarily reflect tonic attention-related activity rather than activity specifically related to target detection. AOAs showed no significant correlations with activity in foveal visual cortex.

This supports the hypothesis that AOAs are components of a network of brain regions engaged when subjects actively listen to sounds. In this study, AOAs depended critically on the engagement of auditory attention. AOAs were not generated by unattended sounds during visual attention conditions, regardless of sound intensity, location or frequency. In contrast, reliable AOAs were found in all subjects when they actively discriminated sounds. AOA magnitudes were not influenced by sound frequency or location, suggesting that they did not reflect the analysis of acoustic features.

The only acoustic parameter that modulated AOA magnitudes did so in a manner more consistent with an attentional account of AOA function than with a sensory role. AOAs were larger in blocks with low intensity sounds than in blocks with high intensity sounds. This effect is the opposite of the fMRI sound intensity effects observed in core auditory sensory regions [74]–[79]. Sound intensity was also the only acoustic parameter that affected behavioral performance.


Thus, one explanation of AOA enhancements to low-intensity sounds is that they reflected the increased engagement of sustained auditory attention during the more difficult low-intensity task conditions. AOAs were localized to regions of visual cortex with visual receptive fields sensitive to stimuli in the far periphery [66], [80]–[82]. Lesions of these regions impair sound localization performance [35], and transient disruption of processing in these regions by transcranial magnetic stimulation likewise impairs performance on sound localization tasks [36].

The fact that AOA magnitudes were greater during behaviorally difficult blocks with low sound intensity suggests that AOAs are associated with auditory performance in sighted subjects, as has previously been reported in the blind [4], [6], [8], [19], [20]. The current results show that reliable AOAs can occur during non-spatial auditory discrimination tasks in sighted subjects, consistent with incidental reports of AOAs in previous studies of non-spatial attention tasks [40], [43], [83], [84].

One common feature of experiments in which AOAs are detected in sighted subjects is that sounds were delivered through earphones. In contrast, decreased occipital activations have been reported during auditory attention tasks when sounds were presented through visible loudspeakers located in the frontal spatial plane [6], [8]. These results suggest that peripheral regions of visual cortex are activated when attention is directed to sound sources that are subjectively localized outside the visual field, as when sounds are delivered through headphones.

Thus, AOAs may represent a special case of location-specific activation of visual cortex associated with cross-modal attention to spatial locations outside the visual field [85], [86]. As in previous reports, we found no consistent difference in the distribution of AOAs over the two hemispheres when sounds were delivered to one ear or the other [37].

This lack of spatial specificity suggests that invisible sound sources may prime peripheral visual cortex bilaterally, perhaps because stimuli localized outside the visual field can enter the visual field from unpredictable directions. This study adds to growing evidence that AOAs occur in sighted as well as in blind subjects. It is now well-established that blind individuals, especially the congenitally or early blind, often have superior auditory task performance and larger AOAs than those found in sighted subjects [19].

The enhanced auditory performance of blind individuals is especially pronounced for sounds presented in the peripheral auditory field [24], [25]. Conversely, deaf individuals exhibit enhanced visual target detection, but only in the visual periphery [87], [88]. Enhanced performance in the blind may reflect cortical reorganization consequent to the disruption of normal visual input to the occipital lobe [30].

Recent studies [4] have suggested that AOAs in the blind may be mediated by anatomical projections between auditory association cortex and retinotopic visual cortex [11]. These projections terminate preferentially in peripheral visual cortex [10], [89], [90] and may play a role in the functional coupling of auditory and visual processing [12] seen in the current experiment.

Earlier studies have implicated fronto-parietal systems in such sensory-to-motor mappings of visual and auditory information (Andersen; Kalaska and Crammond; Wise et al.). Thus, at least some of the activation observed in the present study within the fronto-parietal network may be associated with the response selection aspect of our tasks.

The regions of co-activation that we observed in the precentral sulcus and parietal cortex (and possibly anterior midline cortex) also appear to be part of a network that is important for the control of attention (Driver and Spence; Mesulam). Indeed, numerous studies have reported activity in all or portions of this network, to varying degrees and extents, when attention is directed to vision (Posner et al.). Thus, in the present experiments, this network may have acted to direct attention to targets within the same or different sensory modalities as required by each type of motion task.

Moreover, recent theories of visual motion processing have implicated attention directly in the tracking and velocity estimation of moving targets (Blaser et al.). This suggests the possibility that the control of attention and the motion computations themselves may be intimately intertwined and mediated by common or partially overlapping mechanisms within the fronto-parietal system. Midline structures are reported to be involved in high-level processing of complex stimuli (Posner et al.). Enhancement of anterior midline cortex observed during our cross-modal audio-visual comparison may reflect the particularly demanding aspects of the task, such as cross-modal attentional allocation or error detection and compensation (Corbetta et al.).

In our tasks, subjects were required to maintain visual fixation throughout the fMRI scans. Although they could readily comply with this requirement, the motion-discrimination tasks placed additional demands on the systems responsible for suppressing both saccades and overt visual tracking of the moving targets. Earlier studies suggested that specific oculomotor systems, which overlap portions of the co-activated cortex in our experiments, may mediate the inhibition of reflexive eye movements (Sheliga et al.).

However, cortex mediating overt saccadic eye movements may also mediate covert shifts of visual attention (Corbetta et al.). This raises the possibility that auditory spatial attention could be closely associated with oculomotor control systems traditionally thought to be under visual control. An additional consideration regarding lateral frontal and anterior cingulate activation in this study was the involvement of working memory.

In the tone discrimination task and the unimodal, 1-back speed comparison, subjects were required to use working memory to recall the speed of the immediately preceding target and then respond. Concordant with this notion, we observed activation in the lateral precentral and superior frontal sulci overlapping cortex reported to be involved in spatial working memory (Jonides et al.).

Similarly, activity along the medial wall (pre-SMA and anterior cingulate cortex) overlapped cortex reported to be active during working memory delays, especially with regard to maintaining a state of preparedness for selecting a motor response (Petit et al.). Thus, aspects of working memory in our tasks may account for a portion of the activation observed in the anterior midline as well as lateral frontal cortex. Based on previous reports, we had expected to observe polymodal co-activation in the anterior insula and in, or near, the STS.

In an earlier study, Griffiths et al. reported activation of this region during auditory motion processing. We too observed activation of this region bilaterally during our auditory motion task, but also during our pitch and visual motion tasks, thereby suggesting a non-specific functional role for this area. Similarly, we had expected to find polymodal activation in the STS since, in monkeys, this region is known to contain cells responsive to multiple modalities (Bruce et al.). However, the STS responses we observed were typically weak and scattered, and did not approach the robustness of responses observed at other sites.

Thus, polymodal activation of the STS may be dependent on stimulus or task factors not present in our paradigms. Our uncertainty concerning the possible polysensory role of the STS is indicated in the summary figure.


Overall, the results of this study indicate that the integration and comparison of motion information between the visual and auditory modalities involves a specific network of both unimodal and polymodal cortical areas. Parietal cortex, and perhaps lateral frontal cortex, appear to be optimally situated to mediate the integration and attentional selection of motion information across modalities. However, interactions between the two modalities can involve both enhancing and suppressive effects, depending on the nature of the stimuli and the task being performed by the subject.

We thank Koss Inc. (Milwaukee, WI) for the production of custom MRI-compatible electrostatic headphones used in this study; Jon Wieser and Kelly Williams for assistance with data processing and construction of the three-dimensional and flattened Talairach maps; and David Van Essen and Heather Drury for use of the cortical flattening algorithm. This work was supported by grants EY to J.

Address correspondence to James Lewis, Ph.D. Email: james mcw. Center-of-mass coordinates of several regions of cortical activation from the isolated auditory- and visual-motion discrimination tasks are reported in stereotaxic space (Talairach and Tournoux). Schematic illustration of the auditory and visual motion paradigms. (A) Left depicts the time line of the auditory motion paradigm, with a 20 s pre-task baseline period, and 20 s ON (task) and OFF (control) periods. Middle depicts the sound intensity heard in each ear to produce the sensation of sound motion based on interaural intensity differences.

Steeper slopes correspond to faster perceived motion. The right inset shows the visual fixation target viewed throughout the entire scan. (B) Left shows the timeline for the isolated visual motion paradigm. Right illustrates a snapshot of the visual display. Dotted lines indicate the bipartite annulus of coherent motion. Refer to Materials and Methods for details. fMRI responses from (A) the isolated auditory motion paradigm and (B) the isolated visual motion paradigm, averaged across seven subjects.
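For intuition, a toy Python sketch of an interaural-intensity-difference motion stimulus of the kind described above; the panning law and all parameter values are illustrative assumptions, not the stimulus code used in the study:

```python
import numpy as np

def iid_motion(duration_s, sweep_hz, fs=44100):
    # Stereo noise whose left/right intensity balance ramps back and forth,
    # producing apparent horizontal sound motion. Steeper intensity slopes
    # (larger sweep_hz) correspond to faster perceived motion.
    t = np.arange(int(duration_s * fs)) / fs
    pan = 0.5 * (1.0 + np.sin(2.0 * np.pi * sweep_hz * t))  # 0 = left, 1 = right
    noise = np.random.randn(t.size)
    return np.column_stack([noise * (1.0 - pan), noise * pan])
```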

Anatomical underlay from one subject. Axial sections (panels) are labeled by their Z-coordinate (mm) in Talairach coordinate space. Three-dimensional models and flat map representation of the Talairach brain (refer to Materials and Methods) of the right hemisphere, showing the group-averaged fMRI activity data from Figure 2. (A, B) Pattern of activation (red) and suppression (dark green) resulting from the visual motion paradigm. (C, D) Pattern of activation (yellow) and suppression (blue) resulting from the auditory motion paradigm.

(E) Flat map representation; regions of fMRI activity overlap are indicated by intermediate colors (see color inset). Visual-related suppression (dark green) was omitted for clarity. The left hemisphere activity pattern was similar, except for a wider separation of visual and auditory activation foci in the STS, and the presence of visual and auditory suppression near the central sulcus. Other label conventions as in Figure 2. Individual cases illustrating overlap of fMRI changes during auditory and visual motion paradigms. (A) Parietal cortex showing partial overlap (orange) of auditory (yellow) and visual (red) activation.

(B) Dorsal occipital cortex showing suppression during the auditory motion task (blue), activation during the visual motion task (red) and regions of overlap (magenta). Red (visual-related) and blue (auditory-related) averaged waveforms were derived from the magenta voxels only (18–20 voxels in a three-dimensional ROI approximated by white circles). Baseline for each time series (approximated by green lines) was determined from the signal during the pre-stimulus period for the auditory paradigm.

Images were transformed into Talairach space. Enhancement of response in parietal cortex during cross-modal versus unimodal speed comparisons (two subjects, A and B). Top row: pattern of activation near the IPS for the attend-auditory condition (Auditory), cross-modal comparison (X-modal), and attend-visual comparison (Visual). Talairach coordinates of the focus of enhancement were 41, −56, 54 for case A, and −27, −60 for case B. Note that the MR intensity color scale (red to yellow) is different from those in Figures 3 and 4.

Middle row: time series (orange waveforms), averaged across an 18-voxel three-dimensional ROI (approximated by white circles). Activation (white) and suppression (black) produced by the pitch discrimination task, averaged across four subjects who also performed the motion tasks. Other conventions as in Figure 2. Data averaged from four subjects (a subset of those in the preceding figure). Schematic diagram summarizing cortical areas engaged by visual, auditory and cross-modal motion tasks used in this study.

Areas activated by the auditory-only task are shown in white. Light gray indicates areas activated by the visual-only task. Co-activated systems are shown as overlapping ellipses (dark gray), where the relative sizes of the ellipses indicate either comparable or unequal volumes of activation. Dashed outlines indicate areas that were suppressed during the auditory-only task. Black ovals show sites where enhancement was observed during the cross-modal speed comparison. Thin connecting lines reflect known anatomical connections for simian cortex. A brief list of references includes: (1–3) Van Buren and Borke; Morel et al.



Abstract: Visual and auditory motion information can be used together to provide complementary information about the movement of objects. Table 1: cluster sizes (mm³) are directly comparable within tasks, but not between tasks. The anterior midline foci involved portions of both hemispheres.



References:
Andersen RA.
Azuma M, Suzuki H. New York: Plenum.
Chubb C, Sperling G. In: Higher Order Processing in the Visual System (Ciba Foundation Symposium). New York: Wiley.
Corbetta M.
Cox RW.
Desimone R, Gross CG.
Desimone R, Ungerleider LG. Amsterdam: Elsevier.
Driver J, Spence C.
Ettlinger G, Wilson WA.
Geiger B. Three-dimensional modeling of human organs and its application to diagnosis and surgical planning. Technical report, Institut National de Recherche en Informatique et en Automatique.
Griffiths TD, C. B., Frackowiak RSJ.
Hadjikhani N, Roland PE.
Kennedy H, Bullier J.
Knudsen EI, Konishi M.
Lewis JW. The intraparietal sulcus of the macaque and connected cortical regions: anatomical parcellation and connections throughout the hemisphere. Doctoral dissertation, California Institute of Technology.

After a 60 min waiting period in the setup, the animal was imaged again to monitor the effect of the chemogenetic manipulation in the exact same neurons as during the control session. The brain and blood vessels were illuminated through the cranial window by a red (intrinsic signal) or green (blood vessel pattern) light-emitting diode. To localize the visual cortex, the reflected light was collected at 15 Hz by a charge-coupled device (CCD) camera (Foculus, IEEE 1394) coupled to the epifluorescence light path of the Femtonics microscope (no emission or excitation filter).

A slow drifting bar protocol was used: a white vertical bar drifted horizontally over the screen width for 10 cycles. The measured signals were band-passed around the stimulation frequency to extract the phase of the response at each pixel. Similarly, elevation maps were obtained in a subset of the animals using a horizontal bar drifting vertically. These maps coincided with those obtained in previous studies (Marshel et al.).
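The band-pass and phase extraction amount to reading out the Fourier component at the bar-drift frequency, as in this hedged sketch (array shapes and names are assumptions, not the authors' code):

```python
import numpy as np

def retinotopic_phase(frames, stim_freq_hz, frame_rate_hz):
    """Phase of each pixel's response at the bar-drift frequency.

    frames -- (n_frames, ny, nx) reflectance movie
    Returns a (ny, nx) phase map; for a vertical bar drifting horizontally,
    the phase encodes azimuth (an azimuth map).
    """
    n = frames.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / frame_rate_hz)
    k = int(np.argmin(np.abs(freqs - stim_freq_hz)))  # bin nearest drift freq
    spectrum = np.fft.rfft(frames, axis=0)            # per-pixel spectra
    return np.angle(spectrum[k])
```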

A custom-made Matlab program controlled image acquisition and sound delivery. Sounds were trains of 20 white noise bursts or pure tone pips separated by smooth 20 ms gaps (10 ms linear down and up ramps). The functional localization of the auditory cortex in this study corresponded to the response map produced by white noise.
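A minimal Python sketch of such a burst train; the burst duration and sample rate are not stated in the excerpt and are illustrative assumptions:

```python
import numpy as np

def burst_train(n_bursts=20, burst_ms=50.0, gap_ms=20.0, ramp_ms=10.0, fs=192000):
    # Train of white-noise bursts; each burst edge carries a 10 ms linear
    # ramp so the 20 ms gaps are smooth. burst_ms and fs are illustrative.
    burst = np.random.randn(int(fs * burst_ms / 1000.0))
    ramp = np.linspace(0.0, 1.0, int(fs * ramp_ms / 1000.0))
    burst[:ramp.size] *= ramp          # 10 ms up ramp
    burst[-ramp.size:] *= ramp[::-1]   # 10 ms down ramp
    gap = np.zeros(int(fs * gap_ms / 1000.0))
    return np.concatenate([np.concatenate([burst, gap]) for _ in range(n_bursts)])
```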

Custom Python software was used to capture images from the camera at 50 Hz, synchronized with the cortical recordings. These movies were analyzed off-line using custom automatic Matlab programs that traced the contours, first of the eyelid, and second of the pupil. The eyelid shape was approximated by two arcs, requiring the estimation of six parameters (four for the coordinates of the two points where the arcs join, and two for the y-coordinates of the crossings of the arcs with the vertical line halfway between these two points; see Figure 2C; bounds for the parameters were set by hand and appear in yellow).

The pupil shape was approximated by an ellipse, described by four parameters (center x and y, radius and eccentricity). Both estimations were performed by maximizing the difference between the average luminance inside and outside the shape, as well as the luminance gradient normal to the shape boundary; in addition, they were inspected manually and corrected occasionally inside a dedicated graphical user interface. Saccades were identified from these traces (Figure 2D), and we therefore discarded all trials from these contexts displaying saccades. The data obtained from the different imaging experiments consisted of the following. In 18 sessions (seven mice), animals were also stimulated with looming visual stimuli and bimodal stimuli, for which we observed no response or modulation.

Data analysis was performed with custom-made Matlab scripts, available upon request. Every frame recorded was corrected for horizontal motion to a template image using rigid body registration (all sessions with visible z motion were discarded). Regions of interest were then automatically selected, and human-checked, as the cell bodies of neurons with visually identifiable activity (Roland et al.).

This fraction was set to a fixed value. Analyses were performed either on these normalized fluorescence signals, or on estimations of the firing rate obtained by temporal deconvolution (Bathellier et al.). This simple method efficiently corrects the strong discrepancy between fluorescence and firing rate time courses due to the slow decay of spike-triggered calcium rises (Bathellier et al.). However, it does not correct for the relatively slow rise time of GCaMP6s, producing a time delay on the order of 70 ms between peak firing rate and peak deconvolved signal.
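First-order temporal deconvolution of this kind can be sketched as follows; the decay constant is illustrative rather than the value used by the authors:

```python
import numpy as np

def deconvolve_calcium(f, fs_hz, tau_s=1.5):
    """Estimate firing rate from a normalized fluorescence trace f by
    inverting a single-exponential calcium kernel: r(t) ~ tau*df/dt + f.
    This corrects the slow decay but not the slow GCaMP6s rise time.
    Negative estimates are clipped, since rates cannot be negative."""
    dfdt = np.gradient(f) * fs_hz
    return np.clip(tau_s * dfdt + f, 0.0, None)
```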

Data analysis was performed with custom-made Matlab and Python scripts. Clustering of auditory responses in the auditory and visual cortices (Figure 1), and of bimodal conditions in the visual cortex (Figure 6), was performed using the following procedure. Deconvolved calcium responses were averaged across all valid trials (after eye movement filtering). We then subtracted the average baseline activity preceding stimulus onset. Hierarchical clustering was performed using the Euclidean metric and the Ward method for computing distances between clusters.

To determine the number of clusters, we moved the clustering threshold down (e.g., to 25 clusters for the V1 data) until clusters became redundant (overclustering), as assessed visually. This method clusters neurons irrespective of whether they significantly responded to the stimuli. These thresholds were chosen by visual inspection of the obtained clusters. Some clusters which were homogeneous but obviously captured a systematic perturbation of the signal (correlated noise) were also manually put in the non-responsive cluster.
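In scipy terms the procedure looks like this sketch (the cut level of 25 matches the V1 setting mentioned above; everything else is generic):

```python
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_mean_responses(responses, n_clusters=25):
    # Ward hierarchical clustering with a Euclidean metric on
    # baseline-subtracted, trial-averaged responses (one row per neuron).
    Z = linkage(responses, method='ward', metric='euclidean')
    return fcluster(Z, t=n_clusters, criterion='maxclust')
```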

To make sure cell type distributions were not skewed by the fact that clustering outputs the most robust auditory responses (Figure 1), we re-aggregated neurons discarded as non-responsive if the mean correlation of their activity signature with any of the identified clusters exceeded a fixed threshold. Multi-electrode silicon probe recordings were done in mice already implanted with a cranial window above AC at least 2 or 3 weeks before the experiment. The brain was covered with Kwik-Cast silicone (World Precision Instruments) and the animal was removed from the setup to recover from anesthesia for at least 1 hr.

Electrophysiological recordings were done using four-shank Buzsaki32 silicon probes (Neuronexus). The animal was placed back on the setup and the Kwik-Cast was removed. The insertion of the probe was controlled by a micromanipulator (MP series, Sutter Instrument). Spikes were then sorted using the KlustaKwik spike sorting algorithm (Harris et al.).

All subsequent data analyses were performed using custom Python scripts. Firing rates were calculated in 25 ms time bins, averaging over 20 sound repetitions. Data are displayed as mean and s.e.m. The null hypothesis is that both populations have the same distribution of clusters. The pooled data were repeatedly partitioned at random into two groups matching the original sample sizes; for each partition, we computed the difference between the fractions of each cluster across the two partitions. This led to distributions of expected fractional differences for each cluster under the null hypothesis. The p-value was computed from the percentile at which the actual observed fractional difference was located.
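A generic implementation of this permutation scheme (function and variable names are ours, not the authors'):

```python
import numpy as np

def cluster_fraction_pvalues(labels_a, labels_b, n_perm=10000, seed=0):
    """Permutation test of whether two neural populations share the same
    distribution over cluster labels. The pooled labels are repeatedly
    re-partitioned at random; per-cluster fractional differences under
    these partitions form the null distribution."""
    rng = np.random.default_rng(seed)
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    pooled = np.concatenate([labels_a, labels_b])
    na = len(labels_a)
    clusters = np.unique(pooled)

    def frac_diff(a, b):
        return np.array([(a == c).mean() - (b == c).mean() for c in clusters])

    observed = frac_diff(labels_a, labels_b)
    null = np.empty((n_perm, clusters.size))
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = frac_diff(perm[:na], perm[na:])
    # Two-sided p-value per cluster from the percentile of the observed
    # difference within the null distribution
    return {c: 2.0 * min((null[:, j] <= observed[j]).mean(),
                         (null[:, j] >= observed[j]).mean())
            for j, c in enumerate(clusters)}
```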

Significant responses for individual neurons were detected using the non-parametric Wilcoxon rank-sum test. Raw calcium fluorescence traces were baseline-subtracted using the pre-stimulus level, and averaged over a time window near the response peak. The vector of such responses across trials was compared to the same computation performed on blank trials, unless responses to two different conditions were being compared, such as auditory responses in the dark vs. in the light.
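The per-neuron test reduces to a rank-sum comparison between stimulus and blank trials, as in this sketch (the significance level is an assumption):

```python
from scipy.stats import ranksums

def is_responsive(stim_responses, blank_responses, alpha=0.01):
    # stim_responses / blank_responses: per-trial responses for one neuron,
    # baseline-subtracted and averaged over a window near the response peak.
    stat, p = ranksums(stim_responses, blank_responses)
    return p < alpha, p
```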

The same comparison underlies all the histogram displays of the fractions of responding neurons. To assess the significance of supra- or sub-linear responses to audio-visual combinations in individual clusters resulting from the clustering of bimodal responses, we used a bootstrap consisting of shuffling the different trial repetitions. The p-values were computed from the percentile in which the actual nonlinear difference was located. Simulations were performed using a rate model with 2 or 3 populations and no synaptic delay. The spiking activity r_i of population i followed a standard rate equation of the form τ dr_i/dt = −r_i + f(Σ_j w_ij r_j + b_i^context), where f is a two-segment piecewise-linear transfer function. The fact that the first segment is not a constant zero accounts for the fact that the other inputs (summarized in b_i^context) are in fact stochastic and can lead the cell to fire even when its average potential is below threshold.
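A runnable sketch of such a rate model; the transfer-function shape follows the description above (a shallow first segment rather than a hard zero), but the exact equation and all parameter values are our assumptions:

```python
import numpy as np

def transfer(x, theta=1.0, g_low=0.1, g_high=1.0):
    # Two-segment piecewise-linear transfer function. Below threshold the
    # slope is small but non-zero, standing in for stochastic background
    # input that lets a cell fire even when its mean drive is sub-threshold.
    low = g_low * x
    high = g_low * theta + g_high * (x - theta)
    return np.maximum(np.where(x < theta, low, high), 0.0)

def simulate(W, b, T=1.0, dt=1e-3, tau=0.02):
    # Euler integration of tau * dr/dt = -r + f(W r + b) for a small rate
    # network (2 or 3 populations) with no synaptic delay.
    n = W.shape[0]
    r = np.zeros(n)
    out = np.empty((int(T / dt), n))
    for step in range(out.shape[0]):
        r += dt / tau * (-r + transfer(W @ r + b))
        out[step] = r
    return out

# Example: two mutually inhibiting populations with constant context input
# rates = simulate(W=np.array([[0.0, -0.5], [-0.5, 0.0]]),
#                  b=np.array([1.5, 0.8]))
```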

Custom analysis scripts are available in Source Code 1. In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses.

A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included. Thank you for submitting your article "Context-dependent signaling of coincident auditory and visual events in primary visual cortex" for consideration by eLife. Your article has been reviewed by three reviewers and the evaluation has been overseen by a Reviewing Editor and Andrew King as the Senior Editor.

The following individuals involved in the review of your submission have agreed to reveal their identity: Adi Mizrahi (Reviewer 1) and Anthony Holtmaat (Reviewer 3). The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission. Psychophysical data suggest that the auditory and visual systems are intimately connected. This paper is about the circuit mechanisms that support these multimodal interactions. Deneux and colleagues perform an elegant set of experiments characterizing neuronal responses in specific cell types in both auditory and visual cortex to reveal a circuit that could support context-dependent integration of both modalities in the visual cortex.

They find that neurons in auditory cortex that project to visual cortex are more likely to be responsive to loud sounds and to loud sounds getting quieter. They show that these same sounds preferentially modulate the activity of neurons in the visual cortex, but that they do so in a context-dependent and layer-specific manner which depends on the ambient luminance.

Some of the data are of high quality and the authors perform a range of controls that lend rigor and confidence to their study. This will be of great interest to a wide audience interested in sensory integration and cortical circuits. However, the reviewers noted some major concerns that need to be addressed before publication.

The reviewers believe that these issues can be addressed in a couple of months. Specificity for "loud onsets sounds" has not been demonstrated.


Although the effects they describe may be stronger for loud onsets than for quiet onsets, the results seem to reflect a bias within a continuous distribution rather than true specificity. Another major concern relates to the strong conclusion that the gating of context must occur in the visual cortex instead of in the inputs from the auditory cortex to the visual cortex. This does not seem supported by the data. The concerns listed below require some changes in analysis or data presentation, and possibly some experiments that can be done rapidly. What was the rationale behind this comparison?

Wouldn't it have been more interesting to compare with the 'overall' population in L5? It is critical to provide images and fluorescence traces in L5 neurons. Along similar lines, the authors do not describe all differences in the response types between V1-projecting AC neurons and the control population. It seems that cluster 7 (OFF-response type) is not present in V1-projecting neurons. More histological data needs to be provided to show which other targets the AC neurons have. This needs to be cleared up. Also, in the realm of image analysis, the hierarchical clustering should be unpacked in a supplemental figure.

How similar are neurons in each cluster? How much variance do the clusters explain, etc.? It would have been more convincing to show data from mice where both V1-projecting neurons and those that do not project are imaged in the same mice (you already have the tools to do this).

Imaging separate neurons in separate experiments, with and without projection specificity, is less compelling. Therefore, the inhibitory responses need to be understood well, especially given that the 'switch' in the model strongly hinges on this finding. However, the inhibitory effects of sound in darkness are a bit enigmatic and not well supported by the figures. The authors provide mean deconvolved traces, but these are difficult to digest in the context of these types of responses, as this assumes that there is high baseline activity in V1 in darkness. Either there is a general but very consistent small decrease in all neurons, or a decreased response in a few neurons that are highly active under darkness; but then why the low variance?

For negative responses it is important to exclude the possibility that technical issues have seeped in. For example, can the authors rule out changes due to vertical movements, and how much does the neuropil signal subtraction affect these responses? The classification of neutral responses may also depend on these choices. Could the authors 'play' with these parameters to see how they affect the outcome?

Altogether, it is essential to provide example images of groups of neurons (preferably time-lapse images) and traces of individual neurons. They should also compare the baseline neuronal activity of neurons that are inhibited versus those that are not, and possibly perform z-plane movement correction in images in which they have neurons labeled in red. However, it is unclear that closing one eye could simply test the influence of arousal, as this would by itself represent yet a different state of arousal. It is not simply a matter of being in light or darkness, as there are many factors in an experimental setup that determine arousal.

The authors report that this showed a "similar effect, but less robust" compared to the muscimol experiment. Whereas this trend might be true for the experiments under light, it remains inconclusive for the DREADD experiments in darkness, since in one animal the average responses were drastically reduced. The authors should report the statistical comparison for the DREADD experiment and increase the n if they feel that this addition is necessary to support the conclusions.

For unimodal stimuli, was there any background stimulus in either modality? If not, why is the sound-only inhibitory response in V1 seen in Figure 5 not reproduced in Figure 6? The claims of effect specificity for loud onset sounds and looming stimuli are too strong.

After all, they only tested a limited set of stimuli, both auditory and visual. And even within this limited set, the effects are not binary. Reproducing the empirical results with a model is a good starting point, but eventually it has to provide something more (e.g., testable predictions). Do the authors suggest that specific biophysical mechanisms of the neurons are involved? This should be clarified. They then design a model to demonstrate how this might work through the application of distinct gain conditions and a non-linear threshold. However, another possibility, as the authors acknowledge in the Discussion section, is that the gating might occur in the auditory cortex, such that the inputs to the L1 population are only active in the dark.

This is a hypothesis that the authors could test by measuring baseline fluorescence (F) in the absence of auditory stimulation in the light and dark. Evidence that the L1 neurons that are driven only in the dark have higher baseline F in the dark, while other less selective L1 interneurons do not show such strong modulation, would significantly strengthen the authors' argument. In fact, up-ramp sounds are sufficient to drive suppression in the dark, though unlike down-ramp sounds they do not evoke either excitation or inhibition in the light (at least not on average, as shown in Figure 3B).

Thus, the authors should make it clear that there are anatomical specializations that might also support the observed gating. To address these concerns, we have provided three new sets of experiments (imaging in layer 5, electrophysiology in layer 5, and imaging of V1-projecting cells in AC in lit vs. dark conditions). We also performed the requested complementary analysis. We agree that the results describe a bias in the distribution of sound response types, rather than a strict specificity.

We thus carefully checked the text and made several changes in the Abstract and main text to avoid conveying the impression that there is strict specificity. Below are a few examples of the modified text, plus a larger, more detailed paragraph describing the changes in response type distribution for all cell subpopulations. We have done new experiments, recording V1-projecting cells in auditory cortex in light and dark conditions.

These experiments show that the illumination context does not modulate the activity of auditory cortex neurons. Thus, the contextual gating or modulation must happen in V1. The results are shown in Figure 3F. Note that we have used this new dataset for the clustering analysis of V1-projecting AC neurons in Figure 1, which is thus also changed and extended with data from AC layer 5. We have toned down the conclusions related to the specificity of loud onsets (see our response to the first point above). We failed to achieve simultaneous imaging of V1-projecting and non-projecting cells in AC (see below), a separation which is actually hard to guarantee with viral techniques (partial infection).

We have thus now provided in Figure 1 a quantification of the distribution of sound envelope features in upper L5.


We observe differences in the distribution of response types across all three datasets. For example, as the referees point out, quiet OFF neurons (former cluster 7) are generally very rare (see also Deneux et al.). We now provide a small paragraph on these discrepancies in the Discussion. Histological data showing which other targets the AC neurons have are freely available online on the websites of the Allen Brain Institute and of the mouse i-connectome project. We did not feel we could provide data of better quality during the time frame of the review process. We have also used more cluster labels, similar to our previous study on ramping sounds (Deneux et al.).

However, it turned out that GCaMP6s labelling was clearly weaker in the TdT-labelled neurons than in the rest of the layer 5 population.




