Rogier Landman
Cambridge, Massachusetts, United States
332 followers
314 connections
About
Skills
Designing and building ML-based analytical platforms using digital…
Activity
-
Congratulations to my daughter, Megan van Alphen (BSc, UVM '24), on her graduation from the University of Vermont College of Agriculture and Life…
Liked by Rogier Landman
-
Our Case Report titled “Using large language models for safety-related table summarization in clinical study reports” is published in JAMIA…
Shared by Rogier Landman
-
Great to see this important topic and research from DiMe published. This is the first comprehensive quantitative cost/value analysis…
Liked by Rogier Landman
Experience
Education
-
University of Amsterdam
(1) Study the role of primary visual cortex in change blindness. (2) Study high-capacity iconic memory using visual cues with human psychophysics.
-
Research intern at the National Institute for Brain Research in Amsterdam, The Netherlands, in the lab of Dr. Jan de Bruin, with Bob Bermond.
Licenses & Certifications
Volunteer Experience
-
Board Member
De Bonte Leeuw
- Present 1 year 7 months
Education
De Bonte Leeuw is the First Dutch School In Boston. I help with recruitment and organizing cultural events. At this school, our kids learn the Dutch language and enrich themselves culturally and socially. The quality of education at De Bonte Leeuw is outstanding. That is thanks to the teachers, but volunteers play an important role as well. I am happy to contribute to making constant improvements to our school.
Publications
-
Close-range vocal interaction in the common marmoset (Callithrix jacchus).
PLoS ONE
Vocal communication in animals often involves taking turns vocalizing. In humans, turn-taking is a fundamental rule in conversation. Among non-human primates, the common marmoset is known to engage in antiphonal calling using phee calls and trill calls. Calls of the trill type are the most common, yet difficult to study, because they are not very loud and uttered in conditions when animals are in close proximity to one another. Here we recorded trill calls in captive pair-housed marmosets using wearable microphones, while the animals were together with their partner or separated, but within trill call range. Trills were exchanged mainly with the partner and not with other animals in the room. Animals placed outside the home cage increased their trill call rate and uttered more trills in response to their partner compared to strangers. The fundamental frequency, F0, of trills increased when animals were placed outside the cage. Our results indicate that trill calls can be monitored using wearable audio equipment and that minor changes in social context affect trill call interactions and spectral properties of trill calls.
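The abstract above reports a shift in the fundamental frequency (F0) of trills. As an illustration only, and not the authors' analysis code, the F0 of a voiced call segment could be estimated from the autocorrelation of the waveform. The sketch below uses a synthetic tone; the frequency range and signal are hypothetical.

```python
import numpy as np

def estimate_f0(signal, sample_rate, fmin=200.0, fmax=12000.0):
    """Estimate the fundamental frequency of a voiced segment via autocorrelation.

    A minimal sketch: a real pipeline would window the signal, track F0 over
    time, and reject unvoiced frames before reporting a value.
    """
    signal = signal - np.mean(signal)
    # Full autocorrelation; keep only the non-negative lags.
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    # Search lags corresponding to the plausible F0 range.
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max + 1])
    return sample_rate / lag

# Synthetic "trill-like" tone at 7 kHz, sampled at 48 kHz.
sr = 48000
t = np.arange(0, 0.05, 1.0 / sr)
tone = np.sin(2 * np.pi * 7000 * t)
print(round(estimate_f0(tone, sr)))  # close to 7000 (limited by integer lag resolution)
```

The integer-lag quantization limits precision at high F0; parabolic interpolation around the autocorrelation peak is a common refinement.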
Turn-taking is a fundamental feature of human conversation. Vocal exchanges develop in infancy and universally converge towards a general rule of minimizing overlap and minimizing the time between turns. Temporal regulation of vocal interactions of contact calls can be observed in non-human primates as well. These include loud calls exchanged periodically between widely separated individuals, and more quiet, frequently uttered calls while in dense vegetation where there is risk of becoming separated. In the common marmoset (Callithrix jacchus), turn-taking is observed in phee and trill calls. It is not known if marmoset trill calls are primarily exchanged with specific individuals, and whether separation affects trill call rate, exchange and spectral properties.
-
Atypical behaviour and connectivity in SHANK3-mutant macaques
Nature
Mutation or disruption of the SH3 and ankyrin repeat domains 3 (SHANK3) gene represents a highly penetrant, monogenic risk factor for autism spectrum disorder, and is a cause of Phelan–McDermid syndrome. Recent advances in gene editing have enabled the creation of genetically engineered non-human-primate models, which might better approximate the behavioural and neural phenotypes of autism spectrum disorder than do rodent models, and may lead to more effective treatments. Here we report CRISPR–Cas9-mediated generation of germline-transmissible mutations of SHANK3 in cynomolgus macaques (Macaca fascicularis) and their F1 offspring. Genotyping of somatic cells as well as brain biopsies confirmed mutations in the SHANK3 gene and reduced levels of SHANK3 protein in these macaques. Analysis of data from functional magnetic resonance imaging revealed altered local and global connectivity patterns that were indicative of circuit abnormalities. The founder mutants exhibited sleep disturbances, motor deficits and increased repetitive behaviours, as well as social and learning impairments. Together, these results parallel some aspects of the dysfunctions in the SHANK3 gene and circuits, as well as the behavioural phenotypes, that characterize autism spectrum disorder and Phelan–McDermid syndrome. SHANK3 encodes major scaffolding proteins at excitatory synapses, coordinates the recruitment of signalling molecules and creates scaffolds for appropriate alignment of glutamatergic neurotransmitter receptors, which promotes the development and maturation of excitatory synapses. Mutation of SHANK3 accounts for about 1% of idiopathic forms of autism spectrum disorder, and disruption of SHANK3 is a major cause of neurodevelopmental deficits in Phelan–McDermid syndrome. Patients with a SHANK3 gene mutation often exhibit a variety of comorbid traits, which include
global developmental delay, severe sleep disturbances, severe language delay and autism spectrum disorder.
-
Deep convolutional network for animal sound classification and source attribution using dual audio recordings
The Journal of the Acoustical Society of America
This paper introduces an end-to-end feedforward convolutional neural network that is able to reliably classify the source and type of animal calls in a noisy environment using two streams of audio data after being trained on a dataset of modest size and imperfect labels. The data consists of audio recordings from captive marmoset monkeys housed in pairs, with several other cages nearby. The network in this paper can classify both the call type and which animal made it with a single pass through a single network using raw spectrogram images as input. The network vastly increases data analysis capacity for researchers interested in studying marmoset vocalizations, and allows data collection in the home cage, in group housed animals.
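As a toy illustration of the dual-stream, single-pass idea, and emphatically not the paper's actual architecture or trained weights, the sketch below stacks two spectrograms as input channels and produces two outputs, call type and caller identity, from one shared convolutional trunk. All shapes, class counts, and weights here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid-mode 2-D convolution with ReLU: x is (H, W, C_in), kernels (kh, kw, C_in, C_out)."""
    kh, kw, cin, cout = kernels.shape
    H, W, _ = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]
            out[i, j, :] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Two audio streams -> two spectrogram channels (e.g. one microphone per animal).
spec_a = rng.random((32, 32))
spec_b = rng.random((32, 32))
x = np.stack([spec_a, spec_b], axis=-1)          # (32, 32, 2)

feat = conv2d(x, rng.normal(size=(5, 5, 2, 8)))  # shared trunk -> (28, 28, 8)
pooled = feat.mean(axis=(0, 1))                  # global average pool -> (8,)

# Two output heads on the same features: call type and caller identity.
call_type = softmax(pooled @ rng.normal(size=(8, 4)))  # e.g. 4 hypothetical call types
caller    = softmax(pooled @ rng.normal(size=(8, 2)))  # which of the 2 paired animals
print(call_type.shape, caller.shape)
```

The point of the shared trunk with two heads is that one forward pass answers both "what call" and "which animal", which is what lets such a system scale to long home-cage recordings.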
-
Discovering Language in Marmoset Vocalization.
Proc. Interspeech
Various studies suggest that marmosets (Callithrix jacchus) show behavior similar to that of humans in many aspects. Analyzing their calls would not only enable us to better understand this species but would also give insights into the evolution of human language and the vocal tract. This paper describes a technique to discover the patterns in marmoset vocalization in an unsupervised fashion. The proposed unsupervised clustering approach operates in two stages. Initially, voice activity detection (VAD) is applied to remove silences and non-voiced regions from the audio. This is followed by a group-delay-based segmentation of the voiced regions to obtain smaller segments. In the second stage, a two-tier clustering is performed on the segments obtained. Individual hidden Markov models (HMMs) are built for each of the segments using multiple frame sizes and multiple frame rates. The HMMs are then clustered until each cluster is made up of a large number of segments. Once every cluster contains enough segments, one Gaussian mixture model (GMM) is built for each cluster. These clusters are then merged using Kullback-Leibler (KL) divergence. The algorithm converges to the total number of distinct sounds in the audio, as evidenced by listening tests.
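The final merging step can be illustrated with single Gaussians standing in for the per-cluster GMMs (a simplification of the paper's GMM-based clusters). Below is a sketch of symmetric KL divergence between Gaussian clusters and greedy merging under a hypothetical threshold; the cluster parameters and threshold are invented for the example.

```python
import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    """KL divergence KL(N0 || N1) between two multivariate Gaussians."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def symmetric_kl(a, b):
    """KL is asymmetric, so use the symmetrized sum for a merge criterion."""
    return kl_gauss(a[0], a[1], b[0], b[1]) + kl_gauss(b[0], b[1], a[0], a[1])

# Three toy "clusters" (mean, covariance); the first two are nearly identical.
clusters = [
    (np.array([0.0, 0.0]), np.eye(2)),
    (np.array([0.1, 0.0]), np.eye(2)),
    (np.array([5.0, 5.0]), np.eye(2)),
]

# Greedy merge: fuse the closest pair while its divergence stays below threshold.
threshold = 1.0
merged = True
while merged and len(clusters) > 1:
    merged = False
    pairs = [(i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
    i, j = min(pairs, key=lambda p: symmetric_kl(clusters[p[0]], clusters[p[1]]))
    if symmetric_kl(clusters[i], clusters[j]) < threshold:
        mu = (clusters[i][0] + clusters[j][0]) / 2      # crude pooled estimate
        cov = (clusters[i][1] + clusters[j][1]) / 2
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [(mu, cov)]
        merged = True

print(len(clusters))  # the two near-identical clusters merge -> 2 remain
```

Stopping when no pair falls below the threshold is one way such a procedure "converges to the total number of distinct sounds".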
-
Opportunities and challenges in modeling human brain disorders in transgenic primates
Nature Neuroscience
Molecular genetic tools have had a profound impact on neuroscience, but until recently their application has largely been confined to a few model species, most notably mouse, zebrafish, Drosophila melanogaster and Caenorhabditis elegans. With the development of new genome engineering technologies such as CRISPR, it is becoming increasingly feasible to apply these molecular tools in a wider range of species, including nonhuman primates. This will lead to many opportunities for brain research, but it will also pose challenges. Here we identify some of these opportunities and challenges in light of recent and foreseeable technological advances and offer some suggestions. Our main focus is on the creation of new primate disease models for understanding the pathological mechanisms of brain disorders and for developing new approaches to effective treatment. However, we also emphasize that primate genetic models have great potential to address many fundamental questions about brain function, providing an essential foundation for future progress in disease research.
Brain disorders are among the largest causes of disease burden worldwide, affecting millions of people and imposing enormous societal and economic costs. Many of these disorders are chronic and incurable conditions, for which existing treatment options are inadequate and in some cases almost completely ineffective. Yet despite the urgent clinical need, there has been little recent progress in the development of new treatments for most common brain disorders, and many currently prescribed drugs are based on decades-old science. This may seem surprising given the rapid rate of progress in fundamental neuroscience, but it has proven extraordinarily difficult to translate advances in basic science into the development of new and better clinical treatments.
-
A Framework for Automated Marmoset Vocalization Detection And Classification
Interspeech
This paper describes a novel framework for automated marmoset vocalization detection and classification from within long audio streams recorded in a noisy animal room where multiple marmosets are housed. To overcome the challenge of limited manually annotated data, we implemented a data augmentation method using only a small number of labeled vocalizations. The feature sets chosen have the desirable property of capturing characteristics of the signals that are useful in both identifying and distinguishing marmoset vocalizations. Unlike many previous methods, feature extraction, call detection, and call classification in our system are completely automated. The system maintains an 80% detection rate in data with a high number of noise events and achieves a classification error of 15%. Performance can be further improved with additional labeled training data. Because this extensible system is capable of identifying both positive and negative welfare indicators, it provides a powerful framework for non-human primate welfare monitoring as well as behavior assessment. Index Terms: automated detection and classification, marmoset vocalization, primate behavioral analysis, primate clinical health and welfare monitoring, Teager energy operator.
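The index terms mention the Teager energy operator, a common detection feature because it responds strongly to sudden, high-frequency energy such as a call onset. Below is a minimal sketch of TEO-based event detection on synthetic audio; the frame size, threshold, and signal are illustrative choices, not the paper's settings.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# A quiet noise floor with a short, louder "call" in the middle.
rng = np.random.default_rng(1)
sr = 8000
audio = 0.01 * rng.standard_normal(sr)
t = np.arange(2000, 3000)
audio[t] += 0.5 * np.sin(2 * np.pi * 1000 * t / sr)

psi = teager_energy(audio)
# Simple detection: frames whose mean Teager energy clears a threshold
# relative to the median (the median tracks the noise floor).
frame = 256
energies = np.array([psi[i:i + frame].mean() for i in range(0, len(psi) - frame, frame)])
detected = np.where(energies > 10 * np.median(energies))[0]
print(detected)  # frames overlapping samples 2000-3000
```

Unlike plain squared amplitude, the operator weights instantaneous frequency as well as amplitude, which helps separate broadband calls from low-frequency room noise.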
-
A multimodal sensor system for automated marmoset behavioral analysis
IEEE 13th International Conference on Wearable and Implantable Body Sensor Networks (BSN)
The common marmoset is emerging as an important transgenic model for improving the understanding of the neurological basis of many brain disorders. Automated systems for quantitative monitoring of marmoset behaviors in naturalistic settings over long periods of time are needed to facilitate this process. This paper presents preliminary work toward building a novel multimodal acquisition system for automated marmoset behavior analysis in the home cage. In addition to integrating commercially available devices such as Microsoft Kinect sensors and microphones with different characteristics, we also developed a wireless, flexible neck collar with acoustic and non-acoustic sensors onboard for marmoset vocalization recording and caller identification. Our initial effort has focused on the real-time synchronization of multiple sensor outputs, the engineering design of the wireless collar, and algorithms for estimating global 3D position and local head movement from a Microsoft Kinect sensor. With limited preliminary data, we are able to estimate 3D trajectories of two marmosets with an RMSE of ~3.2 mm and track colored ear tufts with an RMSE of ~1.8 mm. A larger dataset is needed for a complete assessment and validation. Our system architecture is modular and flexible, and can be extended to include more sensors and devices if needed.
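The reported tracking accuracy is an RMSE over 3D positions. A small sketch of how such a figure could be computed against ground truth, using synthetic trajectories with an illustrative noise level (not the paper's data):

```python
import numpy as np

def trajectory_rmse(estimated, ground_truth):
    """RMSE between estimated and ground-truth 3-D trajectories, in shared units.

    Both arrays are (n_frames, 3); the per-frame error is the Euclidean distance.
    """
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return np.sqrt(np.mean(err ** 2))

# Hypothetical example: a straight-line path with ~2 mm of tracking noise.
rng = np.random.default_rng(0)
truth = np.linspace([0.0, 0.0, 0.0], [100.0, 50.0, 25.0], 500)  # positions in mm
est = truth + rng.normal(scale=2.0 / np.sqrt(3), size=truth.shape)
print(round(trajectory_rmse(est, truth), 1))  # around 2 mm
```

In practice the ground truth would come from a calibrated reference (e.g. manual annotation), and frames where tracking is lost would be excluded or reported separately.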
-
Effect of distracting faces on visual selective attention in the monkey
Proceedings of the National Academy of Sciences
In primates, visual stimuli with social and emotional content tend to attract attention. Attention might be captured through rapid, automatic, subcortical processing or guided by slower, more voluntary cortical processing. Here we examined whether irrelevant faces with varied emotional expressions interfere with a covert attention task in macaque monkeys. In the task, the monkeys monitored a target grating in the periphery for a subtle color change while ignoring distracters that included faces appearing elsewhere on the screen. The onset time of distracter faces before the target change, as well as their spatial proximity to the target, was varied from trial to trial. The presence of faces, especially faces with emotional expressions, interfered with the task, indicating a competition for attentional resources between the task and the face stimuli. However, this interference was significant only when faces were presented for greater than 200 ms. Emotional faces also affected saccade velocity and reduced pupillary reflex. Our results indicate that the attraction of attention by emotional faces in the monkey takes a considerable amount of processing time, possibly involving cortical-subcortical interactions. Intranasal application of the hormone oxytocin ameliorated the interfering effects of faces. Together these results provide evidence for slow modulation of attention by emotional distracters, which likely involves oxytocinergic brain circuits.
The viewing of faces with an emotional expression affects emotion circuits in the brain, even when they are not directly attended. This has led to a debate about whether faces attract attention automatically. We tested the influence of emotional faces as irrelevant distracters in an attention task in monkeys. We conclude that, in monkeys, emotional distracters do attract attention away from other tasks but not instantly. Administration of the hormone oxytocin reduced the effect.
-
Laminar differences in gamma and alpha coherence in the ventral stream
Proceedings of the National Academy of Sciences
Attention to a stimulus enhances both neuronal responses and gamma frequency synchrony in visual area V4, both of which should increase the impact of attended information on downstream neurons. To determine whether gamma synchrony is common throughout the ventral stream, we recorded from neurons in the superficial and deep layers of V1, V2, and V4 in two rhesus monkeys. We found an unexpected striking difference in gamma synchrony in the superficial vs. deep layers. In all three areas, spike-field coherence in the gamma (40–60 Hz) frequency range was largely confined to the superficial layers, whereas the deep layers showed maximal coherence at low frequencies (6–16 Hz), which included the alpha range. In the superficial layers of V2 and V4, gamma synchrony was enhanced by attention, whereas in the deep layers, alpha synchrony was reduced by attention. Unlike these major differences in synchrony, attentional effects on firing rates and noise correlation did not differ substantially between the superficial and deep layers. The results suggest that synchrony plays very different roles in feedback and feedforward projections.
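Spike-field coherence, the measure behind these laminar results, can be sketched with Welch-style segment averaging (illustrative only; the study's actual estimator and parameters may differ). In the synthetic example below, spikes are locked to a 50 Hz rhythm in the field signal, so the coherence peaks in the gamma range.

```python
import numpy as np

def coherence(x, y, fs, nseg=32):
    """Magnitude-squared coherence via segment-averaged periodograms (Welch-style)."""
    seglen = len(x) // nseg
    X = np.array([np.fft.rfft(x[i * seglen:(i + 1) * seglen]) for i in range(nseg)])
    Y = np.array([np.fft.rfft(y[i * seglen:(i + 1) * seglen]) for i in range(nseg)])
    Sxy = (X * np.conj(Y)).mean(axis=0)          # averaged cross-spectrum
    Sxx = (np.abs(X) ** 2).mean(axis=0)          # averaged auto-spectra
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    freqs = np.fft.rfftfreq(seglen, d=1.0 / fs)
    return freqs, np.abs(Sxy) ** 2 / (Sxx * Syy)

# Synthetic LFP with a 50 Hz (gamma-band) rhythm, and spikes locked to it.
fs = 1000
t = np.arange(0, 32.0, 1.0 / fs)
rng = np.random.default_rng(0)
lfp = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(len(t))
rate = 0.05 * (1 + np.sin(2 * np.pi * 50 * t))   # spike probability per 1 ms bin
spikes = (rng.random(len(t)) < rate).astype(float)

freqs, coh = coherence(spikes - spikes.mean(), lfp, fs)
print(freqs[np.argmax(coh)])  # peak near 50 Hz
```

Averaging over segments is what keeps the estimate meaningful: with a single segment the coherence is identically 1 at every frequency.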
Anatomical and physiological studies have characterized the afferent inputs to and efferent projections from neurons in different layers of visual cortical areas. However, physiological distinctions across layers, such as synchronous interactions, have not been fully identified. We first came across laminar differences in synchrony serendipitously. Gamma-band synchrony, measured either by spike-field or spike-spike interactions across multiple electrodes, is a prominent feature in visual cortex, and several studies have shown that attention enhances gamma-band synchrony in area V4.
-
A backward progression of attentional effects in the ventral stream
Proceedings of the National Academy of Sciences
The visual processing of behaviorally relevant stimuli is enhanced through top-down attentional feedback. One possibility is that feedback targets early visual areas first and the attentional enhancement builds up at progressively later stages of the visual hierarchy. An alternative possibility is that the feedback targets the higher-order areas first and the attentional effects are communicated “backward” to early visual areas. Here, we compared the magnitude and latency of attentional enhancement of firing rates in V1, V2, and V4 in the same animals performing the same task. We found a reverse order of attentional effects, such that attentional enhancement was larger and earlier in V4 and smaller and later in V1, with intermediate results in V2. These results suggest that attentional mechanisms operate via feedback from higher-order areas to lower-order ones.
Neurophysiologic and brain imaging studies in monkeys and humans have shown that attended stimuli evoke larger responses in visual cortex than unattended distracters (1–6), giving attended stimuli a competitive advantage for representation in the cortex (7). These top-down attentional effects are thought to be mediated in part by feedback from prefrontal and posterior parietal cortex (8–12) acting directly or indirectly on all visual areas in the dorsal and ventral stream, including V1. However, the mechanism of this feedback is unclear. In particular, a first-order question is whether the top-down feedback targets V1 [or even the lateral geniculate nucleus (LGN)] first and then is passed on successively to later areas, or whether it targets higher-order areas first and then is fed back to successively lower areas. Without an understanding of the basic functional anatomy of the attentional feedback, it will be difficult to make progress in unraveling the circuitry for attention.
-
Can we equate iconic memory with visual awareness?
Behavioral and Brain Sciences
Every time we look around we can see a rich and detailed world surrounding us. Nevertheless, the majority of visual information seems to slip out of our thoughts instantly. Can we still say that this fleeting percept of the entire world was a conscious percept in the first place, as Block proposes?
-
Relationship between change detection and pre-change [corrected] activity in visual area V1
Neuroreport
Humans are poor at detecting changes to visual scenes occurring during brief disruptions. It is unclear whether this 'change blindness' results from failure to process the relevant item before the change, or failure to compare/recall the item after the change. We recorded pre-change multi-unit activity in area V1 of monkeys performing a change detection task. The animals were rewarded for making a saccade to the changing figure. Figure-ground related activity was observed, even when no correct saccade was made. However, for the changing figure, pre-change activity was stronger in correct trials than in incorrect trials. We conclude that small differences in pre-change figure-ground segregation have predictive value in whether the change will be successfully detected.
-
The role of figure-ground segregation in change blindness
Psychonomic Bulletin and Review
Partial report methods have shown that a large-capacity representation exists for a few hundred milliseconds after a picture has disappeared. However, change blindness studies indicate that very limited information remains available when a changed version of the image is presented subsequently. What happens to the large-capacity representation? New input after the first image may interfere, but this is likely to depend on the characteristics of the new input. In our first experiment, we show that a display containing homogeneous image elements between changing images does not render the large-capacity representation unavailable. Interference occurs when these new elements define objects. On that basis we introduce a new method to produce change blindness: the second experiment shows that change blindness can be induced by redefining figure and background, without an interval between the displays. The local features (line segments) that defined figures and background were swapped, while the contours of the figures remained where they were. Normally, changes are easily detected when there is no interval. However, our paradigm results in massive change blindness. We propose that in a change blindness experiment, there is a large-capacity representation of the original image when it is followed by a homogeneous interval display, but that change blindness occurs whenever the changed image forces resegregation of figures from the background.
-
Large capacity storage of integrated objects before change blindness
Vision Research
Normal people have a strikingly low ability to detect changes in a visual scene. This has been taken as evidence that the brain represents only a few objects at a time, namely those currently in the focus of attention. In the present study, subjects were asked to detect changes in the orientation of rectangular figures in a textured display across a 1600 ms gray interval. In the first experiment, change detection improved when the location of a possible change was cued during the interval. The cue remained effective during the entire interval, but after the interval, it was ineffective, suggesting that an initially large representation was overwritten by the post-change display. To control for an effect of light intensity during the interval on the decay of the representation, we compared performance with a gray or a white interval screen in a second experiment. We found no difference between these conditions. In the third experiment, attention was occasionally misdirected during the interval by first cueing the wrong figure, before cueing the correct figure. This did not compromise performance compared to a single cue, indicating that when an item is attentionally selected, the representation of yet unchosen items remains available. In the fourth experiment, the cue was shown to be effective when changes in figure size and orientation were randomly mixed. At the time the cue appeared, subjects could not know whether size or orientation would change, therefore these results suggest that the representation contains features in their ‘bound’ state. Together, these findings indicate that change blindness involves overwriting of a large capacity representation by the post-change display.
-
Set Size Effects in the Macaque Striate Cortex.
Journal of Cognitive Neuroscience
Attentive processing is often described as a competition for resources among stimuli by mutual suppression. This is supported by findings that activity in extrastriate cortex is suppressed when several stimuli are presented simultaneously, compared to a single stimulus. In this study, we randomly varied the number of simultaneously presented figures (set size) in an attention-demanding change detection task, while we recorded multiunit activity in striate cortex (V1) in monkeys. After figure–background segregation, activity was suppressed as set size increased. This effect was stronger and started earlier among cells stimulated by the background than those stimulated by the figures themselves. As a consequence, contextual modulation, a correlate of figure–background segregation, increased with set size, approximately 100 msec after its initial generation. The results indicate that suppression of responses under increasing attentional demands differentially affects figure and background responses in area V1.
-
A neural correlate of change blindness in V1
Journal of Vision
Sudden changes are often not noticed when coupled with a brief disruption ('change blindness'). Change detection under these circumstances appears to require focal attention. Little is known about what happens in the brain when a change is detected or missed. To explore the relation between brain activity and change detection, we trained monkeys to detect which of a varied number of rectangles changed its orientation across two subsequent presentations. We recorded multi-unit activity in the primary visual cortex while the animals performed the task. The percentage of correct responses indicated that change detection accuracy depended on the number of rectangles. With 4 rectangles on screen, about 50% of the changes were missed. Here we compared neuronal activity before the change occurred, between correct and incorrect trials. In correct trials, cells with their receptive field inside the rectangle that was going to change had a higher firing rate than in incorrect trials, starting 95 ms after stimulus onset. In the background, activity was lower in correct than incorrect trials, starting 235 ms after stimulus onset. At non-changing rectangles there was no difference. Thus, figure/background segregation was greater in correct trials than incorrect trials, particularly at the location of the change. These results show that the magnitude of a V1 response related to pre-attentive figure-ground segregation predicts whether a change is going to be detected or not. This suggests that early, perhaps pre-attentive, processing is very important in change detection.
-
Attention sheds no light on the origin of phenomenal experience
Behavioral and Brain Sciences
In O'Regan & Noë's (O&N's) account for the phenomenal experience of seeing, awareness is equated to what is within the current focus of attention. They find no place for a distinction between phenomenal and access awareness. In doing so, they essentially present a dualistic solution to the mind-brain problem, and ignore that we do have phenomenal experience of what is outside the focus of attention.
-
The role of primary visual cortex (V1) in visual awareness
Vision Research
In the search for the neural correlate of visual awareness, much controversy exists about the role of primary visual cortex. Here, the neurophysiological data from V1 recordings in awake monkeys are examined in light of two general classes of models of visual awareness. In the first model type, visual awareness is seen as being mediated either by a particular set of areas or pathways, or alternatively by a specific set of neurons. In these models, the role of V1 seems rather limited, as the mere activity of V1 cells seems insufficient to mediate awareness. In the second model type, awareness is hypothesized to be mediated by a global mechanism, i.e. a specific kind of activity not linked to a particular area or cell type. Two separate versions of global models are discussed, synchronous oscillations and spike rate modulations. It is shown that V1 synchrony does not reflect perception but rather the horizontal connections between neurons, indicating that V1 synchrony cannot be a direct neural correlate of conscious percepts. However, the rate of spike discharges of V1 neurons is strongly modulated by perceptual context, and these modulations correlate very well with aspects of perceptual organization, visual awareness, and attention. If these modulations serve as a neural correlate of visual awareness, then V1 contributes to that neural correlate. Whether V1 plays a role in the neural correlate of visual awareness thus strongly depends on the way visual awareness is hypothesized to be implemented in the brain.
Courses
-
AWS Certified Cloud Practitioner
-
-
Build Multilayer Perceptrons with Keras
Coursera Project Network
-
Neural Networks and Deep Learning
DeepLearning.AI, Coursera
-
NeuroInformatics
-
-
Scrum Master
-
-
Scrum Product Owner
-
-
Spark Fundamentals I
BD0211EN
-
Structuring Machine Learning Projects
DeepLearning.AI, Coursera
Projects
-
Automated behavior monitoring system
Built automated audio/video behavior monitoring architectures for large-scale quantitative behavior analysis, including hardware and data processing tools, producing an 8-fold increase in data collection. Utilized open source 3D camera software and deep learning tools for computer vision. Leveraged our terabyte dataset to characterize wildtype and mutant animal movement. Formulated techniques for analysis of movement and vocalization tracking using dense sensor data.
-
Laser debriding of dural granulation tissue
A novel method for ablation of granulation tissue from dura mater for monkey electrophysiology. For this, I developed an angled laser tip and air cooling system.
-
Mismatch negativity using ECoG
-
Analyzed wireless ECoG array data over prefrontal, temporal and extrastriate cortex in freely moving primates. Discovered vocalization-selective properties in marmoset auditory cortex using wireless ECoG. Found mismatch negativity using vocalizations as stimuli.
-
Advise neuroscience labs on machine learning, movement and vocalization analysis
-
Provide advisory services on machine learning, movement and vocalization analysis, and problem-solving to other neuroscience labs. Hold frequent presentations to internal and external stakeholders.
-
Longitudinal monitoring of motor and social functioning from birth and in aging non-human primates
-
Longitudinal monitoring of motor and social functioning from birth and in aging non-human primates, relevant to aging and Alzheimer’s model development.
-
Statistical modeling of vocal interactions and movement in animals to predict vocal interactions
-
Develop data analytics on the development of marmoset infants. Manage project schedules. Hands-on contributor in project execution.
-
Create and share a 300 GB database of annotated animal vocalizations for the machine learning community.
-
Data management of terabyte datasets. Ensure data quality as it is collected, together with technicians.
-
Behavior and brain connectivity in SHANK3-mutant macaques
-
Oversee the behavioral characterization of a transgenic monkey autism model with cross-functional teams, published in Nature, a step towards finding biomarkers and novel therapeutic opportunities for Autism Spectrum Disorder.
-
Deep convolutional network for animal sound classification and source attribution using dual audio recordings
-
Designed and shared deep neural network algorithms for automated spectrogram-based animal vocalization detection and classification, published in the Journal of the Acoustical Society of America. This includes segmentation and tokenization. Ownership of technical strategy.
-
Develop test suite and data analysis for social behavior and cognition in SHANK3 mutant monkeys
-
Developed a testing suite and data analysis to evaluate social behavior and cognition in transgenic monkeys, in partnership with industry. Emphasized focus on translatable measurements including EEG, MRI, eye tracking and freely moving social behavior.
-
Protocols and SOPs for marmoset lab
-
Wrote protocols and SOPs for the new marmoset lab. These are necessary for ensuring compliance with federal, local and institute regulations.
-
Research and development team for wearable voice recording product
-
We produced a wearable voice recording product using Bluetooth technology and vibration sensors, in collaboration with Lincoln Laboratory at MIT. I played a guiding role in scoping and problem solving.
-
Voice detection and classification software development using neural networks
-
Coordination of a computer science team for voice detection software development using artificial intelligence, presented at Society for Neuroscience meeting. Curate and aggregate labeled vocalization data from experts for machine learning. Take ownership of technical strategy and objectives.
-
Developing fMRI and structural MRI protocols for ultra high field (9.4 T) imaging in squirrel monkey
-
-
Two-photon imaging using viral vectors in non-human primates
-
Intracortical injection of AAV vectors, implantation of a cranial window, and imaging of GCaMP neuronal activity in visual cortex at single-neuron resolution. Set up the microscope and laser. Monitored vital signs and anesthesia. Analyzed imaging data. Presented at the Society for Neuroscience meeting.
-
Effect of distracting faces on visual selective attention in the monkey
-
Primates express a natural interest in faces. The viewing of faces with an emotional expression affects emotion circuits in the brain, even when they are not directly attended. This has led to a debate about whether faces attract attention automatically. We tested the influence of emotional faces as irrelevant distractors in an attention task in monkeys. Task performance was most affected when facial expression was threatening, especially when presented for durations longer than 200 ms. We conclude that, in monkeys, emotional distractors do attract attention away from other tasks but not instantly. Administration of the hormone oxytocin reduced the effect. Among the brain systems likely involved are areas where oxytocin receptors are abundant.
-
Effect of distracting faces on visual selective attention in the monkey, and the effects of oxytocin and vasopressin
-
I designed and executed the experiments. We discovered that oxytocin reduces attentional capture by unfamiliar faces, relevant for pharmaceutical treatment options for Autism Spectrum Disorder. We were among the first to apply intranasal administration methods in macaque monkeys.
-
Investigate cross-area neuronal synchrony during visual attention in non-human primate
-
-
Study within-area neuronal synchrony in brain areas V1, V2 and V4 during attention in non-human primate
-
-
Trans-Cranial Magnetic Stimulation (TMS) to investigate the role of primary visual cortex in delayed saccade paradigms
-
Honors & Awards
-
R01 grant "Development of an Integrated System for Monitoring Home-Cage Behavior in Non-Human Primates"
NIH
I made a significant contribution to writing the proposal.
PI: Robert Desimone
-
Simons Center Targeted Project Grant
Simons Foundation
Grant shared between 4 labs (Desimone lab, Graybiel lab, Sur lab, and Jasanoff lab). I am proud to have played a major role in crafting the Desimone proposal. Projects: Desimone: "Neural circuits for social attention and social reward"; Graybiel: "Investigation of striatal circuits in marmoset brain underlying repetitive, perseverative behaviors"; Sur: "Mechanisms of switching and prediction in marmoset cortex"; Jasanoff: "Molecular measurement and perturbation of marmoset brain networks"
Languages
-
English
Native or bilingual proficiency
-
German
Limited working proficiency
-
French
Limited working proficiency
-
Dutch
Native or bilingual proficiency
Organizations
-
Toastmasters
Member
- Present
-
Society for Neuroscience
-
- Present
More activity by Rogier
-
🦙🦙🦙 70B running locally on Mac. Thanks to Apple MLX framework and amazing Hugging Face MLX Community. #machinelearning #largelanguagemodels #LLM…
Liked by Rogier Landman
-
I'm excited to announce that I will be moving back to Boston in May! I'm looking forward to reconnecting with friends and colleagues in the area, so…
Liked by Rogier Landman
-
Yesterday, I had the opportunity to present our recent paper at #ADDS2024: "SciKit Digital Health Package for Accelerometry-Measured Physical…
Liked by Rogier Landman
-
I am speaking at DIA Europe about using AI/ML for monitoring data integrity in clinical trials. https://bit.ly/3Lrpdl1
Shared by Rogier Landman
-
Happy to share that I have taken up a Professorship of Neuroscience at Barts and the London School of Medicine/Queen Mary and the University of…
Liked by Rogier Landman
-
Proud to be part of this massive collaboration across several labs and universities leading up to this amazing finding (published in Nat. Neurosci…
Liked by Rogier Landman
-
🚀 We’re excited to announce the launch of our next-gen neuroimaging system, Flow2! ⚡️ Flow2 is our second-generation combined time-domain…
Liked by Rogier Landman
-
Andrew Ng says that if data is carefully prepared, a company may need far less of it than they think. Read more about Data-Centric AI:…
Liked by Rogier Landman
-
Biofourmis Forum 2023. Listening to a great panel on improving efficacy and reducing costs with digital tools and remote care solutions…
Liked by Rogier Landman
-
Looking forward to joining a panel discussion with Michael Elashoff and Jonathan Walsh on Applications of responsible AI/ML in drug development next…
Liked by Rogier Landman