Guest blog: Having a look at operant behavior

Posted by Steffen van Heijningen on Thu 28 Jul. 2022 - 5 minute read

Giorgio Bergamini and Sean Durkin are highly skilled scientists from Idorsia Pharmaceuticals in Switzerland specializing in (pre)clinical behavioral paradigms in rodents. In this blog they provide an expert view on operant behavior, and how we can improve our understanding of this complex behavior with the help of video tracking. 

Having a look at operant behavior: the benefits of video tracking

What is operant behavior? And how do you test it?

Operant testing has been widely used to study emotional and cognitive functions in laboratory animals. It is based on the concept of operant conditioning, in which an animal’s behavior is maintained through the scheduled delivery of reinforcers (e.g., food rewards) according to a specific rule (the reinforcement schedule). A typical example of an operant test is the five-choice serial reaction time task (5CSRTT), which has been used extensively to study attention and impulse control. In the 5CSRTT, animals are presented with 5 apertures; on each trial an operant cue (i.e., a light stimulus) is presented in one of the apertures, and nose-poking into the illuminated aperture leads to delivery of a food reward.
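To make the trial structure concrete, here is a minimal sketch of the 5CSRTT trial logic in Python. The chamber object and its methods (light_on, wait_for_poke, deliver_reward) are hypothetical stand-ins for whatever interface a given operant system exposes, and the timing parameters are illustrative rather than taken from the cited studies.

    import random

    APERTURES = [0, 1, 2, 3, 4]   # the five nose-poke apertures
    LIMITED_HOLD = 5.0            # assumed response window (s) after cue onset

    def run_trial(chamber):
        """One 5CSRTT trial; `chamber` is a hypothetical hardware interface."""
        target = random.choice(APERTURES)
        chamber.light_on(target)                        # present the operant cue
        poke = chamber.wait_for_poke(timeout=LIMITED_HOLD)
        chamber.light_off(target)
        if poke is None:
            return "omission"                           # no response in time
        if poke == target:
            chamber.deliver_reward()                    # food reward for a correct poke
            return "correct"
        return "incorrect"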

What are some advantages of operant testing?

Operant tests offer several advantages over tests that assess spontaneous animal behavior under defined conditions. Specifically, operant techniques provide excellent experimental control over the behavior of trained animals, such that it remains stable over the course of the experiment. As a consequence, the behavioral readouts also become highly reproducible across experiments, a feature that is especially desirable for longitudinal testing and for comparing psychoactive drug candidates.

Disadvantages of standard operant conditioning protocols

Classically, assessment of the animals’ behavior relies on infrared beam-break sensors installed in the operant chamber: for example, sensors may provide time stamps for emitted responses (e.g., nose-pokes into the apertures) and for collection of rewards (e.g., from the food magazine). However, depending on the task, an interpretation of the results based solely on the sensor output may limit our understanding of the animals’ behavior and its significance.
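As an illustration of what a purely sensor-based analysis looks like, the sketch below reduces time-stamped nose-poke events to the standard trial outcomes; the (timestamp, aperture) event format is an assumption for illustration, not a specific vendor’s log format. Note that “omission” here cannot distinguish inattention from the cue simply being outside the animal’s field of view, which is exactly the ambiguity discussed next.

    def classify_trial(pokes, cue_on, target, limited_hold=5.0):
        """Classify one trial from time-stamped nose-poke events.
        `pokes` is an assumed list of (timestamp, aperture) tuples."""
        for t, aperture in sorted(pokes):
            if t < cue_on:
                return "premature"                # poked before cue onset
            if t <= cue_on + limited_hold:
                return "correct" if aperture == target else "incorrect"
        return "omission"                         # no poke within the window

    # example: a poke into aperture 2, 1.3 s after a cue in aperture 2
    print(classify_trial([(11.3, 2)], cue_on=10.0, target=2))  # -> "correct"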


Indeed, animals are normally free to move inside the operant box, and their position relative to the operant cue can heavily influence their performance. This is fundamentally different from neuropsychological testing conditions in humans, where participants are usually instructed to keep looking at the location where stimuli are presented. For example, Turner et al. (2016) assessed the impact of the rats’ position during an attentional task on the interpretation of the results: omitted trials (i.e., trials without an operant response) were frequently associated with the rats being far from, and not facing, the cues (i.e., the illuminated apertures) [1]. An omitted trial might therefore be taken as indicative of reduced vigilance, when in fact the response was omitted because the cue was simply not within the animal’s visual field.

Expanding our understanding of operant behavior with video tracking

One method to improve our understanding and interpretation of operant behavior is to add video tracking to the classical sensor-based readouts. Different experimental approaches are possible:

  • Videos can be recorded in synchrony with the sensor-based outputs, and the animal’s position can then be tracked off-line in order to exclude selected trials from the analysis based on where the animal was: for example, Clemensson et al. (2020) used video recording during the 5CSRTT to differentiate omitted trials in which the rat failed to notice the operant light cue from those in which the rat showed reduced interest or vigilance towards the cue [2]. Alternatively, off-line tracking can be used to better understand the animals’ behavior during the different training and testing phases: for example, Mazziotti et al. (2020) showed that over the course of operant training the distance travelled during a single session decreases while velocity does not, suggesting that the reduction in reaction time depends on optimization of the motor sequence used to perform the operant task [3] (see the first sketch after this list). In addition, by detecting frequently visited areas in the operant arena during the test, they identified possible stereotyped strategies used by the mice to execute the task. Iversen (2002) also used off-line video tracking to assess behavioral strategies during the different phases of operant task execution: during a continuous reinforcement schedule rats showed a stereotyped operant response topography, while during extinction sessions they showed a variable response topography [4].
  • A different approach consists of using closed-loop experimental setups (e.g., Werkhoven et al., 2019), in which the animal’s position is tracked in real time and this information is used to command the operant apparatus (e.g., by triggering a trial only when the animal’s head is at a specific angle relative to the chamber wall where the operant stimuli are positioned) [5]. For example, Schweihoff et al. (2021) developed the DeepLabStream tool, which tracks the position and head direction of the animal in real time, so that the visual cue is presented only to animals that are facing the screen where the stimulus will be displayed [6] (see the second sketch after this list).
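First, a minimal sketch of the off-line approach, assuming tracked (x, y) centre-point coordinates exported per frame (e.g., from EthoVision XT or DeepLabCut). The distance/velocity computation follows the logic of [3], and the position-based trial filter is a simplified stand-in for the criteria used in [1, 2]; the 20 cm threshold is illustrative.

    import numpy as np

    def session_metrics(xy, fps):
        """Total distance and mean velocity from an off-line track.
        `xy` is an (n_frames, 2) array of centre-point coordinates in cm."""
        steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # frame-to-frame displacement
        return steps.sum(), steps.mean() * fps               # cm, cm/s

    def keep_trial(xy_at_cue, cue_xy, max_dist_cm=20.0):
        """Retain a trial only if the animal was near the cue at stimulus onset."""
        return np.linalg.norm(np.asarray(xy_at_cue) - np.asarray(cue_xy)) <= max_dist_cm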
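Second, a sketch of the closed-loop gating idea: using two tracked head points to decide whether the animal is facing the cue before a trial is triggered. The geometry is generic; DeepLabStream’s actual criteria and interface differ, so treat the angle threshold and the polling loop as assumptions.

    import numpy as np

    def facing_cue(nose, head_centre, cue_xy, max_angle_deg=30.0):
        """True if the head direction (head centre -> nose) points at the cue
        within `max_angle_deg` (an illustrative threshold)."""
        heading = np.asarray(nose, float) - np.asarray(head_centre, float)
        to_cue = np.asarray(cue_xy, float) - np.asarray(head_centre, float)
        cos_a = heading @ to_cue / (np.linalg.norm(heading) * np.linalg.norm(to_cue))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) <= max_angle_deg

    # hypothetical closed loop: start a trial only when the animal faces the cue
    # while experiment_running:
    #     nose, head = tracker.latest_pose()   # assumed real-time pose source
    #     if facing_cue(nose, head, CUE_XY):
    #         chamber.start_trial()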

Integrating operant conditioning with video tracking

Considering all of the factors discussed in this blog, integrating classical operant protocols with information derived from video tracking (either off-line or in real time) improves our understanding of the animals’ behavior during execution of operant tasks (Kuroda et al., 2018) and makes the testing conditions for rodents more comparable to those used in humans [7].


Customer quote

“EthoVision is a very versatile tool, which we used for several different experimental set-ups. With its easy-to-use analysis menu with the different tabs, it allowed us to perform our analysis in a reproducible manner.”

Giorgio Bergamini | Idorsia Pharmaceuticals, Switzerland


The power of EthoVision XT

EthoVision XT is a powerful and versatile video tracking software package that can be used to automate operant conditioning tasks. Within EthoVision XT, specific behaviors can be detected and then used to trigger a signal that turns stimuli on or off; for example, an animal entering a specific area of its cage can trigger the delivery of a reward. Everything from detection to controlling the food dispenser is handled by EthoVision XT, and other external equipment can be controlled with the Trial & Hardware Control Module (see the sketch below). This saves precious time, effort, and money, while increasing the reliability and validity of your research.
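In EthoVision XT the Trial & Hardware Control Module is configured graphically rather than programmed, but the closed-loop logic it implements can be sketched in code. Everything below (the zone coordinates and the tracker and dispenser objects) is a hypothetical illustration of a detect-then-trigger loop, not EthoVision’s actual interface.

    REWARD_ZONE = (10.0, 10.0, 25.0, 25.0)    # assumed zone: x1, y1, x2, y2 (cm)

    def in_zone(x, y, zone):
        x1, y1, x2, y2 = zone
        return x1 <= x <= x2 and y1 <= y <= y2

    def control_loop(tracker, dispenser):
        """Poll the tracked position; trigger the feeder once per zone entry."""
        was_inside = False
        while True:
            x, y = tracker.latest_position()  # assumed tracking interface
            inside = in_zone(x, y, REWARD_ZONE)
            if inside and not was_inside:     # rising edge = zone-entry event
                dispenser.trigger()           # e.g., a TTL pulse to the feeder
            was_inside = inside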

FREE TRIAL: Try EthoVision XT yourself!

Request a free trial and find out what EthoVision XT can do for your research!

  • A cost-effective solution
  • Powerful data selection
  • Most cited video tracking system


References

  1. Turner, Karly M.; Peak, James; Burne, Thomas H. J. (2016). Measuring Attention in Rodents: Comparison of a Modified Signal Detection Task and the 5-Choice Serial Reaction Time Task. Front. Behav. Neurosci., 9
  2. Clemensson, Erik K. H.; Abbaszadeh, Morteza; Fanni, Silvia; Espa, Elena; Angela Cenci, M. (2020). Tracking Rats in Operant Conditioning Chambers Using a Versatile Homemade Video Camera and DeepLabCut. J. Vis. Exp., 2020 (160), 1–30
  3. Mazziotti, Raffaele; Sagona, Giulia; Lupori, Leonardo; Martini, Virginia; Pizzorusso, Tommaso (2020). 3D Printable Device for Automated Operant Conditioning in the Mouse. eNeuro, 7 (2)
  4. Iversen, Iver H. (2002). Response-initiated imaging of operant behavior using a digital camera. J. Exp. Anal. Behav., 77 (3), 283
  5. Werkhoven, Zach; Rohrsen, Christian; Qin, Chuan; Brembs, Björn; de Bivort, Benjamin (2019). MARGO (Massively Automated Real-time GUI for Object-tracking), a platform for high-throughput ethology. PLoS One, 14 (11)
  6. Schweihoff, Jens F.; Loshakov, Matvey; Pavlova, Irina; Kück, Laura; Ewell, Laura A.; Schwarz, Martin K. (2021). DeepLabStream enables closed-loop behavioral experiments using deep learning-based markerless, real-time posture detection. Commun. Biol., 4 (1), 1–11
  7. Kuroda, Toshikazu (2018). A system for the real-time tracking of operant behavior as an application of 3D camera. J. Exp. Anal. Behav., 110 (3), 522–544