How does the timing work out for dopamine to affect the development of long-term potentiation?

I just recently learned (indirectly from this source) that long-term potentiation/depression only takes place when you have large/small amounts of dopamine present at the relevant synapse. However, my understanding is that early in the process of learning behavior, dopamine isn't released until after a reward is received. If I'm a monkey who has to look one way or the other to get some juice, by the time I drink the juice and get the resulting dopamine, the neurons I used to move my eye are presumably not firing any more, right? So then how does this work? Is it that:

  1. There's a slow bootstrapping process. Something like: first the monkey has to learn that the taste of juice leads to the reward of sugar. Then the monkey has to learn that the sight of juice correlates to the taste, and extend the chain one step further, etc., etc., such that by the time you actually have the monkey in the lab there's a very small inferential step needed to plug into a larger reward chain.
  2. The neurons don't actually stop firing. There is some sort of system in play that keeps them firing afterwards, waiting for fluctuations in dopamine level to tell it whether to increase or decrease long-term potentiation.
  3. Something I haven't thought of?

The ability of dopamine to work on different time scales is discussed by Schultz. More generally, the basic idea of prediction-error-based reinforcement learning with dopamine (DA) is that the prediction error signal "propagates back in time". There is some doubt about how temporally precise this DA signal actually is, and whether DA is fast enough to support reward-prediction-error (RPE) temporal-difference (TD) learning.
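The "propagates back in time" idea can be made concrete with a small sketch of TD learning. Each state in a chain leading to reward updates its value from the state after it, so the prediction error appears first at the reward and then migrates to earlier and earlier predictors over repeated trials, without any neuron needing to keep firing until the reward arrives. This is an illustrative toy model only, not a claim about actual dopamine dynamics; the chain length, learning rate, and episode count are arbitrary choices.

```python
# A minimal sketch of temporal-difference (TD) learning, illustrating how
# the reward prediction error (RPE) "propagates back in time". States 0..4
# form a chain; reward arrives only after the final state. All parameters
# (alpha, episode count) are illustrative, not values from the literature.
N_STATES = 5
ALPHA = 0.5          # learning rate
REWARD = 1.0         # delivered after the final state

values = [0.0] * N_STATES

for episode in range(50):
    errors = []
    for s in range(N_STATES):
        # Value of the next state (0 beyond the terminal state).
        next_v = values[s + 1] if s + 1 < N_STATES else 0.0
        r = REWARD if s == N_STATES - 1 else 0.0
        delta = r + next_v - values[s]   # the prediction error (RPE)
        errors.append(delta)
        values[s] += ALPHA * delta
    if episode in (0, 4, 49):
        print(f"episode {episode:2d}  errors: "
              + " ".join(f"{e:+.2f}" for e in errors))
```

On the first episode, the only nonzero error is at the rewarded state; over episodes the error shifts to earlier states and shrinks toward zero as every state's value comes to predict the reward.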

Does that answer the question?

Learning is an important way that organisms interact with, are changed by, and change their environment. The basis of learning is that experiences can alter behavior, sometimes permanently. While learning occurs in many organisms, humans demonstrate the broadest range of different types of learning.

Human learning is influenced by many factors, including both innate and environmental variables. Learning and memory are intimately related and both play critical roles in human behavior, personality, attitudes, and development over the entire lifetime of an individual.


We have been learning since the day we were born. At times, it comes more formally, like learning how to add and subtract in school, how to throw a baseball, or studying for the MCAT. Other times, learning occurs more informally, like learning how to walk, how to behave in certain social situations, or how to talk. Any way it happens, learning is an important part of how humans (and other animals) interact with each other and with the world around them.

Nonassociative Learning

Nonassociative learning occurs when an organism is repeatedly exposed to one type of stimulus. Two important types of nonassociative learning are habituation and sensitization. A habit is an action that is performed repeatedly until it becomes automatic, and habituation follows a very similar process. Essentially, a person learns to "tune out" the stimulus. For example, suppose you live near train tracks and trains pass by your house on a regular basis. When you first moved into the house, the sound of the trains passing by was annoying and loud, and it always made you cover your ears. However, after living in the house for a few months, you become used to the sound and stop covering your ears every time the trains pass. You may even become so accustomed to the sound that it becomes background noise and you don't even notice it anymore.

Dishabituation occurs when a previously habituated stimulus is removed. More specifically, after a person has been habituated to a given stimulus and the stimulus is then removed, this leads to dishabituation: the person is no longer accustomed to the stimulus. If the stimulus is then presented again, the person will react to it as if it were a new stimulus, and is likely to respond even more strongly to it than before. In the train example above, dishabituation could occur when you go away on vacation for a few weeks to a quiet beach resort. The train noise is no longer present, so you become dishabituated to that constant noise. Then, when you return to your home and the noisy train tracks, the first time you hear the train after you return, you notice it again. The noise may cause you to cover your ears again or react even more strongly, because you have become dishabituated to the sound of the trains passing.

Sensitization is, in many ways, the opposite of habituation. During sensitization, there is an increase in responsiveness due to either a repeated application of a stimulus or a particularly aversive or noxious stimulus. Instead of being able to "tune out" or ignore the stimulus and avoid reacting at all (as in habituation), the stimulus actually produces a more exaggerated response. For example, suppose that instead of trains passing by outside your house, you attend a rock concert and sit near the stage. The feedback noise from the amplifier may at first be merely irritating, but as the aversive noise continues, instead of getting used to it, it actually becomes much more painful and you have to cover your ears and perhaps even eventually move. Sensitization may also cause you to respond more vigorously to other similar stimuli. For example, suppose that as you leave the rock concert, an ambulance passes. The siren noise, which usually doesn't bother you, seems particularly loud and abrasive after you have been sensitized to the noise of the rock concert. Sensitization is usually temporary and may not result in any long-term behavior change (you may or may not avoid rock concerts in the future, and you are unlikely to respond so strongly to an ambulance siren when you hear one next week).

Associative Learning

Associative learning describes a process of learning in which one event, object, or action is directly connected with another. There are two general categories of associative learning: classical conditioning and operant conditioning.

Classical Conditioning

Classical (or respondent) conditioning is a process in which two stimuli are paired in such a way that the response to one of the stimuli changes. The archetypal example of this is Pavlov's dogs. Ivan Pavlov, who first named and described the process of classical conditioning, did so by training his dogs to salivate at the sound of a ringing bell. Dogs naturally salivate at the sight and smell of food; it is a biological response that prepares the dogs for food consumption. The stimulus (food) naturally produces this response (salivating); however, dogs do not intrinsically react to the sound of a bell in any particular way. Pavlov's famous experiment paired the sound of a bell (an auditory stimulus) with the presentation of food to the dogs, and after a while, the dogs began to salivate to the sound of the bell even in the absence of food. The process of pairing the two initially unrelated stimuli changed the dogs' response to the sound of the bell over time; they became conditioned to salivate when they heard it. The dogs effectively learned that the sound of the bell announced food.

This example demonstrates a few key concepts about classical conditioning. This type of learning relies on specific stimuli and responses.

• A neutral stimulus is a stimulus that initially does not elicit any intrinsic response. For Pavlov's dogs, this was the sound of the bell prior to the experiment.

• An unconditioned stimulus (US) is a stimulus that elicits an unconditioned response (UR). Think of this response like a reflex. It is not a learned reaction, but a biological one: in this case, the presentation of food is the unconditioned stimulus and the salivation is the unconditioned response.

• A conditioned stimulus (CS) is an originally neutral stimulus (bell) that is paired with an unconditioned stimulus (food) until it can produce the conditioned response (salivation) without the unconditioned stimulus (food).

• Finally, the conditioned response (CR) is the learned response to the conditioned stimulus. It is the same as the unconditioned response, but now it occurs without the unconditioned stimulus. For the dogs, salivating at the sound of the bell is the conditioned response.

Figure 1 Conditioning in Pavlov's Dogs

Acquisition, extinction, spontaneous recovery, generalization, and discrimination are the processes by which classically conditioned responses are developed and maintained.

1) Acquisition refers to the process of learning the conditioned response. This is the time during the experiment when the bell and food are always paired.

2) Extinction, in classical conditioning, occurs when the conditioned and unconditioned stimuli are no longer paired, so the conditioned response eventually stops occurring. After the dogs have been conditioned to salivate at the sound of the bell, if the sound is presented to the dogs over and over without being paired with the food, then after some period of time the dogs will eventually stop salivating at the sound of the bell.

3) Spontaneous recovery occurs when an extinguished conditioned response reappears after the conditioned stimulus is presented following some period of time. For example, if a dog's salivation to the sound of the bell has been extinguished, and the bell is then presented to the dog again after some amount of lapsed time and the dog salivates, the conditioned response has spontaneously recovered.

4) Generalization refers to the process by which stimuli other than the original conditioned stimulus elicit the conditioned response. So, if the dogs salivate to the sound of a chime or a doorbell, even though those were not the same sound as the conditioned stimulus, the behavior has been generalized.

5) Discrimination is the opposite of generalization, and occurs when the conditioned stimulus is differentiated from other stimuli; thus, the conditioned response only occurs for conditioned stimuli. If the dogs do not salivate at the sound of a buzzer or a horn, they have differentiated those stimuli from the sound of a bell.

Figure 2 Curve of Acquisition, Extinction, and Spontaneous Recovery in Classical Conditioning
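The acquisition and extinction curves in Figure 2 are often modeled with the classic Rescorla-Wagner learning rule, which this chapter does not mention by name; the sketch below uses it purely as an illustration, with an arbitrary learning rate.

```python
# A sketch of acquisition and extinction using the Rescorla-Wagner update:
# V <- V + alpha * (lambda - V), where lambda is the maximum associative
# strength the US supports (1.0 when the US is present, 0.0 when absent).
# The learning rate is an illustrative choice, not an empirical value.
ALPHA = 0.3

def trial(v, us_present):
    lam = 1.0 if us_present else 0.0
    return v + ALPHA * (lam - v)

v = 0.0
# Acquisition: 20 CS-US pairings; associative strength rises toward 1.
for _ in range(20):
    v = trial(v, us_present=True)
acquired = v

# Extinction: 20 CS-alone trials; strength decays back toward 0.
for _ in range(20):
    v = trial(v, us_present=False)
extinguished = v

print(f"after acquisition: {acquired:.3f}, after extinction: {extinguished:.3f}")
```

Each pairing closes a fixed fraction of the gap between the current associative strength and its ceiling, which is what gives both curves their negatively accelerated shape.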

Organisms seem predisposed to learn associations that are adaptive in nature. One powerful and very long-lasting association in most animals (including humans) is taste aversion caused by nausea and/or vomiting. An organism that eats a specific food and becomes ill a few hours later will generally develop a strong aversion to that food. Most organisms develop the aversion specifically to the smell or taste of the food (this occurs in most mammals), but it is also possible to develop an aversion to the sight of the food (this occurs in birds). The function of this quickly-learned response is to prevent an organism from consuming something that might be toxic or poisonous in the future. This response happens to be one that does not need a long acquisition phase (it is typically acquired after one exposure) and has a very long extinction phase; in fact, for most organisms, it never extinguishes.

Operant Conditioning

The other category of associative learning is operant (or instrumental) conditioning. Whereas classical conditioning connects unconditioned and neutral stimuli to create conditioned responses, operant conditioning uses reinforcement (pleasurable consequences) and punishment (unpleasant consequences) to mold behavior and eventually cause associative learning. However, just as with classical conditioning, timing is everything. In classical conditioning, it was important for the neutral stimulus to be paired with the unconditioned stimulus (that is, for them to occur together or very close together in time), in order for the neutral stimulus to become conditioned. In operant conditioning, it is just as important for the reinforcement or the punishment to occur around the same time as the behavior in order for learning to occur.

One of the most famous people to conduct research in the area of operant conditioning was B.F. Skinner. Skinner worked with animals and designed an operant conditioning chamber (later called a &ldquoSkinner box&rdquo) that he used in a series of experiments to shape animal behavior. For example, in one series of experiments, a hungry rat would be placed inside a Skinner box that contained a lever. If the rat pressed the lever, a food pellet would drop into the box. Often the rat would first touch the lever by mistake, but after discovering that food would appear in response to pushing the lever, the rat would continue to do so until it was sated. In another series of experiments, the Skinner box would be wired to deliver a painful electric shock until a lever was pushed. In this example, the rat would run around trying to avoid the shock at first, until accidentally hitting the lever and causing the shock to stop. On repeated trials, the rat would quickly push the lever to end the painful shock.

Figure 3 Example of a "Skinner Box"

These examples demonstrate a few key concepts about operant conditioning.

1) Reinforcement is anything that will increase the likelihood that a preceding behavior will be repeated; the behavior is supported by the reinforcement. There are two major types of reinforcement: positive and negative.

&bull Positive reinforcement is some sort of positive stimulus that occurs immediately following a behavior. In the above experiments, the food pellet was a positive reinforcer for the hungry rat because it causes the rat to repeat the desired behavior (push the lever).

&bull Negative reinforcement is some sort of negative stimulus that is removed immediately following a behavior. In the above experiments, the electric shock is a negative reinforcer for the rat because it causes the rat to repeat the desired behavior (again, push the lever) to remove the undesirable stimulus (the painful shock).

Anything that increases a desired behavior is a reinforcer; both positive and negative reinforcement increase the desired behavior, but the process by which they do so is different. Positive reinforcement does it by adding a positive stimulus (something desirable), and negative reinforcement does it by removing a negative one (something undesirable). Positive reinforcement adds and negative reinforcement subtracts (this will be important when contrasted with punishment later). While several brain structures are involved in operant conditioning, the amygdala is understood to be particularly important in negative conditioning, while the hippocampus is believed to be particularly important in positive conditioning.

Another key distinction for reinforcement is between primary and secondary (or unconditioned and conditioned) reinforcers.

1) Primary (or unconditioned) reinforcers are innately satisfying or desirable. These are reinforcers that we do not need to learn to see as reinforcers because they are integral to our survival. Food is a primary positive reinforcer for all organisms because it is required for survival. Avoiding pain and danger are primary negative reinforcers for the same reason: avoidance is important for survival.

2) Secondary (or conditioned) reinforcers are those that are learned to be reinforcers. These are neutral stimuli that are paired with primary reinforcers to make them conditioned. Secondary reinforcers can also be paired with other secondary reinforcers. For example, imagine training a dog to sit. Saying the word "sit" to a dog is a neutral stimulus; the dog will not inherently react to that word. However, when you pair the word "sit" with a treat every time the dog demonstrates the desired behavior (sitting), eventually the dog will learn to sit when you say "sit." The word "sit" has become a secondary reinforcer, because it has been paired with a primary reinforcer (the treat). You can then pair the secondary reinforcer (the word "sit") with another secondary reinforcer, such as a hand motion. Eventually the dog will learn to sit when you make the hand motion instead of when you say the word "sit." Almost any stimulus can become a secondary reinforcer, but it must ultimately be paired with a primary reinforcer in order to produce learned behavior.

Operant conditioning relies on a reinforcement schedule. This schedule can be continuous, in which every occurrence of the behavior is reinforced, or it can be intermittent, in which occurrences are sometimes reinforced and sometimes not. Continuous reinforcement will result in rapid behavior acquisition (or rapid learning), but will also result in rapid extinction when the reinforcement ceases. Intermittent reinforcement typically results in slower acquisition of behavior, but greater persistence (or resistance to extinction) of that behavior over time. Therefore, it is possible to initially condition a behavior using a continuous reinforcement schedule, and then maintain that behavior using an intermittent reinforcement schedule. For instance, a dog can be trained to sit in response to a hand motion on a continuous reinforcement schedule, where a treat is given every time the dog sits; once the dog has sufficiently mastered this behavior, you can switch to an intermittent reinforcement schedule, where the dog receives a treat only occasionally when it sits in response to the hand motion.

There are four important intermittent reinforcement schedules: fixed-ratio, variable-ratio, fixed-interval, and variable-interval. Ratio schedules are based on the number of instances of a desired behavior, and interval schedules are based on time.

1) A fixed-ratio schedule provides the reinforcement after a set number of instances of the behavior. Returning to the example of a hungry rat in a Skinner box: if the rat receives a food pellet every 10th time it pushes the lever, then once it has been conditioned, the rat will demonstrate a high rate of response (in other words, it will push the lever rapidly, many times, to get the food).

2) A variable-ratio schedule provides the reinforcement after an unpredictable number of occurrences. A classic example of reinforcement provided on a variable-ratio schedule is gambling: the reinforcement may be unpredictable, but the behavior will be repeated in the hope of a reinforcement. Both fixed-ratio and variable-ratio schedules produce high response rates; the chance that a behavior will produce the desired outcome (a treat or a jackpot or some other reinforcement) increases with the number of responses (times the behavior is performed).

3) A fixed-interval schedule provides the reinforcement after a set, constant period of time. The behavior will increase as the reinforcement interval comes to an end. For example, if an employee is reinforced by attention from the boss, the employee might work hard all the time, thinking the boss will walk by at any second and notice the hard work (and provide the positive reinforcement, attention). Once the employee learns that the boss only walks by at the top of each hour, the employee may become an ineffective worker through most of the day, but more effective as the top of the hour approaches.

4) A variable-interval schedule provides the reinforcement after an inconsistent amount of time. This schedule produces a slow, steady response rate, because the amount of time it will take to get the reinforcement is unknown. In the employee-boss example, if the boss walks by at unpredictable times each day, the employee does not know when they might receive the desired reinforcement (attention). Thus, the employee will work in a steady, efficient manner throughout the day, but not very quickly. The employee knows it doesn't matter how quickly they work at any given time, because the potential reinforcement is tied to an unpredictable time schedule.

Figure 4 Behavior Response Patterns to Each of the Four Reinforcement Schedules.

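As a rough illustration of the ratio/interval distinction, the toy simulation below decides, response by response, when reinforcement is delivered under each of the four schedules. It assumes one response per time step for simplicity, the specific numbers (ratio of 10, interval of 10 steps) are arbitrary, and it does not reproduce the response-rate dynamics shown in Figure 4.

```python
# Toy models of the four intermittent reinforcement schedules: each function
# counts how many reinforcements 100 responses (one per time step) would
# earn. All numbers here are illustrative choices.
import random

random.seed(0)

def fixed_ratio(n_responses, ratio=10):
    # Reinforce every `ratio`-th response.
    return sum(1 for i in range(1, n_responses + 1) if i % ratio == 0)

def variable_ratio(n_responses, mean_ratio=10):
    # Reinforce each response with probability 1/mean_ratio.
    return sum(1 for _ in range(n_responses) if random.random() < 1 / mean_ratio)

def fixed_interval(n_steps, interval=10):
    # Reinforce the first response after each `interval`-step period elapses.
    return n_steps // interval

def variable_interval(n_steps, mean_interval=10):
    # Each step, reinforcement becomes available with probability 1/mean_interval.
    return sum(1 for _ in range(n_steps) if random.random() < 1 / mean_interval)

for name, fn in [("fixed-ratio", fixed_ratio), ("variable-ratio", variable_ratio),
                 ("fixed-interval", fixed_interval), ("variable-interval", variable_interval)]:
    print(f"{name:17s} reinforcements per 100 responses: {fn(100)}")
```

The fixed schedules deliver a predictable count; the variable schedules deliver a similar count on average but at unpredictable moments, which is exactly what makes the resulting behavior resistant to extinction.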

Reinforcement and reinforcement schedules explain how behaviors can be learned, but not every behavior is learned by simply providing a reinforcement. For example, think about how a baby learns to walk. Do babies spontaneously walk one day and then receive some sort of reinforcement from their parents for doing so? Of course not. Instead, parents shape the desired behavior by reinforcing the smaller intermediate behaviors necessary to achieve the final desired behavior, walking. Thus, parents will reinforce their child's attempts to pull herself up, so she will try again. Once she's mastered pulling herself up and standing while holding onto something, they will reinforce the child's attempts to stand while not holding anything. And so on until the child is able to walk on her own. Shaping is a way to learn more complex behaviors by breaking them down and reinforcing the "pieces of the puzzle" until the whole behavior is strung together.

Like reinforcement, punishment is also an important element of operant conditioning, but its effect is the opposite: reinforcement increases behavior while punishment decreases it. Punishment is the process by which a behavior is followed by a consequence that decreases the likelihood that the behavior will be repeated. Like reinforcement, punishment can be both positive AND negative. Positive punishment involves the application, or pairing, of a negative stimulus with the behavior. For example, if cadets speak out of turn in military boot camp, the drill sergeant makes them do twenty pushups. In contrast, negative punishment involves the removal of a reinforcing stimulus after the behavior has occurred. For example, if a child breaks a window while throwing a baseball in the house, they lose TV privileges for a week. Positive punishment adds and negative punishment subtracts. Commonly, reinforcement and punishment are used in conjunction when shaping behaviors; however, it is uncommon for punishment to have as lasting an effect as reinforcement. Once the punishment has been removed, it is no longer effective. Furthermore, punishment only instructs what not to do, whereas reinforcement instructs what to do. Reinforcement is therefore a better alternative for encouraging behavior change and learning. Additionally, the processes described for classical conditioning (acquisition, extinction, spontaneous recovery, generalization, and discrimination) occur in operant conditioning as well.

Note that the term "negative reinforcement" is often used incorrectly; colloquially, people use the term when they mean punishment.

Figure 5 Schematic of Positive and Negative Reinforcements and Punishments
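The schematic in Figure 5 reduces to two yes/no questions: is a stimulus added or removed, and does the target behavior increase or decrease as a result? A minimal sketch (the function name is my own, for illustration):

```python
# Classify an operant-conditioning consequence from the two distinctions in
# Figure 5: "positive" vs "negative" depends on whether a stimulus is added
# or removed; "reinforcement" vs "punishment" depends on whether the
# behavior increases or decreases.
def classify(stimulus_added: bool, behavior_increases: bool) -> str:
    kind = "reinforcement" if behavior_increases else "punishment"
    sign = "positive" if stimulus_added else "negative"
    return f"{sign} {kind}"

# The four textbook cases:
assert classify(True, True) == "positive reinforcement"    # give a treat
assert classify(False, True) == "negative reinforcement"   # stop the shock
assert classify(True, False) == "positive punishment"      # assign pushups
assert classify(False, False) == "negative punishment"     # remove TV privileges
```

Note how this makes the common misuse of "negative reinforcement" obvious: removing a stimulus to increase a behavior is not punishment at all.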

In conclusion, let's examine two specific types of operant learning: escape and avoidance. In escape, an individual learns how to get away from an aversive stimulus by engaging in a particular behavior. This reinforces the behavior, so they will be willing to engage in it again. For example, a child does not want to eat her vegetables (aversive stimulus), so she throws a temper tantrum. If the parents respond by not making the child eat the vegetables, then she will learn that behaving in that specific way will help her escape that particular aversive stimulus. On the other hand, avoidance occurs when a person performs a behavior to ensure an aversive stimulus is not presented in the first place. For example, a child notices Mom cooking vegetables for dinner and fakes an illness so Mom will send him to bed with ginger ale and crackers. The child has effectively avoided confronting the aversive stimulus (the offensive vegetables) altogether. As long as either of these techniques works (meaning the parents do not force the child to eat the vegetables), the child is reinforced to perform the escape and/or avoidance behaviors.

Figure 6 Flowchart of Positive and Negative Reinforcements and Punishments and Examples

Cognitive Processes that Affect Associative Learning

Classical and operant conditioning fall under the behaviorist tradition of psychology, which is most strongly associated with Skinner. In behaviorism, all psychological phenomena are explained by describing the observable antecedents of behaviors and their consequences. Behaviorism is not concerned with the unobservable events occurring within the mind. This perspective views the brain as a "black box" which does not need to be incorporated into the discussion. While Skinner and other behaviorists contributed a great deal to science, this extreme form of behaviorism has lost favor. As a reaction to behaviorism, cognitive psychology emerged. In cognitive psychology, researchers began to focus on the brain, cognitions (thoughts), and their effects on how people navigate the world. Cognitive psychologists do not see learning as simply due to stimulus pairing and reinforcement. Although its importance is acknowledged, cognitive psychologists do not believe that all learning can be explained in this way. For example, say a child learns that he can slide on his belly to reach a toy he wants under the bed. And, he learns that a grabbing tool can be used to pick up his toys from the ground. What will the child do when his toy is under the bed out of reach? He may figure out that he can combine the two behaviors: sliding on his belly and using the grabbing tool to get the toy. Insight learning is the term used to describe when previously learned behaviors are suddenly combined in unique ways. For the child, the two behaviors (sliding on the belly and using the grabbing tool) were previously reinforced because he got the toy he wanted each time. A new situation was presented (the toy is out of reach under the bed), and he was able to combine previously reinforced behaviors in a novel way on his own to attain the desired outcome (retrieval of the toy).

This also works the other way around: previously unseen behavior can manifest quickly when required. The learning that is present here is latent learning. In latent learning, something is learned but not expressed as an observable behavior until it is required. For instance, if a child in middle school always receives a ride to school from his dad, he may latently learn the route to school, even if he never demonstrates that knowledge. One day, when his dad is on a business trip, the child is able to navigate to school along the same route by bike.

Finally, conditioning is not only behavioral learning. For instance, in operant conditioning certain behaviors are reinforced and the likelihood of that behavior being repeated increases as a result. Cognitively, the reinforcement establishes an expectation for a future reinforcer, so the process is not exclusively behavioral. There is thinking involved in this kind of learning. Expectations may also present themselves in stimulus generalization. If you were rewarded in one class for raising your hand before speaking, then you would expect that to be reinforced in another class, as well.

Table 1 Comparison of Classical Conditioning and Operant Conditioning

Biological Factors that Affect Nonassociative and Associative Learning

Learning is a change in behavior as a result of experience. While many extrinsic factors can influence learning, learning is also limited by biological constraints of organisms. For example, chimpanzees can learn to communicate using basic sign language, but they cannot learn to speak, in part because they lack the specialized vocal cords that would enable them to do so. It was long believed that learning could occur using any two stimuli, or any response and any reinforcer. But again, biology serves as an important constraint. Associative learning is most easily achieved using stimuli that are somehow relevant to survival. Furthermore, not all reinforcers are equally effective. As previously discussed, a dramatic example of this is food aversion. If an organism consumes something that tastes strongly of vanilla and becomes nauseous a few hours later (even if the nausea was not caused by the vanilla food), that organism will develop a strong aversion to both the taste and the smell of vanilla. This aversion defies many of the principles of associative learning previously discussed: it occurs after one instance, it can occur despite a significant time delay of hours, and it often lasts a very long time, sometimes indefinitely. In studies, researchers tried to condition organisms to associate the feeling of nausea with other stimuli, such as a sound or a light, but were unable to do so. Therefore, food aversions demonstrate another important facet of learning: learning occurs more quickly if it is biologically relevant.

The process of learning results in physical changes to the central nervous system (see Chapter 3). Different areas of the brain are involved with learning different types of things. For example, the cerebellum is involved with learning how to complete motor tasks and the amygdala is involved with learning fear responses (brain lesion studies have helped scientists determine this).

Learning and memory are two processes that work together in shaping behavior, and it is impossible to discuss how learning is processed in the brain without discussing memory. Certain synaptic connections develop in the brain when a memory is formed. Short-term memory lasts for seconds to hours, and can potentially be converted into long-term memory through a process called consolidation. Newly acquired information (such as the knowledge that a reward follows a certain behavior) is temporarily stored in short-term memory, and can be transferred into long-term memory under the right conditions.

Long-term Potentiation

When something is learned, the synapses between neurons are strengthened and the process of long-term potentiation begins. Long-term potentiation occurs when, following brief periods of stimulation, an increase in the synaptic strength between two neurons leads to stronger electrochemical responses to a given stimulus. When long-term potentiation occurs, the neurons involved in the circuit develop an increased sensitivity (the sending neuron needs less prompting to fire its impulse and release its neurotransmitter, and/or the receiving neuron has more receptors for the neurotransmitter), which results in increased potential for neural firing after a connection has been stimulated. This increased potential can last for hours or even weeks. Synaptic strengthening is thought to be the process by which memories are consolidated into long-term memory (so learning can occur). At a given synapse, long-term potentiation involves both presynaptic and postsynaptic neurons. For example, dopamine is one of the neurotransmitters involved in pleasurable or rewarding actions. In operant conditioning, reinforcement activates the limbic circuits involved in memory, learning, and emotion. Since reinforcement of a good behavior is generally intrinsically pleasurable (like food or praise), these circuits are strengthened as dopamine floods the system, making it more likely that the behavior will be repeated.
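The dopamine-modulated strengthening described above is sometimes summarized as a "three-factor" learning rule: a weight change requires presynaptic activity, postsynaptic activity, and a dopamine signal above or below baseline. The toy update below is a sketch of that idea only, not a biophysical model; the learning rate, baseline, and activity values are arbitrary illustrative numbers.

```python
# A toy three-factor Hebbian rule sketching dopamine-gated plasticity:
# the synaptic weight changes only when presynaptic and postsynaptic
# activity coincide, and the direction of the change is set by whether
# dopamine is above or below its baseline.
ETA = 0.1          # illustrative learning rate
BASELINE_DA = 0.5  # illustrative dopamine baseline

def update_weight(w, pre, post, dopamine):
    coincidence = pre * post                # Hebbian coincidence term
    modulation = dopamine - BASELINE_DA     # >0 -> potentiate, <0 -> depress
    return w + ETA * modulation * coincidence

w = 1.0
w = update_weight(w, pre=1.0, post=1.0, dopamine=0.9)  # dopamine burst: LTP
w = update_weight(w, pre=1.0, post=1.0, dopamine=0.1)  # dopamine dip: LTD
w = update_weight(w, pre=0.0, post=1.0, dopamine=0.9)  # no presynaptic spike: no change
print(f"final weight: {w:.3f}")
```

This also shows the timing puzzle from the question at the top of the page: in this rule, the pre/post coincidence and the dopamine signal must overlap in time for the weight to change at all.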

After long-term potentiation has occurred, passing an electrical current through the brain doesn't disrupt the memory associations between the neurons involved, although memories that have not yet been potentiated will be wiped out. For example, when a person receives a blow to the head resulting in a concussion, he or she loses memory for events shortly preceding the concussion. This is because long-term potentiation has not yet had a chance to occur (and leave traces of memory connections) for those recent events, while old memories, which were already potentiated, remain.

Long-term memory storage involves more permanent changes to the brain, including structural and functional connections between neurons. For example, long-term memory storage includes new synaptic connections between neurons, permanent changes in pre- and postsynaptic membranes, and a permanent increase or decrease in neurotransmitter synthesis. Furthermore, visual imaging studies suggest that there is greater branching of dendrites in regions of the brain thought to be involved with memory storage. Other studies suggest that protein synthesis somehow influences memory formation: drugs that prevent protein synthesis appear to block long-term memory formation.

Not all behaviors are learned, of course. The neural processes described above occur when animals or people learn new behaviors, or change their behaviors based on experience (that is, environmental feedback). As our learned behaviors change, our synapses change, too. On the other hand, some behaviors are innate. These are the things we know how to do instinctively (or our body just does without us consciously thinking about it), not because someone taught us to do them (for example, breathing or pulling away from a hot stove). Further, innate behaviors are the same among members of a species, even for those performing them for the first time.

Observational Learning

More advanced organisms, particularly humans, do not learn only through direct experience. Observational learning, also known as social learning or vicarious learning, is learning through watching and imitating others.

Modeling is one of the most basic mechanisms behind observational learning. In modeling, an observer sees a behavior being performed by another person. Later, with the model in mind, the observer imitates the behavior he observed. You likely participated in this behavior as a child (or even now). Think back to when you were little and you played with your friends; perhaps you played house or pretended to be a superhero. Typically, you would play your role (“mom” or “Superman”) according to the model you had seen: your mother, or Superman on TV. As an adult, your appearance may be based on models in society: you dress, talk, and walk like your friends. Modeling is not limited to humans, either; think about how lions learn to hunt in the wild. A lioness will take her cubs with her to hunt; her cubs watch her during the process and later hunt based on what they observed.

Typically, the likelihood of imitating a modeled behavior is based on how successful someone finds that behavior to be, or the type of reinforcement that the model received for the behavior. However, individuals may choose to imitate behaviors even if they do not observe the consequences of the model’s behavior. For instance, Albert Bandura (considered a pioneer in the field of observational learning) conducted a series of experiments using a Bobo doll (a large inflatable toy with a heavy base that springs back up after being punched). Bandura showed children videos of adults either behaving aggressively toward the Bobo doll (punching, kicking, and shouting at the doll) or ignoring it altogether. Even when children did not see the consequences of the adult’s behavior, they tended to imitate the behavior they saw. Later studies conducted by others support that humans are prone to imitation and modeling, and we are particularly likely to imitate those that we perceive as similar to ourselves, as successful, or as admirable in some way. Therefore, modeling, and social learning in general, is a very powerful influence on individuals’ behaviors.

Biological Processes that Affect Observational Learning

Mirror neurons fire when an animal performs a task, and also when it observes another performing the same task; they were first identified in monkeys, and have since been identified in various parts of the human brain, including the premotor cortex, supplementary motor area, primary somatosensory cortex, and inferior parietal cortex. While there is still some debate about the exact function of these neurons, there are several hypotheses. Some believe that mirror neurons are activated by connecting the sight and action of a movement (that is, they are programmed to mirror). Some postulate that mirror neurons help us understand the actions of others, and help us learn through imitation. It has also been proposed that mirror neurons in humans are responsible for vicarious emotions, such as empathy, and that a problem in the mirror neuron system might underlie disorders such as autism. However, this has yet to be proven, and there is clearly still much research needed to determine the exact function or functions of mirror neurons. Despite that, many believe that they are somehow involved in observational learning in animals, including humans.

Applications of Observational Learning to Explain Individual Behavior

As social organisms, humans are connected by observational learning. We learn from and behave like each other, but this mimicking is not perfect. There are individual differences among people and animals. Personality differences and psychological disorders can affect observational learning. For example, much of the research on observational learning has focused on violence and how observing violence increases violence in society, but not everyone who observes violence becomes violent. Cognition plays a role in how we use what we learn.


Attitudes are an important part of what makes us human and what makes us unique. Our attitudes about people, places, and things are shaped by experience, but can be highly mutable. Attitude and behavior are intimately related, and it is important to understand how both develop and change over time.

Elaboration Likelihood Model

Persuasion is one method of attitude and behavior change. When you change your beliefs about something, there are a few factors that likely come into play. For example, say you are listening to two speeches about the importance of increasing the ban on smoking in public spaces. The first orator is attractive, but his argument is not well formulated. The second orator’s speech has better, more logical arguments, but he is not as attractive. Whose argument will persuade you more? The elaboration likelihood model explains when people will be influenced by the content of the speech (or the logic of the arguments), and when people will be influenced by other, more superficial characteristics like the appearance of the orator or the length of the speech.

Since persuasion can be such a powerful means for influencing what people think and do, much research has gone into studying the various elements of a message that might have an impact on its persuasiveness. The three key elements are message characteristics, source characteristics, and target characteristics.

1) The message characteristics are the features of the message itself, such as the logic and number of key points in the argument. Message characteristics also include more superficial things, such as the length of the speech or article, and its grammatical complexity.

2) The source characteristics of the person or venue delivering the message, such as expertise, knowledge, and trustworthiness, are also of importance. People are much more likely to be persuaded by a major study described in the New England Journal of Medicine than in the pages of the local supermarket tabloid.

3) Finally, the target characteristics of the person receiving the message, such as self-esteem, intelligence, mood, and other such personal characteristics, have an important influence on whether a message will be perceived as persuasive. For instance, some studies have suggested that those with higher intelligence are less easily persuaded by one-sided messages.

The two cognitive routes that persuasion follows under this model are the central route and the peripheral route. Under the central route, people are persuaded by the content of the argument. They ruminate over the key features of the argument and allow those features to influence their decision to change their point of view. The peripheral route functions when people focus on superficial or secondary characteristics of the speech or the orator. Under these circumstances, people are persuaded by the attractiveness of the orator, the length of the speech, whether the orator is considered an expert in his field, and other features. The elaboration likelihood model then argues that people will choose the central route only when they are both motivated to listen to the logic of the argument (they are interested in the topic), and they are not distracted, thus focusing their attention on the argument. If those conditions are not met, individuals will choose the peripheral route, and will be persuaded by more superficial factors. Messages processed via the central route are more likely to have longer-lasting persuasive outcomes than messages processed via the peripheral route.

Figure 7 Elaboration Likelihood Model: Central vs. Peripheral Processing Routes

Social Cognitive Theory

The social cognitive perspective incorporates elements of cognition, learning, and social influence. Social Cognitive Theory is a theory of behavior change that emphasizes the interactions between people and their environment. However, unlike behaviorism (in which the environment controls us), it holds that cognition (how we process our environment) is also important in determining our behavior. Social cognitive theory focuses on how we interpret and respond to external events, and how our past experiences, memories, and expectations influence our behavior. According to social cognitive theory, social factors, observational learning, and environmental factors can also influence a person’s attitude change. The opinions and attitudes of your friends, family members, and other peer groups often have a major influence on your beliefs. Social cognition will be discussed in greater depth in Chapter 7.

Reciprocal determinism is the interaction between a person’s behaviors (conscious actions), personal factors (individual motivational forces or cognitions, the personality differences that drive a person to act), and environment (situational factors). There are three different ways that individuals and environments interact:

1) People choose their environments, which in turn shape them. For example, the college that you chose to attend had some sort of unique impact on you.

2) Personality shapes how people interpret and respond to their environment. For example, people prone to depression are more likely to view their jobs as pointless.

3) A person’s personality influences the situation to which she then reacts. Experiments have demonstrated that how you treat someone else influences how they will treat you. For example, if you call customer service because you are furious about something, you are more likely to receive a defensive or aggressive response on the phone.

In these three ways, people both shape and are shaped by their environments.

Behavioral Genetics

Genetics plays an important role in the behavior of humans and other animals. Behavioral genetics attempts to determine the role of inheritance in behavioral traits; the interaction between heredity and experience determines an individual’s personality and social behavior.

Almost every cell in the body contains DNA [which cells in humans do not contain DNA? 1 ], and this DNA contains genes, some 20,000 or so in humans. Genes encode the information for creating proteins, the building blocks of physical development. Humans share 99.9% of their DNA with each other; therefore, to help determine what makes us different (for example, why one person suffers from schizophrenia while his brother does not), it is vital to understand the variations in both our genes and our environment. The genotype is the genetic makeup of an organism, while the phenotype is the observable characteristics and traits. Behavioral genetics seeks to understand how the genotype and environment affect the phenotype.

Most phenotypes are influenced by several genes and by the environment; for example, tallness in humans is the result of the interaction between several genes, and is also the result of proper nutrition at key developmental stages. If someone is born with genes for tallness, but is malnourished as a child, they will not grow nearly as tall as their genotype might indicate. Therefore, in order to determine the influence of genes vs. the environment (the old “nature versus nurture” argument!), behavioral genetics uses two types of studies in humans: twin studies and adoption studies.

Twin studies compare traits in monozygotic (identical) and dizygotic (fraternal) twins. Monozygotic (MZ) twins have essentially identical genotypes 2 and an almost identical environment, starting from the womb 3 . Dizygotic (DZ) twins share roughly 50% of their DNA (they are genetically no more similar than ordinary siblings), and an arguably similar environment, starting in the womb. The classic twin study attempts to assess the variance of a phenotype (behavior, psychological disorder) in a large group in order to estimate genetic effects (heritability) and environmental effects (both from shared environment or experiences and unshared/unique environment or experiences). If identical twins share the phenotype more than fraternal twins (which is the case for most traits), genes likely play an important role. For example, if one MZ twin develops Alzheimer’s disease, the other MZ twin has a 60% chance of developing it as well. Alternatively, if one DZ twin develops the disease, the other only has a 30% chance of developing it. By comparing hundreds of twin pairs, researchers can then understand more about the roles of genetic effects, shared environment, and unique environment in shaping behavior.

Adoption studies present another unique way to study the effect of genetics and environment on phenotype. Adoption creates two groups: genetic relatives and environmental relatives. Adopted individuals can be compared with both groups to determine if they are more similar to their genetic relatives or their environmental relatives. The advantage of adoption studies over twin studies is that they can help elucidate the impact of both heredity and environment on phenotype; in twin studies, the rearing environment is essentially the same for both twins, which makes environmental effects harder to separate. Interestingly, hundreds of studies have shown that people who grow up together do not much resemble each other in personality. Adopted children have personalities more similar to their biological parents than to their adoptive parents; traits such as agreeableness, extraversion, introversion, etc. tend to pass from parents to offspring. However, adopted children are more similar to their adoptive families in terms of attitudes, values, manners, faith, and politics.

Interestingly, there have been a few examples of identical twins separated at birth and raised independently by different adoptive families. Psychologists have noted that these individuals, despite being raised in completely different environments with no contact with each other while growing up, are remarkably similar in terms of tastes, physical abilities, personality, interests, attitudes, and fears.

Using twin and adoption studies, behavioral geneticists can mathematically estimate heritability for many phenotypes. Heritability does not pertain to an individual, but rather to variation within a population. For example, the estimated heritability of intelligence (the proportion of the variation in intelligence scores attributable to genetic factors) is roughly 50%. This does not mean that your genes are responsible for 50% of your intelligence; rather, it means that genetic differences account for 50% of the variation in intelligence among people.
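As a rough sketch of how such estimates are made, Falconer's formula uses the gap between identical-twin and fraternal-twin correlations to partition trait variance. The correlations below are illustrative placeholders, not data from any particular study:

```python
# Falconer's formula: partition trait variance from twin correlations.

def falconer(r_mz, r_dz):
    """r_mz, r_dz: trait correlations for identical and fraternal twin pairs."""
    h2 = 2 * (r_mz - r_dz)   # heritability: genetic share of variance
    c2 = 2 * r_dz - r_mz     # shared-environment share
    e2 = 1 - r_mz            # unique-environment share (plus measurement error)
    return h2, c2, e2

# Illustrative numbers: identical twins correlate 0.75, fraternal 0.50.
h2, c2, e2 = falconer(0.75, 0.50)
print(h2, c2, e2)  # 0.5 0.25 0.25
```

The logic is that MZ twins share all their genes while DZ twins share about half, so doubling the correlation gap isolates the genetic contribution; real behavioral-genetic studies use more sophisticated variance-component models built on the same idea.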

In animals, the interaction between genotype and phenotype is easier to study because genes and environment can be more tightly controlled. Researchers can use transgenesis (the introduction of an exogenous 4 or outside gene) or gene knockouts to alter genotype while controlling for environment. Transgenic animal models are useful for helping researchers understand what happens when a certain gene is present. For example, transgenic mice that have had human cancer genes introduced can help researchers study how and when cancer develops, and how cancer responds to various treatments in the mouse model (before trying the treatment on humans). Knockout animal models are useful for helping researchers understand what happens when a gene is absent. For example, knockout mice that are missing a specific gene known to protect against cancer can also help researchers understand how and why cancer develops, and how it responds to treatment.

One of the most important properties of all life, from single-celled organisms to human beings, is the capacity for adaptation. Genes and environment work together: not only do genes code for proteins, but they also respond to the environment. Genes might be turned on in one environment and turned off in another. For example, in response to an ongoing stressor, one gene might begin producing more of a neurotransmitter involved in over-eating, which then leads to obesity. The gene itself was not hard-wired to produce obesity, but an interaction between the gene and the environment resulted in obesity.

Genes and environment interact. Consider the example of temperament (emotional excitability): infants who are considered “difficult” have a temperament that is more irritable and unpredictable, while infants who are considered “easy” have a more placid, quiet, and easygoing temperament. While heredity might predispose infants towards these temperament differences, an easy baby will be treated differently than a difficult baby, and studies have shown that temperament persists through childhood and beyond. Do difficult babies grow up to be aggressive, pugnacious teenagers because their temperament is genetically wired, or because their parents reacted to their irritability and unpredictability in infancy with frustration and unsupportive caregiving? It is difficult to say, but it is important to understand that both heredity and environment play an important role in many complex human traits, such as personality (of which temperament is one aspect), intelligence, motivation, etc.

Intellectual Functioning

Multiple Definitions of Intelligence

What is intelligence? We often think of it as something objective that can be measured like height and weight, but the concept of intelligence is a human creation. A common definition is the ability to learn from experience, problem-solve, and use knowledge to adapt to new situations. But there is no single neurological trait that defines intelligence. Consider the concept of athleticism. Athleticism can be broadly defined as physical prowess. However, it becomes difficult to define the details, since athleticism could be defined by one’s speed, one’s agility, one’s ability to lift weights, or one’s visual-motor skills. Depending on the criteria used, a golfer or a football player may or may not be considered an athlete.

There is some debate as to whether intelligence is even a unitary construct, or really a broad concept that can be further separated into other abilities. It has been proposed that general intelligence exists as a foundational ability that underlies more specialized abilities. In addition to academic problem-solving abilities, these specialized abilities may include creative intelligence (generating novel ideas), practical intelligence (functional ability for everyday tasks), and emotional intelligence (the ability to perceive, express, understand, and regulate emotions). Another classification system distinguishes between crystallized intelligence, or accumulated knowledge of facts, and fluid intelligence, the ability to reason abstractly and quickly in novel situations.

Influence of Heredity and Environment on Intelligence

Is it natural ability, or is it the environment and experiences that lead to one’s intellectual abilities? As you might expect, it’s a little bit of both. Studies of twins, family members, and adopted children indicate that there is significant heritability of intelligence. Scores on intelligence tests taken by identical twins correlate highly, while those of adopted children more closely resemble scores of their birth parents than of their adoptive parents. However, although genes are a predisposing factor, life experiences affect one’s performance on intelligence tests. Malnutrition, sensory deprivation, social isolation, and trauma can affect normal brain development in childhood. On the other hand, early intervention and schooling can increase intelligence scores.

Also remember that intelligence is a social construct. The way it is measured is determined by cultural context. In many Western cultures, intelligence is often thought of as superior performance on academic and cognitive tasks. Some of these tasks emphasize speed, because it is valued in those societies. However, other cultures may emphasize emotional and spiritual knowledge, or social skills.

Variations in Intellectual Ability

Some differences have been found in how various groups perform on intelligence test scores. These differences have often been attributed to biases within the tests themselves or related to outside confounding factors. For example, controversial but well-established findings are that racial groups differ in their average scores on intelligence tests, and that high-scoring people are more likely to attain high levels of education and income. However, these differences are potentially due to environmental factors, such as the availability of quality schooling.

Intellectual abilities at the upper and lower extremes have profound social and functional implications. At the lower extreme are individuals whose intelligence scores fall below 70. On intelligence tests, a score of 70 is two standard deviations below the average score of 100. Individuals who not only have a score below 70, but also have difficulty adapting to everyday demands of life are classified as having an intellectual disability, also known as mental retardation. Sometimes, intellectual disability is the product of a physical cause, such as Down’s syndrome or an acquired brain injury. Students with mild intellectual disabilities are educated in the least restrictive environments in which they can learn, and are integrated into regular classrooms with accommodations if possible. At the upper extreme, high intelligence scores (130+) often serve as criteria in selection for gifted education.
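Since intelligence scores are conventionally scaled to a mean of 100 with a standard deviation of 15, the cutoffs mentioned above can be converted to normal-curve percentiles with a few lines of standard-library code (a sketch; the helper name is ours):

```python
import math

# Convert an intelligence test score to a z-score, then to a percentile
# under the normal curve, using the error function from the stdlib.

def iq_percentile(score, mean=100.0, sd=15.0):
    z = (score - mean) / sd
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(iq_percentile(70), 1))   # 2.3  -> two SDs below the mean
print(round(iq_percentile(130), 1))  # 97.7 -> two SDs above (gifted cutoff)
```

This is why roughly 2% of test-takers fall below 70 and roughly 2% score above 130 on a test normed this way.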

Experience and Behavior

While it is true that our genes play an important role in our behavior, our individual experiences and our social experiences also shape our behavior in important ways. As social animals, we learn ways of thinking and behavior from our families and peer groups. An individual&rsquos development, then, is determined by a complex interplay of biology, psychology, society, and culture. Biological influences include the inherited genome, prenatal development, sex-related genes, hormones, and physiology. Psychological influences include gene-environment interactions, prior experiences, responses evoked in others by our own traits (such as our temperament or gender), and beliefs, feelings, and expectations. Social and cultural influences include families, peers, friends, cultural ideals, cultural mores, and cultural norms.


Developmental psychology is the study of how humans develop physically, cognitively, and socially, throughout their lifetime. As previously discussed, genetics and environment play an important role in human development.

Prenatal Development

At conception, the female and male gametes (ovum and sperm, respectively) fuse to form a zygote: a single cell with the entire genetic complement necessary 5 for developing into a human being (Note: see Princeton Review’s MCAT Biology Review for a more detailed description of embryological development). During the prenatal stage (from conception to birth), genetic and environmental factors have an impact on development. The placenta transfers nutrients and oxygen to the developing fetus, and transports waste and carbon dioxide away from the fetus. The placenta acts as a barrier, protecting the fetus from most harmful substances, but some substances can still cross this barrier. Alcohol, for example, easily crosses the placental barrier, and has been shown to have a negative impact on neurological development.

Newborns have some automatic behaviors, called reflexes, which are useful for survival. These reflexes are considered primitive because they originate in the central nervous system and are exhibited by all normal infants. Some of these reflexes are as follows:

1) Moro (startle) reflex – in response to a loud sound or sudden movement, an infant will startle: the baby throws back its head and extends its arms and legs, cries, then pulls the arms and legs back in. This reflex is present at birth, and lasts until about six months.

2) Rooting reflex – in response to touching or stroking one of the baby’s cheeks, the baby will turn its head in the direction of the stroke and open its mouth to “root” for a nipple.

3) Sucking reflex – linked with the rooting reflex; in response to anything touching the roof of the baby’s mouth, the baby will begin to suck.

4) Babinski reflex – in response to the sole of the foot being stroked, the baby’s big toe moves upward or toward the top surface of the foot and the other toes fan out.

5) Tonic neck reflex – in response to its head being turned to one side, the baby will stretch out the arm on the same side while the opposite arm bends up at the elbow. This reflex lasts about six to seven months.

6) Palmar grasp reflex – in response to stroking the baby’s palm, the baby’s hand will grasp. This reflex lasts a few months.

7) Walking/stepping reflex – in response to the soles of the feet touching a flat surface, the baby will attempt to “walk” by placing one foot in front of the other. This reflex disappears around six weeks and reappears around 8–12 months, when a baby learns to walk.

It is difficult to determine what babies think, but some research indicates that infants do have certain preferences. For example, humans are born with a preference for sights and sounds that facilitate social responsiveness: newborns turn their heads toward human voices. When shown the two images in Figure 8, newborns prefer (gaze longer at) the first, because it is more similar to a human face. Other experiments demonstrate that babies can distinguish their mother’s voice and smell. It appears that from the very beginning of life, humans use their senses to learn about the world around them.

Figure 8 Two Images Shown to Newborns to Test Human Preference for Faces

Motor Development

Humans undergo a fairly predictable course of motor development, beginning with these rudimentary reflexes and progressing through the learning of specialized movements to assist with daily living and recreational activities (see Figure 9).

Figure 9 Motor Development Throughout a Lifetime

Reflexive movements are primitive, involuntary movements that serve to “prime” the neuromuscular system and form the basis for the more sophisticated movements to come. For example, the palmar grasp reflex primes the nervous system for the more controlled grasping learned at later stages. Reflexes, and learning to inhibit reflexes, develop during the first year of a child’s life, overlapping with the stage in which rudimentary movements are learned.

Rudimentary movements are the first voluntary movements performed by a child. They occur in very predictable stages from birth to age 2, and include rolling, sitting, crawling, standing, and walking. These form the foundation on which the fundamental movements are built, and are primarily dictated by genetics (that is, these movements are more or less “pre-programmed”).

The fundamental movement stage occurs from age 2 to age 7; during this time the child is learning to manipulate his body through actions such as running, jumping, throwing, and catching. This stage is highly influenced by environment, much more so than the rudimentary movement stage that precedes it. Children are typically in school at this stage, and physical activity and games are necessary for proper motor development. Movements initially start out uncoordinated and poorly controlled, but as the child advances in age, movements become more refined, coordinated, and efficient.

During the stage of specialized movement, children learn to combine the fundamental movements and apply them to specific tasks. This stage can be subdivided into two shorter substages: a transitional substage and an application substage. The transitional substage is where fundamental movements are combined; for example, grasping, throwing, and jumping are combined to shoot a basket in basketball. The application substage is defined more by conscious decisions to apply these skills to specific types of activity; for example, one child might choose to play basketball, whereas another might use the same set of skills and abilities to play baseball. Additionally, the application of strategy to movement is now possible, with the child, say, choosing to delay shooting the basketball until she has a clear shot at the basket.

Ultimately, children progress to a lifelong application stage, typically beginning in adolescence and progressing through adulthood. During this time movements are continually refined and applied to normal daily activities as well as recreational and competitive activities.

Early Brain Development

During prenatal development, the brain actually produces more neurons than needed. At birth, humans have the highest number of neurons at any point in their life, and these are pruned throughout the ensuing lifetime. However, the immature brain does not have many neural networks, or codified routes for information processing (the types that are generated in response to learning and experience throughout a lifetime). During infancy and early childhood development, these neurons form neural networks, and networks are reinforced by learning and behavior. From ages 3 to 6, the most rapid growth occurs in the frontal lobes, corresponding to an increase in rational planning and attention. The association areas, linked with thinking, memory, and language, are the last cortical areas to develop. (For more information on cognitive development, see Jean Piaget in Chapter 4.)

Maturation is the sequence of biological growth processes in human development. Maturation, while largely genetic, is still influenced by environment. For example, while humans are programmed to learn how to speak, first using one-word utterances, then developing progressively more complex speech, severe deprivation can significantly delay this process, while an incredibly nurturing environment might speed it up. The developing brain allows for motor development; as the nervous system and muscles mature, more and more complex physical skills develop. The sequence of motor development is almost entirely universal: babies learn to roll over, then sit, then crawl, then stand, then walk (see rudimentary movements above). The development of the cerebellum is a necessary precursor to walking, and most humans learn to walk around age one.

The average age of earliest conscious memory is roughly 3.5 years. Before this age, we are unable to remember much, if anything; this is referred to as infantile amnesia. Even though humans are unable to recall memories from this period, babies and young children are still capable of learning and memory. In one famous experiment, a researcher tied a string to an infant’s foot and attached the other end of the string to a mobile. When the baby kicked its foot, the mobile moved. Babies demonstrated learning (they associated kicking with mobile movement) because they kicked more when attached to the mobile, both on the day of the experiment and the day after. Interestingly, if the babies were attached to a different mobile, they did not kick more; however, when attached to the same mobile a month later, they remembered the association and began kicking again.

Social Development and Attachment

Humans are social organisms. From approximately 8-12 months of age, young children display stranger anxiety (crying and clinging to caregiver). Around this time, infants have developed schemas for familiar faces, and when new faces do not fit an already developed schema, the infant becomes distressed. Infant-parent attachment bonds are an important survival impulse. Stranger anxiety seems to peak around 13 months for children and then gradually declines. For many years it was assumed that infants attached to their parents because they provided nourishment, but an accidental experiment actually countered this assumption.

In the 1950s, two psychologists (Harry Harlow and Margaret Harlow) bred monkeys for experiments. To control for environment and to reduce the incidence of disease, infant monkeys were separated from their mothers at birth (maternal deprivation) and provided with a baby blanket. When the blankets were removed for laundering, the baby monkeys became very distressed because they had formed an intense attachment to the object. This physical attachment seemed to contradict the idea that attachment was formed based on nourishment, so the Harlows designed a series of experiments to further investigate. In one experiment, the Harlows fashioned two artificial mothers: one nourishing (a wire frame with a wooden head and a bottle) and one cloth (a wire frame with a wooden head and a cloth blanket wrapped around it). They found that the baby monkeys preferred the cloth mother, clinging to her and spending the majority of their time with her, and visiting the other mother only to feed. Harlow concluded that "contact comfort" was an essential element of infant/mother bonding, and essential to psychological development. Keep in mind, however, that even though these baby monkeys were provided with a surrogate mother, this mother was still largely inadequate. Therefore, when the monkeys from these experiments matured, they demonstrated social deficits when reintroduced to other monkeys. Harlow's monkeys demonstrated aggressive behavior as adults, were unable to socially integrate with other monkeys, and did not mate. If female monkeys were artificially inseminated, they would neglect, abuse, or even kill their offspring.

Mary Ainsworth conducted a series of experiments called the "strange situation experiments," where mothers would leave their infants in an unfamiliar environment (usually a laboratory playroom) to see how the infants would react. These studies suggested that attachment styles vary among infants. Securely attached infants in the presence of their mother (or primary caregiver) will play and explore; when the mother leaves the room, the infant is distressed, and when the mother returns, the infant will seek contact with her and is easily consoled. Insecurely attached infants in the presence of their mother (or primary caregiver) are less likely to explore their surroundings and may even cling to their mother; when the mother leaves, they will either cry loudly and remain upset or will demonstrate indifference to her departure and return. Observations indicate that securely attached infants have sensitive and responsive mothers (or primary caregivers) who are quick to attend to their child's needs in a consistent fashion. Insecurely attached infants have mothers (or primary caregivers) who are insensitive and unresponsive, attending to their child's needs inconsistently or sometimes even ignoring their children. In the Harlow's monkeys experiments described above, the cloth mother would be considered rather insensitive and unresponsive; when these monkeys were put in situations without their artificial mothers, they became terrified 6 .

Psychologists believe that early interactions with parents and caregivers lay the foundation for future adult relationships. Securely attached infants grow up to demonstrate better social skills, a greater capacity for effective intimate relationships, and are better able to promote secure attachments in their children. Alternatively, children who are neglected or abused are more likely to neglect or abuse their own children. Note that more likely does not imply destiny; most abused children do not grow up to abuse their own children. Humans display a large degree of resiliency, and most insecurely attached or abused children grow into normal adults.

Parenting styles vary but tend to fall largely into three categories: authoritarian, permissive, and authoritative.

• Authoritarian parenting involves attempting to control children with strict rules that are expected to be followed unconditionally. Authoritarian parents will often utilize punishment instead of discipline, and will not explain the reasoning behind their rules. Typically, authoritarian parents are very demanding, but not very responsive to their children, and do not provide much warmth or nurturing. Children raised by authoritarian parents may display more aggressive behavior towards others, or may act shy and fearful around others, have lower self-esteem, and have difficulty in social situations.

• Permissive parents, on the other hand, allow their children to run the show. With few rules and demands, these parents rarely discipline their children. Permissive parents are very responsive and loving toward their children, but are rather lenient; if rules exist, they are enforced inconsistently. Children raised by permissive parents tend to lack self-discipline, may be self-involved and demanding, and may demonstrate poor social skills.

• Authoritative parents listen to their children, encourage independence, place limits on behavior and consistently follow through with consequences when expectations are not met, express warmth and nurturing, and allow children to express their opinions and to discuss options. Authoritative parents have expectations for their children, and when children break the rules they are disciplined in a fair and consistent manner. Authoritative parenting is considered the "best" parenting style, as it tends to produce children who are happier, have good emotional control and regulation, develop good social skills, and are confident in their abilities.

Please remember that parenting style and a child's disposition are merely correlated; while it is possible that parenting style causes these outcomes in children, there are other possible explanations as well [what are some potential alternative explanations for these results? 7 ].


Despite the fact that infancy is crucial for development, development continues throughout our lifetime. Adolescence 8 is the transitional stage between childhood and adulthood; this period roughly begins at puberty and ends with the achievement of independent adult status. Therefore, adolescence generally encompasses the teenage years. Adolescence involves many important physical, psychological, and social changes. The onset of puberty (typically around age 10 or 11 in girls, and age 11 or 12 in boys) involves surging estrogens and androgens (sex hormones) that cause a cascade of physical changes. In girls, increased estrogen causes the development of secondary sex characteristics (increased body and pubic hair, increased fat distribution, breast development) as well as the initiation of the menstrual cycle. In boys, increased testosterone (the primary androgen) also causes the development of secondary sex characteristics (increased body and pubic hair, increased muscle mass, voice deepening, enlargement of the penis and testes), and the onset of ejaculation. While the sequence of events in puberty is fairly predictable, the onset of these events is less so, which can be distressing. For example, early puberty for a girl means that she will begin developing breasts and menstruating before her peers, which can be psychologically upsetting.

During adolescence, the brain undergoes three major changes: cell proliferation (in certain areas, particularly the prefrontal lobes and limbic system), synaptic pruning (of unused or unnecessary connections), and myelination (which strengthens connections between various regions). The prefrontal cortex (responsible for abstract thought, planning, anticipating consequences, and personality) continues to develop during this period 9 . The limbic system (involved in emotion) develops more rapidly than the prefrontal cortex during adolescence, which may explain behavior that appears to be emotionally rather than rationally driven. Though it may seem contradictory, adolescents are actually improving their self-control, judgment, and long-term planning abilities during this time.

Adulthood and Later Life

While the transition to "adulthood" is not marked by any definitive biological event (indeed, the term is essentially defined by society), attainment of "adulthood" is marked by a feeling of comfortable independence. Interestingly, while childhood and adolescence are marked by clear developmental milestones and attainment of physical abilities, adulthood is less clearly defined. For example, if you met a 4-year-old and a 14-year-old, you would probably be able to reasonably guess at some of the things they were and were not capable of, and the differences between them would be drastic. If, on the other hand, you met a 40-year-old and a 50-year-old, it may be much harder to pinpoint the difference, if there was much of one at all.

Process of Encoding Information

As you may recall from Information Processing models, information first enters a sensory register before encoding occurs. Encoding is the process of transferring sensory information into our memory system.

Working memory (where information is maintained temporarily as part of a particular mental activity, such as learning or solving a problem) is thought to include a phonological loop, visuospatial sketchpad, central executive, and episodic buffer (Chapter 4). Working memory is quite limited, and this model helps to explain the serial position effect. This effect occurs when someone attempts to memorize a series, such as a list of words. In an immediate recall condition (shortly after the information is first presented), the individual is more likely to recall the first and last items on the list. These phenomena are called the primacy effect and the recency effect. It is hypothesized that first items are more easily recalled because they have had the most time to be encoded and transferred to long-term memory. Last items may be more easily recalled because they may still be in the phonological loop, and so may be readily available. When the individual is asked to recall the list at a later point, the individual tends to remember only the first items well. This may be because that was the only information that was transferred to long-term memory, whereas recent information from the phonological loop would quickly decay and be lost.

Processes That Aid in Encoding Memories

A mnemonic is any technique for improving retention and retrieval of information from memory. One simple process that aids memory is use of the phonological loop through rehearsal. If someone were to give you a phone number and you didn't have any way to record the information, you might repeat the digits over and over in your head until you were able to write them down. In some cases, such as the recital of the Pledge of Allegiance, repeated rehearsal can lead to encoding into long-term memory.

Chunking is a strategy in which information to be remembered is organized into discrete groups of data. For example, with phone numbers, one might memorize the area code, the first three digits, and the last four digits as discrete chunks. Thus, the number of "things" being remembered is decreased: in the case of a phone number, there are now three "things" to memorize instead of 10 individual digits. This is an important strategy because the limit of working memory is generally understood to be about seven items. Even the process of remembering that a group of letters makes a particular word involves chunking.
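The grouping idea can be sketched in a few lines of Python (the three-three-four split below is just the conventional US phone-number grouping, used here for illustration):

```python
def chunk(digits, sizes):
    """Split a string of digits into groups of the given sizes."""
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return chunks

# A 10-digit phone number becomes three "things" to hold in mind
# instead of ten individual digits.
print(chunk("8005551234", [3, 3, 4]))  # → ['800', '555', '1234']
```

The same principle applies whenever several low-level items can be bound into one meaningful unit, as when letters are remembered as a word.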

When memorizing information, people make use of hierarchies for organization. For example, imagine that a child is learning about the different animals in the zoo. It would be useful to have a category of "birds" to include ostriches, penguins, etc. and a category of "big cats" to remember lions, tigers, and so on. As the child learns more, these hierarchies are reorganized to match incoming information. When words are organized into groups, recall significantly improves. For example, it would be easier to remember the list "chair, table, desk, lamp, recliner, sofa" if you realized that all of these words were pieces of furniture.

There is some evidence that the depth of processing is important for encoding memories. Information that is thought about at a deeper level is better remembered. For example, it is easier to remember the general plot of a book than it is the exact words, meaning that semantic information (meaning) is more easily remembered than grammatical information (form) when the goal is to learn a concept. On the other hand, rhyme can be useful in aiding phonological processing. Another useful mnemonic device is to use short words or phrases that represent longer strings of information. For example, ROYGBIV is an acronym that is helpful in memorizing the colors of the rainbow (red, orange, yellow, green, blue, indigo, violet).

The dual coding hypothesis indicates that it is easier to remember words with associated images than either words or images alone. By encoding both a visual mental representation and an associated word, there are more connections made to the memory and an opportunity to process the information at a deeper level. For this reason, imagery is a useful mnemonic device. One aid for memory is to use the method of loci. This involves imagining moving through a familiar place, such as your home, and in each place, leaving a visual representation of a topic to be remembered. For recall, then, the images of the places could be called upon to bring into awareness the associated topics.

It is also easier to remember things that are personally relevant, known as the self-reference effect. We have excellent recall for information that we can personally relate to because it interacts with our own views or because it can be linked to existing memories. A useful tool for memory is to try to make new information personally relevant by relating it to existing knowledge.

Memory Storage

Types of Memory Storage

Different stores of memory include sensory memory, short-term memory, and long-term memory. Sensory memory, the initial recording of sensory information in the memory system, is a very brief snapshot that quickly decays. Two types of sensory memory are iconic memory and echoic memory. Iconic memory is brief photographic memory for visual information, which decays in a few tenths of a second. Echoic memory is memory for sound, which lasts for about 3–4 seconds. This is why sometimes in a conversation, you might ask what someone said if you had trouble hearing him or her, only to hear and make sense of the words yourself a second later. Information from sensory memory decays rapidly if it is not passed through Broadbent's filter into short-term memory. Short-term memory is also limited in duration and in capacity. Recall capacity for an adult is typically around seven items, plus or minus two. This is why phone numbers with seven digits (excluding area code) are conveniently remembered. As discussed earlier, although chunking increases the amount of information remembered by putting more information into each chunk, it is still subject to this limit of about seven chunks. Information in short-term memory is retained only for about 20 seconds, unless it is actively processed so that it can be transferred into long-term memory. Long-term memory is information that is retained, sometimes indefinitely; it is believed to have an infinite capacity.

It is important to draw a distinction between short-term memory and working memory. Short-term memory, which is strongly correlated with the hippocampus, is where new information sought to be remembered resides temporarily and is then encoded to long-term memory or forgotten. Thus, if you meet a new person, you will store the person's name in short-term memory and, perhaps through rehearsal, encode the name in long-term memory. Working memory, which is strongly correlated with the prefrontal cortex, is a storage bin to hold memories (short-term or long-term) that are needed at a particular moment in order to process information or solve a problem. For example, if you need to mentally determine the area of a triangle, you will bring the formula and your knowledge of multiplication into your working memory while you process the result.

Implicit or procedural memory refers to conditioned associations and knowledge of how to do something, while explicit or declarative memory involves being able to "declare" or voice what is known. For example, one could read a book on how to develop a great shot in basketball from cover to cover and be able to explain in great detail the necessary steps. However, this book knowledge would not likely translate into being able to execute the shot on the court without practice. Explaining the concept involves explicit or declarative memory, while not having practiced it indicates a lack of implicit or procedural memory. Semantic and episodic memory are two subdivisions of explicit memory. Semantic memory is memory for factual information, such as the capital of England. Episodic memory is autobiographical memory for information of personal importance, such as the situation surrounding a first kiss. Typically, semantic memory deteriorates before episodic memory does.

Figure 10 Types of Long-Term Memory

The distinction between explicit and implicit memory is supported by neurological evidence. Brain structures involved in memory include the hippocampus, cerebellum, and amygdala. The hippocampus is necessary for the encoding of new explicit memories. The cerebellum is involved in learning skills and conditioned associations (implicit memory). The amygdala is involved in associating emotion with memories, particularly negative memories; for example, a fear response to a dentist's drill involves fear conditioning. The roles of the hippocampus, cerebellum, and amygdala are shown by studies on patients who have the capacity for either implicit or explicit memory (but not both). For example, amnesic patients with hippocampal damage may not have declarative memory for a skill they have recently learned (due to amnesia), and yet may be able to demonstrate the skill, indicating that implicit memory exists. Interestingly, the implicit memories that infants make are retained indefinitely, but the explicit memories that infants make are largely not retained beyond about age four, a phenomenon known as infantile amnesia. It is only later, after the hippocampus has fully developed, that explicit memories are retained long-term.

Semantic Networks and Spreading Activation

If our long-term memories contained isolated pockets of information without any organization, they might be more difficult to access. A person might have numerous memories for directions, people's faces, the definitions of tens of thousands of words, and other such content; with that much information, it could be nearly impossible to find anything. Just as hierarchies are a useful tool for processing information during the encoding process, it is believed that information is stored in long-term memory as an organized network. In this network exist individual ideas called nodes, which can be thought of like cities on a map. Connecting these nodes are associations, which are like roads connecting the cities. Not all roads are created equal; some are superhighways and some are dirt roads. For example, for a person living in a city, there may be a stronger association between the nodes "bird" and "pigeon" than between "bird" and "penguin." According to this model, the strength of an association in the network is related to how frequently and how deeply the connection is made. Processing material in different ways leads to the establishment of multiple connections. In this model, searching through memory is the process of starting at one node and traveling the connected roads until one arrives at the idea one is looking for. Retrieval of information improves if there are more and stronger connections to an idea. Because all memories are, in essence, neural connections, the road analogy provides a useful visual aid in understanding access to memories; strong neural connections are like better roads.

Like any neural connection, a node does not become activated until it receives input signals from its neighbors that are strong enough to reach a response threshold. The effect of input signals is cumulative: the response threshold is reached by the summation of input signals from multiple nodes. Stronger memories involve more neural connections in the form of more numerous dendrites, the stimulation of which can summate more quickly and powerfully to threshold. Once the response threshold is reached, the node "fires" and sends a stimulus to all of its neighbors, contributing to their activation. In this way, the activation of a few nodes can lead to a pattern of activation within the network that spreads onward. This process is known as spreading activation. It suggests that when trying to retrieve information, we start the search from one node. Then, we do not "choose" where to go next; rather, the activated node spreads its activation to the nodes around it to an extent related to the strength of its association with each of them. This pattern continues, with well-established links carrying activation more efficiently than more obscure ones. The network approach helps explain why hints may be helpful: they serve to activate nodes that are closely connected to the node being sought after, which may therefore contribute to that node's activation. It also explains the relevance of contextual cues. If you are reading this book while jumping up and down on a trampoline, you are more likely to later recall this information if you are once again on the trampoline. This is because you would have developed some associations between the learned information and the cues in the environment when learning the information.
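The spreading-activation search described above can be sketched as a toy graph algorithm. In the Python sketch below, the node names, link weights, threshold, and number of steps are all invented for illustration; the point is only that activation flows more readily along strong associations:

```python
# A toy semantic network: each node maps to its neighbors and the strength
# of the association (stronger links carry more activation).
network = {
    "bird":    {"pigeon": 0.9, "penguin": 0.2, "wings": 0.7},
    "pigeon":  {"bird": 0.9, "city": 0.6},
    "penguin": {"bird": 0.2, "ice": 0.8},
    "wings":   {"bird": 0.7},
    "city":    {"pigeon": 0.6},
    "ice":     {"penguin": 0.8},
}

def spread(start, steps=2, threshold=0.3):
    """Spread activation outward from a starting node. On each step, nodes
    whose activation exceeds the threshold 'fire' and pass activation to
    their neighbors in proportion to the association strength."""
    activation = {start: 1.0}
    for _ in range(steps):
        incoming = {}
        for node, level in activation.items():
            if level < threshold:
                continue  # below threshold, the node does not fire
            for neighbor, weight in network[node].items():
                incoming[neighbor] = incoming.get(neighbor, 0.0) + level * weight
        for node, total in incoming.items():
            activation[node] = max(activation.get(node, 0.0), total)
    return activation
```

Starting the search at "bird", the strongly linked "pigeon" node receives far more activation than the weakly linked "penguin", and a node reachable only through a sub-threshold node (here, "ice" behind "penguin") never activates at all, which is the network-model account of why some memories surface easily and others need a hint.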

Figure 11 Example of Spreading Activation Theory

Recall, Recognition, and Relearning

Retrieval is the process of finding information stored in memory. When most people think of retrieval, they think of recall, the ability to retrieve information. Free recall involves retrieving the item "out of thin air," while cued recall involves retrieving the information when provided with a cue. For example, a test of free recall would be to ask a student to name all of the capital cities of the world. A test of cued recall would be to provide the student with a list of countries and then ask him or her to name all of the capital cities of the world. Another type of retrieval is recognition, which involves identifying specific information from a set of information that is presented. One recognition task would be a multiple-choice question. Finally, relearning involves learning material that was previously learned. Once we have learned and forgotten something, we are able to relearn it more quickly than when it was originally learned, which suggests that the information was still in the memory system to be retrieved.

Retrieval Cues

Retrieval cues provide reminders of information. Within the network model of memory, we have already discussed how hints may activate a closely related node, making it easier to retrieve the node being searched for. Prior activation of these nodes and associations is called priming. Often, this process occurs without our awareness. For example, if you are shown several red items and then asked to name a fruit, you will be more likely to name a red fruit. The best retrieval cues are often contextual cues whose associations were formed at the time the memory was encoded, such as tastes, smells, and sights. Almost everyone has had the experience of not recognizing someone familiar because of seeing the person in another context. For example, running into your coffee shop barista at a concert might make it harder to recognize her or him. Or a man may associate happiness with beagles because he had one as a child. When looking in a shelter to adopt a dog decades later, he may find himself emotionally drawn to select a beagle. Although he may not consciously be thinking of his childhood dog, his memories predispose him to connect a beagle with feeling happy.

The Role of Emotion in Retrieving Memories

In addition to words, events, and sensory input serving as retrieval cues, emotion can also serve as a retrieval cue. What we learn in one state is most easily recalled when we are once again in that emotional state, a phenomenon known as mood-dependent memory. Thus, when someone is depressed, events in the past that were sad are more likely to emerge to the forefront of his or her mind. This plays a role in maintaining the cycle of depression. When we are happy, we tend to remember past times that were also happy. In addition, emotion can bias the recall of memories. If someone is angry at a friend, the person is more likely to feel that the friendship has always been rotten, whereas in a moment when the friendship feels joyful, the person is more likely to perceive the relationship as having always been a joyful one.

Remembering information is achieved through the process of paying attention, encoding, retaining information (storage), and finally retrieval. Failure along any step of this process can cause forgetting. A failure to pay attention or encode means that the information never got into the memory system. A failure to store information is decay. A failure in retrieval could result from a lack of retrieval cues or interference.

Aging and Memory

Older adults vary in their memory abilities. Decline in memory is influenced by how active the person is: increased activity (both physical and mental) is a protective factor against neuronal atrophy. Memory loss may parallel the age-related loss of neurons. As we age, memory decline tends to follow some common trends, with certain types of memory being affected earlier. As you might expect, older adults have accumulated many experiences and so have a rich network of nodes and associations. Information that is meaningful and connects well to that existing web of information, and information that is skill-based, show less decline with age. However, there is greater decline for information that is less meaningful and less richly connected.

Due to having a more extensive memory network, retrieval can also become trickier with time. Older adults show minimal decline in recognition, but greater decline in free recall. One type of recall is prospective memory, remembering to do things in the future. Prospective memory is stronger when there are cues in the environment. As an example, an older adult may be asked to remember to take a particular medication three times a day. However, unless there is a reminder cue such as a readily visible pillbox or an alarm, it may be difficult to remember that there is a task that needs to be completed. Thus, the person fails to "remember to remember." Difficulty with prospective memory without cues also makes it difficult to complete time-based tasks, since one must remember to look at a clock or keep track of a schedule.

Memory Dysfunctions

Remember that memory has a neurological basis, with the hippocampus playing a role in the encoding of new explicit memories, the cerebellum playing a role in encoding implicit memories, and the amygdala helping to tie emotion to memories. Once information is in long-term memory, it is stored in various areas spread throughout the brain. Damage to parts of the brain by strokes, brain tumors, alcoholism, traumatic brain injuries, and other events can cause memory impairment. Patients with damage to the hippocampus could develop anterograde amnesia, an inability to encode new memories, or retrograde amnesia, an inability to recall information that was previously encoded (or both types of amnesia). In addition, neurological damage involving neurotransmitters can also cause memory dysfunction. One theory about the cause of Alzheimer's disease, for example, involves an inability to manufacture enough of the neurotransmitter acetylcholine, which results in, among other things, neuronal death in the hippocampus.

Memory decay results in a failure to retain stored information. Even if information is successfully encoded into memory, it can decay from our memory storage and be forgotten. However, decay does not happen in a linear fashion. Rather, the &ldquoforgetting curve&rdquo indicates that the longer the retention interval, or the time since the information was learned, the more information will be forgotten, with the most forgetting occurring rapidly in the first few days before leveling off. It is unclear why memories fade or erode with the passage of time. It is possible that the brain cells involved in the memory may die off, or perhaps that the associations among memories need to be refreshed in order not to weaken.

Figure 12 Forgetting Curve
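The rapid-then-leveling shape of the forgetting curve is often modeled as exponential decay. The sketch below assumes an Ebbinghaus-style form R = e^(-t/S); the stability parameter S is arbitrary here, chosen only to illustrate the shape rather than taken from the text:

```python
import math

def retention(t_days, stability=2.0):
    """Fraction of material retained after t days, modeled as exponential
    decay R = e^(-t/S). The stability S is a free parameter, not a value
    from the text; larger S means slower forgetting."""
    return math.exp(-t_days / stability)

# Most forgetting happens in the first few days, then the curve levels off.
for t in [0, 1, 2, 7, 30]:
    print(f"day {t:2d}: retained {retention(t):.2f}")
```

Under this model, the drop from day 1 to day 2 is far larger than the drop from day 7 to day 30, matching the steep early loss and later plateau described above.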


Interference can result in a failure to retrieve information that is in storage. The passage of time may create more opportunity for newer learning to interfere with older learning, which is especially common if the learned information is similar. Proactive interference happens when information previously learned interferes with the ability to recall information learned later. For example, remembering where you had parked your car in a parking garage will be more difficult once you have parked in that parking garage for months in different locations. Retroactive interference happens when newly learned information interferes with the recall of information learned previously. For example, someone who has moved frequently may find that learning new addresses and directions interferes with his or her ability to remember old addresses and directions. Of course, old and new information do not always interfere. Sometimes, old information facilitates the learning of new information through positive transfer. For example, knowing how to play American football may make it easier for someone to learn how to play rugby.

Memory Construction and Source Monitoring

Our memories are far from being snapshots of actual experience. We already know that when memories are encoded, they pass through a "lens"; the mood and selective attention of the observer influence how they are encoded. Memory is once again altered when passing through the "lens" of retrieval. When we remember something, we do not pull from a mental photo album; rather, we draw a picture, constructing the recalled memory from information that is stored. This process is not foolproof.

Sometimes the information that we retrieve is based more on a schema than on reality. A schema is a mental blueprint containing common aspects of some part of the world. For example, if asked to describe what your 4th grade classroom looked like, you might "remember" a chalkboard, chalk, desks, posters encouraging reading, and books, based on your schema for such a classroom, even though the actual room may not have had posters. In this way, when we construct a memory, we tend to "fill in the blanks" by adding details that may not have been present at the time. We may also unknowingly alter details. For example, in eyewitness testimony, leading questions often cause witnesses to misestimate or misremember. When participants in an experiment were asked how fast cars were going when they smashed into each other, instead of just hit each other, they indicated higher speeds. Individuals in the first group also reported seeing broken glass and car parts, when there actually were none. After people are exposed to subtle misinformation, they are usually susceptible to the misinformation effect, a tendency to misremember.

Individuals may also misremember when asked to repeatedly imagine nonexistent actions and events. Simply repeatedly imagining that one did something can create false memories for an event. False memories are inaccurate recollections for an event and may be the result of the implanting of ideas. For example, if one repeatedly imagined being lost as a child in a shopping mall, this imagined occurrence would begin to feel familiar, and as it felt more familiar, it would take on the flavor of a real memory. In fact, it can be very difficult for people to distinguish between real memories and false memories by feeling, because both can be accompanied by emotional reactions and the sense of familiarity. For this reason, an individual’s confidence in the validity of a memory has not been found to be a good indication of how valid it actually is.

When recalling information, people are also susceptible to forgetting one particular fact: the information’s source. This is an error in source monitoring. For example, you may find yourself angry with an individual in your life for doing something hurtful, only to later realize that the action occurred in a dream. Or you may recognize someone, but have no idea where you have seen the person before.

Changes in Synaptic Connections Underlie Memory and Learning

Neuroscientists have had a difficult time in their search for a physical basis for memory. No central location for memories has been found, and there seems to be no such thing as special memory neurons. The process of forming memories involves electrical impulses sent through brain circuits. Somehow, these impulses leave permanent neural traces that are physical representations of information. More and more evidence indicates that what is important for memory and for learning is the synapses: those sites where nerve cells communicate with each other through neurotransmitters.

Neural Plasticity

It was once believed that after the brain develops in childhood, it remains fixed. However, scientists are finding that the brain is not a static organ. Neural plasticity refers to the malleability of the brain’s pathways and synapses based on behavior, the environment, and neural processes. In fact, the brain undergoes changes throughout life. As you will see, changes in memory and learning are reflected physiologically by changes in the associations between neurons. Connections in the brain are constantly being removed and recreated. In fact, if someone sustains a brain injury, neurons will reorganize in an attempt to compensate for or work around the impaired connections. As an example, shortly after someone becomes blind, neurons that were devoted to vision take on different roles, potentially improving other sensory perception. Furthermore, while it was previously thought that neurons of the central nervous system were irreplaceable, neurogenesis, the birth of new neurons, has been found to occur to a small extent in the hippocampus and cerebellum.

Memory and Learning

“What fires together, wires together.” In other words, nearby neurons that fire impulses simultaneously form associations with each other. These associations can create neural nets, or patterns of activation, that represent information that is learned or stored in memory. Therefore, if any part of the neural net is activated, a memory may be recalled. This provides a neurological basis for the usefulness of retrieval cues discussed earlier. The process of learning and memory through the lifetime does not involve the enlarging of the brain or the gaining of neurons, but rather involves increased interconnectivity of the brain through increasing the synapses between existing neurons. As neurons fire together, more associations are formed. The strength of these associations is further based on the frequency with which simultaneous firing occurs, and other factors such as the presence of emotion (which strengthens associations).
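The “fire together, wire together” idea can be made concrete with a toy Hebbian learning rule, in which the connection between two units is strengthened in proportion to their joint activity. The sketch below is purely illustrative; the unit activities, learning rate, and weight values are assumptions, not empirical quantities.

```python
# Toy illustration of Hebbian learning: "what fires together, wires together".
# Units that are repeatedly co-active develop stronger pairwise connections,
# forming a simple "neural net" of associations. All values are illustrative.

def hebbian_update(weights, activity, learning_rate=0.1):
    """Strengthen the connection between every pair of co-active units."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                # Weight change is proportional to joint activity.
                weights[i][j] += learning_rate * activity[i] * activity[j]
    return weights

# Three units; units 0 and 1 fire together repeatedly, unit 2 stays silent.
w = [[0.0] * 3 for _ in range(3)]
for _ in range(5):
    w = hebbian_update(w, [1, 1, 0])

print(w[0][1])  # association between the co-active units has grown
print(w[0][2])  # no association with the silent unit: 0.0
```

With repeated co-activation, the association between units 0 and 1 grows while the silent unit forms none, mirroring how the frequency of simultaneous firing strengthens a neural net.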

Chapter 5 Summary

• Nonassociative learning occurs when an organism is repeatedly exposed to a stimulus and includes habituation and sensitization.

• Associative learning occurs when an organism learns that an event, object, or action is connected with another; the two major types are classical conditioning and operant conditioning.

• Classical conditioning pairs a neutral stimulus with an unconditioned stimulus to generate a conditioned stimulus and conditioned response (for example, Pavlov’s dogs). In acquisition the response is learned, in extinction the response is “lost,” and in spontaneous recovery an extinct response occurs again when the stimulus is presented after some period of time.

• Taste-aversion is a very strong and long-lasting association between a specific taste or smell and illness; taste-aversion challenges some of the tenets of classical conditioning because it is learned quickly (after one exposure) and is very slow to extinguish.

• Operant conditioning uses reinforcement and punishment to mold behavior and eventually cause associative learning; B. F. Skinner and his work with rats and pigeons in a “Skinner box” is a famous example of operant conditioning.

• Reinforcement increases the likelihood that a preceding behavior will be repeated; positive reinforcement is a pleasant stimulus that occurs immediately following a behavior, whereas negative reinforcement is the removal of an aversive stimulus immediately following a behavior.

• A fixed-ratio schedule provides the reinforcement after a set number of instances of the behavior, while a variable-ratio schedule provides the reinforcement after an unpredictable number of occurrences.

• A fixed-interval schedule provides the reinforcement after a set, constant period of time, while a variable-interval schedule provides the reinforcement after an inconsistent amount of time.

• Punishment is a consequence that follows a behavior and decreases the likelihood that the behavior will be repeated; positive punishment pairs an aversive stimulus with the behavior, while negative punishment removes a reinforcing stimulus after the behavior has occurred.

• Insight learning occurs when previously learned behaviors are suddenly combined in unique ways.

• Latent learning is learning that occurs without obvious reinforcement and remains hidden until it is required, at which point the behavior can manifest quickly.

• Long-term potentiation occurs when, following brief periods of stimulation, a persistent increase in the synaptic strength between two neurons leads to stronger electrochemical responses to a given stimulus.

• Observational learning is a social process; in modeling, an observer sees a behavior and later imitates it. Albert Bandura’s Bobo doll experiment is a famous example of modeling.

• Mirror neurons have been identified in various parts of the human brain, and are believed to fire when observing another person performing a task.

• The elaboration likelihood model of persuasion is a theory that attitudes are formed by dual processes: the central processing route (which involves high motivation and deep processing of the message) or the peripheral processing route (which involves low motivation and superficial processing focused on the messenger).

• Behavioral genetics attempts to determine the role of inheritance in behavioral traits; the interaction between heredity and experience determines an individual’s personality and social behavior.

• Twin studies and adoption studies are used to help elucidate the roles of genetic effects, shared environment, and unique environment in shaping behavior.

• Newborns have many automatic behaviors, called reflexes (such as the Moro reflex, rooting reflex, and suckling reflex), which are useful for survival.

• Important information about attachment was discovered through studies conducted by Mary Ainsworth, and about the impact of deprivation through studies conducted by Harry and Margaret Harlow.

• A mnemonic is any technique for improving retention and retrieval of information from memory.

• Short-term memory is limited in duration and in capacity; recall capacity for an adult is typically around seven items, plus or minus two.

• Long-term memory is information that is retained, sometimes indefinitely; it is believed to have an infinite capacity. Long-term memory consists of implicit (procedural) memory and explicit (declarative) memory; explicit memory includes episodic and semantic memory.

• The spreading activation theory of memory posits that during recall, nodes (concepts) are activated, which are connected to other nodes, and so on.

• Anterograde amnesia is an inability to encode new memories, while retrograde amnesia is an inability to recall information that was previously encoded.

• Decay results in a failure to retain stored information; decay does not occur linearly, but is a function of the amount of time since the information was learned.

• Proactive interference happens when information previously learned interferes with the ability to recall information learned later; retroactive interference happens when newly learned information interferes with the recall of information learned previously.
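The ratio schedules summarized above can be sketched as a small simulation. This is a hypothetical illustration (the function names and parameters are invented for the example): a fixed-ratio schedule reinforces every nth response, while a variable-ratio schedule reinforces after an unpredictable number of responses. Interval schedules would instead gate reinforcement on elapsed time rather than response count.

```python
# Illustrative sketch of ratio reinforcement schedules (names/values invented).
import random

def fixed_ratio(responses, n):
    """Reinforce on every nth response."""
    return [(r % n == 0) for r in range(1, responses + 1)]

def variable_ratio(responses, mean_n, rng):
    """Reinforce after an unpredictable number of responses, averaging mean_n."""
    rewards, next_reward = [], rng.randint(1, 2 * mean_n - 1)
    for r in range(1, responses + 1):
        if r >= next_reward:
            rewards.append(True)
            next_reward = r + rng.randint(1, 2 * mean_n - 1)
        else:
            rewards.append(False)
    return rewards

fr = fixed_ratio(12, 4)
print(fr.count(True))   # exactly 12 // 4 = 3 reinforcements, predictably spaced

vr = variable_ratio(1000, 4, random.Random(0))
print(vr.count(True))   # roughly 1000 / 4 reinforcements, unpredictably timed
```

The unpredictability of the variable-ratio schedule is what makes the reinforced behavior especially persistent, since the organism cannot tell which response will be rewarded next.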


1. Retroactive interference occurs when:

A) old information interferes with learning new material.

B) new material interferes with recalling old material.

C) new information decays over time.

D) old information decays over time.

2. Which of the following types of memory does not affect behavior consciously and can be measured only indirectly?

3. A five-year-old boy has formed a habit of writing on his parents’ living room walls. Based on operant conditioning principles, which of the following types of punishment would be least effective in stopping this behavior?

A) Giving the child a time out immediately after he writes on the wall, every time the child writes on the wall.

B) Providing the child with a cookie at the end of each day that he abstains from writing on the walls.

C) Spanking the child (an intense punishment) every time that the child writes on the wall.

D) Punishing the child occasionally, when the parents happen to notice writing on the wall.

4. Jay joins a social media website to lose weight, and he receives points based on the intensity of his daily exercise and praise from fellow website users for each workout he logs on the website. This increases his exercise frequency and intensity. Eventually he stops logging onto the website, but continues to exercise with increased frequency. This is an example of:

A) vicarious reinforcement.

5. A student cramming for finals memorizes the steps in solving a physics problem early in the afternoon and then studies for his other subjects for several hours before his physics exam. When he arrives at the exam he can no longer remember how to solve the physics equation. This is an example of:

A) retroactive interference.

6. A researcher studying several patients gives each of them the same maze to solve. Although each works independently on it for 30 minutes – with varying degrees of success – none of them recalls seeing the maze when presented with it the next day. Nonetheless, their overall speed and success in solving it has improved significantly. These patients are likely experiencing impairment in:


Elizabeth Loftus is widely known as one of the leading experts in the field of false memories, especially regarding childhood sexual abuse. However, this particular topic is deeply controversial, with many experts divided over whether these memories are truly false, or if they are instead repressed to protect the individual from reliving further trauma. Loftus is most famous for her theory of the misinformation effect, which refers to the phenomenon in which exposure to incorrect information between the encoding of a memory and its later recall causes impairment to the memory. That is to say, if you witnessed a hit-and-run car accident, and heard a radio commercial for Ford before giving your testimony to the police, you might incorrectly recall that the offending vehicle was a Ford, even if it was not. Loftus’s research has been used in many cases of eyewitness testimony in high-profile court cases to demonstrate the malleability of human memory.

To test this theory, researchers in New York City set up a “crime” for participants to “witness” (unbeknownst to them). 175 local female college students were recruited to participate in a study about memory, and were directed to complete some computer tasks involving word and picture recall in a room overlooking an alley. While completing the computer tasks, participants witnessed a young woman being “mugged” by a young man in the alley outside the lab; both individuals were confederates of the researchers. After reporting the “crime” to the researchers, participants were escorted out of the lab and told that this crime would be reported to the local police, and that they might be called back in to give a testimony. For half of the participants, a research confederate acting as a custodial worker was present as they were being escorted out. For the other half, no decoys were present. Participants were randomly assigned to either the decoy or control group. Participants who did not report the “crime” to the researchers were excluded from the study (25 women were excluded).

One week later, participants were called back to the lab to give their testimony to a police officer – another confederate. Participants were told that the police had several leads on who the mugger might be, and were asked to pick out the suspect from five different photo options. Included in the photo set were photos of the mugger, the custodial worker, and three neutral faces chosen to be similar to the two experimental faces. After recalling the event to the police officer and choosing a face, participants were debriefed (they were told that the mugging was fake) and awarded course credit for their participation. The results of this study are summarized in Table 1.

Table 1 Number of positive identifications in photo line-up

1. What conclusions can be drawn from the data presented in Table 1?

A) The misinformation effect is present in the decoy group.

B) The control group had a better memory than the decoy group.

C) There are no significant differences between the decoy and the control group.

D) No conclusions can be drawn from these data.

2. What part of the brain is most associated with the formation of long-term memories?

3. The inability to form new memories is called:

4. Suppose that after selecting someone from the photo line-up, all of the subjects in the control group watched a ten-minute film presentation in which a “police officer” provided additional evidence about why the custodial worker (whom the control subjects never met) was suspected to be the culprit responsible for the mugging. Half of the control subjects had a “very handsome” police officer presenting the information, and the other half had an “unattractive” police officer presenting the same information. 85% of the control subjects who watched the video with the handsome police officer either changed their answer to the custodial worker (or if they had initially selected the custodial worker, confirmed that selection). 45% of the control subjects who watched the video with the unattractive police officer either changed their answer to the custodial worker (or if they had initially selected the custodial worker, confirmed that selection). The elaboration likelihood model suggests that the discrepancy in the two groups is based on:

A) the peripheral route of information processing.

C) message characteristics.

D) the central route of information processing.

5. What type of memory is used in a multiple-choice test, such as this one?

6. What are the three main stages of memory, according to the information processing perspective?

A) Encoding, storage, and retrieval.

B) Recognition, detection, and regurgitation.

C) Consolidation, reconsolidation, and recovery.

D) Identification, encrypting, and reclamation.

7. What part of the brain is responsible for procedural memories for skills?


1. B Retroactive interference is a type of memory interference in which new information interferes with our ability to recall older material (choice B is correct). Proactive interference occurs when old information interferes with learning new material (choice A is wrong). Answer choices C and D refer to memory decay, which occurs regardless of interference (choices C and D are wrong).

2. A Nondeclarative memory, or implicit memory, is a form of memory that is not conscious. It is the autopilot of memory (so it does not affect behavior consciously) and may be difficult to verbalize, making it measurable only indirectly (choice A is correct). Declarative memory, also referred to as explicit memory, is a long-term recollection that can be consciously or intentionally called upon (choices B and D are wrong). Episodic memory is a type of declarative memory that is responsible for the recall of autobiographical events (choice C is wrong).

3. D Operant conditioning employs consequences to modify behavior. Reinforcement is more effective at modifying behavior than punishment is, but for both, the timing/schedule and intensity of the reinforcement or punishment are important. In general, delivering punishment consistently for every occurrence of the behavior produces more effective suppression of the behavior than does delivering punishment intermittently or inconsistently. Punishing the child only when the negative behavior happens to be noticed would not qualify as consistent and would thus not be very effective (choice D is correct). Immediacy, or delivering the punishment immediately after the act, will increase the effectiveness of the punishment, as will consistency (choice A would be more effective than D, and is therefore wrong). Positive reinforcement is generally more effective than punishment for increasing the frequency of a desired behavior (not writing on the walls), and diminishes the child’s motivation to engage in the undesired response (choice B is more effective than D, and is therefore wrong). Finally, while severe punishment can have undesirable side effects, in general, the more intense the punishment, the more effective the punishment is in producing major, rapid, and long-lasting suppression (choice C is more effective than D, and is therefore wrong).

4. B Operant conditioning is accomplished when someone receives a reward after performing a task; after the person has performed the task and received the reward enough times, they will perform the task without the reward (choice B is correct). Vicarious reinforcement involves watching another person receive a reward for his or her behavior; there is no mention of Jay being motivated by other people getting rewarded (choice A is wrong). An innate behavior is one that does not need to be conditioned, and is therefore not what is being described in the question stem (choice C is wrong). Classical conditioning is accomplished by pairing two stimuli, one that is neutral with another that is unconditioned. Over time, the neutral stimulus becomes the conditioned stimulus. Since the question is not describing the pairing of two stimuli, nor is a stimulus presented before the behavior, classical conditioning does not explain the behavior described in the question stem (choice D is wrong).

5. A Retroactive interference occurs when new information interferes with the storage of information learned beforehand (choice A is correct). Proactive interference occurs when information that is learned first interferes with the ability to recall later information; the opposite is described in this question (choice B is wrong). Retrieval cues are used to retrieve stored memories; there is no mention of retrieval cues in this question (choice C is wrong). Long-term potentiation is part of long-term memory storage; while a failure to remember information is partially a result of a failure to consolidate information via long-term potentiation, it does not explain the interference of information learned after memorizing the steps to a physics problem (choice D is wrong).

6. B Item I is false: the subjects have no recall of the maze, meaning that their declarative memory (long-term, concerning specific facts, details, situations, and context) is not functioning properly; procedural memory, which concerns development of specific skills for how to do something, is biologically distinct from declarative memory, and must be functioning if they display improvement on the maze (choices A and C can be eliminated). Note that both remaining answer choices include Item II, so Item II must be true: episodic memory (part of declarative memory) includes memory of events that have been experienced personally. Amnesic patients with hippocampal damage may not have declarative memory for a skill they have recently learned (due to amnesia), and yet may be able to demonstrate the skill, indicating that implicit (procedural) memory exists, much like the patients described in the question stem. Item III is false: echoic memory, part of the short-term sensory memory system, is brief memory for sound. The question stem does not provide any information about the patients’ ability to process or remember sound information (choice D can be eliminated and choice B is correct).


1. A Based on the data presented in Table 1, one can reasonably conclude that the misinformation effect is present in the decoy group. As noted in the table, the overwhelming majority of participants positively identified the custodial worker as the mugger, most likely because his face was presented to the participants before the memory of the mugging could be fully encoded (choice A is correct). Although the control group chose the mugger at a higher rate than did the decoy group, this is not enough information to reasonably conclude that the control group has a better memory (choice B is wrong). Based on the data in Table 1, there are significant differences between the decoy group and the control group. The decoy group overwhelmingly chose the custodial worker as the suspect over the other photo choices, whereas the control group chose the actual mugger at a slightly higher rate (choice C is wrong). Based on these findings, some conclusions can indeed be drawn from these data (choice D is wrong).

2. C The hippocampus is the part of the brain most commonly associated with long-term memory, as it plays a major part in constructing, integrating, and then storing information from short-term into long-term memory (choice C is correct). The prefrontal cortex stores information on an extremely temporary basis (anywhere from several seconds to several minutes), and thus is more commonly associated with short-term memory than with long-term memory (choice A is wrong). The amygdala has been implicated in encoding and storing memories that include intense emotional themes, but it works in concert with the hippocampus and other limbic brain areas, and thus is not alone responsible for long-term memory (choice B is wrong). The thalamus primarily relays sensory and motor signals, and is not typically associated with memory (choice D is wrong).

3. B Anterograde amnesia is defined as the inability to form new memories (choice B is correct). Retrograde amnesia is defined as the inability to retrieve information from one’s own past (choice A is wrong). Source amnesia is defined as the attribution of an event one has experienced, heard about, or imagined to the wrong source (choice C is wrong). Infantile amnesia is used to explain why individuals are typically unable to remember anything from before the age of 3; the human brain pathways are not yet developed enough to form lasting memories at this age (choice D is wrong).

4. A The elaboration likelihood model of persuasion is a theory that attitudes are formed by dual processes, via the central processing route (which includes deep processing of the message itself) or the peripheral processing route (which includes superficial processing focused on specifics of the messenger). Since the controls with the handsome police officer were far more likely to be persuaded that the photo of the custodial worker was the mugger, this suggests that both groups are focusing on characteristics of the person delivering the message (attractiveness). In the peripheral processing route, people are more likely to be persuaded by attractive messengers (choice A is correct). The central processing route involves focusing on the information of the message; in this scenario, the message was the same for both groups, so the central processing route does not explain the difference in outcome (choice D is wrong). Target characteristics describe the characteristics of the person receiving the message (motivation, interest); there is no information provided in the stem that would indicate that the characteristics of the two groups were responsible for the different outcome (choice B is wrong). Similarly, the message characteristics include specific features of the message itself (like length, logic, and evidence). Since the message was the same for the two groups, message characteristics do not explain the difference in outcomes (choice C is wrong).

5. B To the delight of many exhausted college students, multiple-choice tests rely most heavily on recognition memory; that is, test-takers do not have to generate the correct response, but rather have to recognize the correct response from several options (choice B is correct). On the flip side, a test that involved short-answer or essay questions would rely on recall memory, which requires individuals to retrieve previously learned information and repeat it in some context (choice A is wrong). Repressed memories are a phenomenon that is hotly debated in the legal and psychological fields, and would likely not come into play during a multiple-choice test (choice C is wrong). Déjà vu is the phenomenon in which cues from the current situation subconsciously trigger retrieval of an earlier experience, creating that “I’ve been here before” feeling. This phenomenon is not typically associated with testing (choice D is wrong).

6. A The three main stages of memory are encoding (receiving, processing, and combining information), storage (the creation of a permanent record of the encoded information), and retrieval (the recovery of the stored information in response to a particular cue or activity; choice A is correct). The other terms, while sometimes associated with the study of memory, are not part of the formal stages of memory (choices B, C, and D are wrong).

7. D The basal ganglia are the part of the brain most commonly associated with procedural memory for skills. They do this by receiving input from the cortex and storing it, but they do not send that information back to the cortex for conscious awareness. This is why one does not typically recall consciously reminding oneself how to pedal a bicycle after having learned as a child (choice D is correct). The hypothalamus is not implicated in memory, but is instead responsible for many of the autonomic functions that keep us alive, such as body temperature, hunger, thirst, sleep, et cetera (choice A is wrong). The parietal lobe is responsible for integrating sensory information, and in particular helps with determining spatial sense and navigation (choice B is wrong). The occipital lobe is the visual processing center of the brain, and is not associated with memory (choice C is wrong).

1 Mature red blood cells do not contain DNA.

2 Though MZ twins have the same genes, they don’t always have the same number of copies of a gene; this may help to explain why one develops a disease while the other does not. Furthermore, X inactivation in female somatic cells is not identical between MZ female twins; therefore, female MZ twins are actually not quite as identical as male MZ twins!

3 About 70% of MZ twins share a placenta in the womb, and in some instances, one twin will receive more blood flow, resulting in differential nutrition and growth between the two. Additionally, approximately 30% of MZ twins develop separate placentas and therefore may have slightly different prenatal conditions, as well.

4 In Greek, “exo” means “outside” and “gignomi” means “to come to be.”

5 46 total chromosomes in a normal zygote: 23 from the ovum and 23 from the sperm.

6 Note: this type of extreme deprivation experiment would no longer be considered ethical or humane today; research animals in captivity are treated much better.

7 It is possible that certain children have a genetic predisposition to be easygoing, confident, and socially adjusted, so the authoritative parent has an “easy time” raising this easy child, and the child’s resultant behavior is attributed to the parent when in actuality, there was something innate about the child that caused the parent to respond to him in that way.

8 From the Latin word adolescere, which means “to grow up.”

9 In fact, the frontal lobes are not completely developed until roughly age 26!

Balancing Prenatal Drug Exposure

Basic and clinical research unequivocally demonstrates that recreational or prescription use of drugs during pregnancy can be anathema to healthy development. There is a clear conundrum, however, with regard to the need to balance exposure (in a clinically prescribed population) versus non-exposure when considering the long-term functional impact on brain development. The difficulties relate to the fact that high prenatal stress, malnutrition, and untreated maternal psychiatric disorders can themselves increase risk for developmental disabilities in children.

With regard to recreational drugs, human addictive behavior makes this issue far more complicated than a simple public policy approach of ‘just say no’. Furthermore, the idea that illegal drugs are more harmful to the unborn fetus than legal drugs is a misconception; this concept, which strongly influences public policy, is not supported by findings from carefully designed and controlled research studies. For pregnant women who abuse drugs prior to conception, withdrawal of these drugs post-conception is not without risk to the fetus. Maternal stress can be severe during withdrawal, and general health status may decline, both of which severely impact brain development of the fetus.

We now realize that prescription drug treatments for pregnant women with psychiatric and neurological disorders can have negative effects on fetal brain development and long-term behavioral outcomes. However, the underlying maternal pathophysiology of untreated disorders can lead to high risk nutritional and stress status for both the mother and the fetus. Perhaps underscoring the level of complexity of the maternal-fetal relationship, balancing these issues makes formulation of public and medical policies less straightforward and far more difficult than generally appreciated.


Episodic memory is influenced by a multitude of genes (Papassotiropoulos & de Quervain, 2011; Rasch, Papassotiropoulos, & de Quervain, 2010; de Quervain & Papassotiropoulos, 2006) acting through different molecular mechanisms. Human aging is associated with a decline in episodic memory and with large interindividual differences in performance (Nyberg, Lövdén, Riklund, Lindenberger, & Bäckman, 2012). By combining candidate gene and receptor imaging approaches, we investigate how individual differences at the molecular level may result in between-person differences in episodic memory performance among older individuals.

Nearly three decades ago, dopamine (DA) was shown to prolong long-term potentiation (LTP; Frey, Huang, & Kandel, 1993; Frey, Schroeder, & Matthies, 1990), a cellular mechanism necessary for successful memory formation and consolidation (for a review, see Cooke & Bliss, 2006). Since then, animal studies have repeatedly shown that episodic memory performance is impaired when hippocampal DA receptors are blocked and enhanced when DA agonists are injected into the hippocampus (for a review, see Lisman & Grace, 2005). In humans, receptor imaging studies have related higher DA D2 receptor (D2DR) availability in the hippocampus to better episodic memory performance (Nyberg et al., 2016; Takahashi et al., 2007, 2008).

Two other proteins, the kidney and brain expressed protein (KIBRA) and the brain-derived neurotrophic factor (BDNF), have been implicated in hippocampal-based episodic memory. In the human brain, KIBRA is mainly expressed in the hippocampus and interacts with proteins involved in LTP (Schneider et al., 2010). BDNF promotes synaptic plasticity and is crucial for hippocampus-dependent learning and memory (Binder & Scharfman, 2004). A genetic variation in the KIBRA gene has been associated with episodic memory, with T-allele carriers exhibiting higher performance than C-allele homozygotes (Papassotiropoulos et al., 2006). Similarly, a variation in the BDNF gene is associated with individual differences in secretion of this protein, which is greater in Val homozygotes than in Met carriers (Egan et al., 2003). Meta-analytic evidence confirms negative effects of the KIBRA C allele and the BDNF Met allele on human episodic memory (Kambeitz et al., 2012; Milnik et al., 2012). Using the Cognition, Brain, and Aging (COBRA) sample of healthy older adults (n = 181, age = 64–68 years), we investigate the interplay between hippocampal D2DR status and genetic variations in BDNF and KIBRA and their contributions to episodic memory. Using the same data set, we have previously demonstrated that D2DR availability in caudate and hippocampus are associated with episodic memory performance (Nyberg et al., 2016).

More specifically, we expect the hippocampal D2DR–cognition link to be modulated by genes implicated in synaptic plasticity (i.e., BDNF, KIBRA). DA interacts with the same proteins involved in synaptic plasticity as KIBRA and BDNF, such as protein kinase Mzeta (PKMzeta), which is critical for maintenance of episodic memories (for a review, see Glanzman, 2013). BDNF facilitates LTP maintenance through PKMzeta (Mei, Nagappan, Ke, Sacktor, & Lu, 2011), and KIBRA regulates learning and memory through stabilization of PKMzeta (Vogt-Eisele et al., 2014). DA release in the hippocampus enhances LTP (Lisman & Grace, 2005). Importantly, however, PKMzeta is critical for the induction and maintenance of DA-induced LTP in the hippocampus (Navakkode, Sajikumar, Sacktor, & Frey, 2010). Therefore, we hypothesize that individuals with high synaptic efficacy and plasticity (BDNF Val-allele, KIBRA T-allele) may benefit most from high hippocampal D2DR status and perform particularly well on episodic memory tasks. By contrast, we do not expect caudate D2DR availability to interact with variations in KIBRA and BDNF, because of the lower expression of these proteins in the striatum (Johannsen, Duning, Pavenstädt, Kremerskothen, & Boeckers, 2008; Hofer, Pagliusi, Hohn, Leibrock, & Barde, 1990).

Handbook of Basal Ganglia Structure and Function, Second Edition

IV LTD at Glutamatergic Synapses on Striatal Projection Neurons

LTD at MSN glutamatergic synapses is the form of synaptic plasticity that is easiest to see in the dorsal striatum and, as a consequence, has been studied most thoroughly (see also chapter: Regulation of Corticostriatal Synaptic Plasticity in Physiological and Pathological Conditions). Unlike the situation at many other synapses, striatal LTD induction requires pairing of postsynaptic depolarization with moderate- to high-frequency afferent stimulation at near-physiological temperatures (Kreitzer and Malenka, 2005). Typically, for the induction to be successful, postsynaptic L-type Ca2+ channels and Gq-linked metabotropic glutamate receptor 5 (mGluR5) receptors need to be co-activated (Kreitzer and Malenka, 2005; Lovinger et al., 1993) (Fig. 9.3). Both L-type Ca2+ channels and mGluR5 receptors are appropriately positioned at glutamatergic synapses on MSN spines. What is less clear is the nature of the interaction between these two membrane proteins in the process of induction. A clue has come from recent work showing that prolonging the opening of L-type channels with an allosteric modulator eliminates the need to stimulate mGluR5 receptors (Adermark and Lovinger, 2007), pointing to shared regulation of dendritic Ca2+ concentration, elevation of which is required for LTD induction. However, there is an asymmetry, as increasing mGluR5 activation by bath application of agonists does not eliminate the need for L-type Ca2+ channel opening (Kreitzer and Malenka, 2005; Ronesi et al., 2004). This might reflect a requirement for Ca2+-induced Ca2+ release (CICR) from intracellular stores in LTD induction. In many cell types, CICR depends upon Ca2+ influx through L-type channels (Nakamura et al., 2000). Activation of mGluR5 and the production of IP3 could serve to prime these dendritic Ca2+ stores, boosting CICR evoked by activity-dependent Ca2+ entry through L-type Ca2+ channels and thus promoting LTD induction (Plotkin et al., 2013; Taufiq-Ur-Rahman et al., 2009; Wang et al., 2000).

Figure 9.3. Schematic depicting hypothesized pathways mediating LTP and LTD in striatal MSNs. Signal transduction pathways mediating the effects of activation of Golf-coupled D1R/A2aR, Gi-coupled D2R/M4R, Gq-coupled mGluR5, and TrkBR are shown. Note that high PKA activity inhibits Gq-coupled mGluR5 signaling via RGS4. This inhibition effectively prevents the mobilization of both 2-AG and AEA. Activation of CaMKII suppresses striatal DGLα activity. Elevating cAMP/PKA signaling promotes NMDAR-mediated Ca2+ permeability. Black arrowheads depict positive regulation and black circles depict negative regulation. 2-AG, 2-arachidonoylglycerol; A2aR, adenosine receptor 2a; AC5, adenylyl cyclase 5; AEA, anandamide; CaMKII, Ca2+/calmodulin-dependent protein kinase II; cAMP, 3′-5′-cyclic adenosine monophosphate; DAG, 1,2-diacylglycerol; DGLα, diacylglycerol lipase alpha; D1R, dopamine D1 receptor; D2R, dopamine D2 receptor; MEK, mitogen-activated protein kinase kinase; PI3K, phosphatidylinositide 3-kinases; PIP2, phosphatidylinositol 4,5-bisphosphate; PLCβ, phospholipase Cβ; PLD, phospholipase D; Src, Src-family kinases.

Although the induction of LTD is postsynaptic, its expression is presynaptic. The activity-induced elevation in dendritic Ca2+ concentration triggers the production of an endocannabinoid (eCB) that diffuses to presynaptically located CB1 receptors (CB1Rs) (Fig. 9.1) (see chapter: Endocannabinoid Signaling in the Striatum). The combination of presynaptic CB1R activation, spiking, and altered gene expression in the presynaptic cell leads to a long-lasting reduction in glutamate release (Chevaleyre et al., 2006; Lovinger, 2008). Having both pre- and postsynaptic induction criteria confers synaptic specificity on LTD expression (Singla et al., 2007). The molecular identity of the metabolic pathway leading to eCB production in MSNs is still uncertain. This is important for a variety of reasons, not the least of which is knowledge of the critical signaling event responsible for triggering plasticity. There are two abundant striatal eCBs: anandamide and 2-arachidonoylglycerol (2-AG). Although earlier work has underscored the neural regulation of anandamide synthesis in the striatum (Giuffrida et al., 1999), collateral support for it as the obligate signaling molecule has been scant (Ade and Lovinger, 2007). Recent work has supported the notion that anandamide is the eCB responsible for LTD induced by high-frequency electrical stimulation (HFS) and 2-AG is the eCB for LTD induced by moderate-frequency stimulation (Lerner and Kreitzer, 2012).

In iMSNs, the evidence is clear that D2R signaling promotes the induction of LTD at corticostriatal synapses (Kreitzer and Malenka, 2007; Lerner and Kreitzer, 2012; Shen et al., 2008; Wang et al., 2006). The issue is whether eCB-dependent LTD is inducible in the other major population of MSNs that do not express D2Rs: the D1R-dominated dMSNs. This eCB-dependent form of LTD (eCB-LTD) is seen in the majority of MSNs (Bagetta et al., 2011; Wang et al., 2006), not just the half predicted by the restricted expression of D2Rs in iMSNs. How then does LTD get induced in dMSNs? In agreement with earlier work (Wang et al., 2006), Tozzi et al. (2011) have shown that the D2R dependence of LTD induced by HFS is indirect, reflecting the influence of cholinergic interneurons (ChIs). ChIs express D2Rs that slow their discharge rate and diminish acetylcholine (ACh) release (Maurice et al., 2004). Reduced ACh release and M1 muscarinic receptor (M1R) signaling in dMSNs appears to be permissive for LTD induction. Although there are suggestions (Tozzi et al., 2011; Wang et al., 2006), precisely why remains unclear. One complication in these experiments (as in the vast majority of this literature) is the reliance upon HFS, in which there undoubtedly is current spread to the striatum, leading not only to the activation of striatal DAergic and cholinergic fibers but also of an undetermined array of other cellular elements. Another is the lack of receptor subtype-specific muscarinic antagonists in these studies.

In the last few years, several important studies have helped clarify the rules by which DA influences synaptic strength. In iMSNs, the mechanisms are clearer. In a beautiful piece of detective work, Lerner and Kreitzer (2012) showed that, by inhibiting AC5, D2R signaling diminished PKA stimulation of regulator of G-protein signaling 4 (RGS4), disinhibiting mGluR5-mediated eCB production and LTD (Fig. 9.3). This mechanism also explained, at least in part, the ability of A2a adenosine receptors (A2aRs), which are positively coupled to AC5 in iMSNs, to blunt LTD induction, as suggested by spike-timing-dependent plasticity (STDP) work (Shen et al., 2008).

Another modulator of eCB production is Ca2+/calmodulin-dependent kinase II (CaMKII). Colbran's group (Shonesy et al., 2013) has revealed that CaMKII, which is implicated in most forms of LTP, suppresses striatal eCB production by DAG lipase-α. Ca2+ entry through NMDARs leads to CaMKII activation. Therefore, PKA regulation of Ca2+ entry through NMDARs (Murphy et al., 2014) could be an efficient means of turning eCB production on and off (Fig. 9.3). Certainly, work by Higley and Sabatini (2010) showing D2R suppression of Ca2+ entry through NMDARs in iMSNs is consistent with this sort of mechanism.

It is unclear whether there is a receptor in dMSNs that is homologous to the D2R and promotes LTD. One candidate for this role is the Gi-coupled M4 muscarinic receptor (M4R) (Fig. 9.1). The M4R is the most abundant striatal muscarinic receptor, and it is preferentially expressed in dMSNs, where it is clustered near axospinous glutamatergic synapses (Bernard et al., 1992; Hersch et al., 1994; Jeon et al., 2010). Giant ChIs controlling striatal ACh signaling have dense terminal fields that overlap those of DA neurons, and activation of M4Rs suppresses D1R signaling through AC5 (Jeon et al., 2010; Sánchez et al., 2009). In spite of these signaling linkages, the role of M4Rs in Hebbian forms of synaptic plasticity in dMSNs has not been determined.

Are there forms of LTD in the striatum that do not depend upon eCBs? Both opioids and serotonin induce a presynaptic form of LTD at corticostriatal synapses that is eCB-independent (Atwood et al., 2014; Mathur et al., 2011) (Fig. 9.1).

The neuromodulator mixture created by nonspecific electrical stimulation could also be a factor in slice studies implicating nitric oxide (NO) signaling in LTD induction (see chapter: Nitric Oxide Signaling in the Striatum). First, it must be acknowledged that this form of LTD might not be eCB-dependent, in spite of the fact that its induction occludes conventional HFS-induced LTD (Calabresi et al., 1999b). Because activation of striatal interneurons containing nitric oxide synthase (NOS) depends upon NMDARs and D1/D5 DA receptors (Ondracek et al., 2008), this form of LTD should be sensitive to antagonism of either. Although a form of NMDAR-dependent LTD has been reported in the ventral striatum, this is not the case in the dorsal striatum. Moreover, eCB-dependent LTD is not commonly found to be dependent upon D1/D5Rs in the dorsal striatum.

The lack of specificity in activating inputs to MSNs during the induction of plasticity also raises questions about the type of glutamatergic synapse being affected by eCB-dependent LTD. Studies using nominal white matter or cortical stimulation in a coronal brain slice typically assume that the glutamatergic fibers being stimulated are of cortical origin, but very few of these fibers are left intact in this preparation (Kawaguchi et al., 1989). The thalamic glutamatergic innervation of MSNs is similar in magnitude to that of the cerebral cortex, perhaps constituting as much as 40% of the total glutamatergic input to MSNs, terminating on both shafts and spines (Wilson, 1992). As a consequence, it is not really known whether eCB-dependent LTD is present at corticostriatal synapses, thalamostriatal synapses, or both. The localization of CB1Rs on corticostriatal terminals, but not thalamostriatal terminals (Uchigashima et al., 2007), is consistent with the hypothesis that LTD is a corticostriatal phenomenon. A more recent study using optogenetic approaches to dissect glutamatergic inputs from various cortical and thalamic regions explicitly demonstrated that eCB-dependent LTD is only expressed at corticostriatal synapses (Wu et al., 2015) (see chapter: Endocannabinoid Signaling in the Striatum).


Knowing which bacterial species are depleted after chemotherapy, how long these effects last, and which physiological mechanisms may drive psychological and cognitive issues among survivors is a crucial step forward in gut microbiota and young adult cancer research. Understanding the bio-behavioural mechanisms that drive psychological and cognitive dysfunction among survivors will allow tailored interventions to be developed. Future studies could aim to aid cancer patients and survivors by co-administering specific health-promoting bacteria (i.e., probiotics), with the potential to prevent or reverse the physical and mental health issues that many young survivors face.


Animals and surgical preparations. All experiments were approved by the University of British Columbia Animal Care Committee and were conducted in accordance with the standards of the Canadian Council on Animal Care. Forty-three male Long–Evans rats weighing 250–350 gm were anesthetized with urethane (1.5 gm/kg, i.p.) and mounted in a stereotaxic frame. Body temperature was maintained at 37°C with a temperature-controlled heating pad. Rats were implanted with concentric bipolar electrical stimulating electrodes in the BLA (flat skull; anteroposterior (AP), −3.2 mm; mediolateral (ML), +5.0 mm; dorsoventral (DV), −7.0 mm). In some rats, stearate-modified graphite paste electrochemical recording electrodes were implanted stereotaxically into the NAc, ipsilateral to the stimulating electrode (AP, +1.5; ML, −1.0 at a 15° angle; DV, −6.5 mm). In these preparations, an Ag–AgCl reference and stainless steel auxiliary electrode combination was placed in contact with cortical tissue 4 mm posterior to bregma.

Extracellular recordings. Extracellular single-unit activity was recorded with filament-filled glass microelectrodes (outer diameter, 1.5 mm; World Precision Instruments, Sarasota, FL) pulled on a programmable horizontal electrode puller (P-87; Sutter Instruments, Novato, CA). The microelectrodes were filled with fast green (Sigma, St. Louis, MO) mixed in 0.5 M sodium acetate and had an impedance of 5–10 MΩ. The electrode was targeted toward the medial shell of the NAc (AP, +1.7–0.7 mm; ML, 0.9–1.3 mm; DV, 5.0–7.0 mm) using a hydraulic microdrive (MHW-40; Narishige, Tokyo, Japan). Single-unit activity was amplified by an XCell-3 amplifier (Frederick Haer & Co. Inc., Brunswick, ME) and filtered (bandpass, 500–5000 Hz); individual action potentials from single units were isolated from background noise using a window discriminator and sampled on-line by a computer connected to a Data Translation DT 2821 A/D board interface. Sampling of the spike signals was performed at 10 kHz by a PC using IPEE software (Dr. Conrad Yim, CY Electronics, Toronto, Canada). Peristimulus time histograms (see Figs. 2, 4) were compiled from 100 stimulus sweeps and plotted with a bin width of 1 msec. For rats that had electrochemical recording electrodes implanted in the NAc, the glass microelectrode was driven vertically in the same anterior plane as the electrochemical recording electrode (AP, 1.5 mm at 0° angle) and was 0.9–1.3 mm lateral from midline and 5.0–7.0 mm ventral from cortex.
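The peristimulus time histogram (PSTH) procedure described above (100 stimulus sweeps, 1 msec bins) can be sketched as follows. This is an illustrative sketch, not the authors' analysis code; the 50 ms analysis window is an assumption made here for the example.

```python
# Minimal sketch of compiling a PSTH as described above: spike times from
# repeated stimulus sweeps are binned at 1 ms relative to stimulus onset.
# The 50 ms window is an illustrative assumption, not a value from the study.

def compile_psth(sweeps, bin_width_ms=1.0, window_ms=50.0):
    """sweeps: list of lists of spike times (ms, relative to stimulus onset).
    Returns spike counts per bin across all sweeps."""
    n_bins = int(window_ms / bin_width_ms)
    counts = [0] * n_bins
    for spike_times in sweeps:
        for t in spike_times:
            b = int(t / bin_width_ms)
            if 0 <= b < n_bins:
                counts[b] += 1
    return counts

# Example: two sweeps, each with an evoked spike ~10 ms after the stimulus.
psth = compile_psth([[10.2], [10.7, 30.1]])
```

In the study, 100 such sweeps would be accumulated per histogram; the example uses two sweeps only to keep the illustration short.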

Electrochemical recordings. Repetitive chronoamperometric measurements of oxidation current, selective for DA (Blaha and Phillips, 1996, and references therein), were made using an electrometer (Echempro, Vancouver, British Columbia, Canada) by applying a potential pulse from −0.15 V to +0.25 V versus Ag–AgCl to the recording electrode for 1 sec at 30 sec intervals and monitoring the oxidation current at the end of each 1 sec pulse. The timing of the potential pulse was set so that it would occur >300 msec after electrical stimulation of the BLA, ensuring that the artifact produced by the pulse would not overlap with spikes evoked by BLA stimulation. Prestimulation baseline currents were normalized to zero current values, with stimulated changes in the baseline signal presented as absolute changes in DA oxidation current (Floresco et al., 1998).
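The sampling schedule above (a 1 sec potential pulse every 30 sec, with each pulse beginning >300 msec after BLA stimulation) can be sketched as a simple timing calculation. The 0.5 s post-stimulation delay used here is an illustrative assumption; the study only specifies that it exceeded 300 msec.

```python
# Sketch of the chronoamperometric sampling schedule described above:
# 1 s potential pulses repeated at 30 s intervals, with the first pulse
# onset delayed after BLA stimulation so the pulse artifact does not
# overlap evoked spikes. The 0.5 s delay is an illustrative assumption.

def pulse_onsets(stim_time_s, n_pulses, interval_s=30.0, delay_s=0.5):
    """Return onset times (s) of successive 1 s potential pulses,
    the first beginning delay_s after the stimulation time."""
    return [stim_time_s + delay_s + i * interval_s for i in range(n_pulses)]

onsets = pulse_onsets(0.0, 3)
```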

Stimulation protocol. During an initial cell-searching procedure, stimuli were delivered to the BLA at 1 Hz while the microelectrode was advanced into the NAc. Cathodal monophasic square current pulses (0.2 msec duration) were delivered to the BLA through a concentric bipolar electrode (NE-100; Rhodes Medical Co.) connected to an Iso-Flex optically isolated stimulator (AMPI, Jerusalem, Israel) that received programmed pulses from a Master-8 pulse generator (AMPI). After isolating an NAc cell that responded to BLA stimulation, we adjusted the stimulation currents to approximately half-maximal stimulation intensity [i.e., ∼50 action potentials were evoked in response to a train of 100 BLA stimulations delivered at 2 Hz (range, 100–1800 μA; mean current, 1075 ± 75 μA; median current, 1200 μA)]. There were no differences in the mean level of stimulation currents used between treatment groups (F(5,49) = 1.53; p > 0.1).

Evoked spike probabilities were calculated by dividing the number of action potentials observed by the number of stimuli administered. Changes in spike probabilities were used as an index of the effect of tetanic stimulation on the magnitude of change in NAc neuronal activity produced by subsequent BLA stimulation. Trains of single-pulse stimuli (100 pulses at 2 Hz, lasting 50 sec) were delivered to the BLA every 3–5 min to sample the evoked spike probability. Once stable levels of evoked spiking activity were obtained (<15% variability in spike probability over 10–15 min), one train of tetanic stimulation was administered to the BLA (200 pulses delivered at 20 Hz for 10 sec, with current adjusted to near-maximal intensity so that each stimulus evoked a spike). This tetanus parameter produces robust increases in mesoaccumbens DA efflux (Floresco et al., 1998) and is comparable with the activity of BLA neurons recorded from freely moving rats that have been presented with primary or conditioned rewards (e.g., 10–30 Hz) (Ono et al., 1995; Pratt and Mizumori, 1998). After tetanus, trains of 2 Hz stimulation (100 pulses) were administered at the same submaximal current intensity used before tetanus, at 2 min after tetanus and then at 5 min intervals for another 25 min. No more than two tetani were administered per animal, and the delivery of a second tetanus was spaced by an interval of at least 3 hr.
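The evoked-spike-probability measure and the tetanus parameters above reduce to simple arithmetic, sketched here with illustrative numbers; only the 50-of-100 half-maximal criterion and the 200-pulse/20 Hz tetanus come from the text.

```python
# Sketch of the evoked-spike-probability measure described above:
# probability = number of evoked action potentials / number of stimuli.

def spike_probability(n_spikes, n_stimuli):
    return n_spikes / n_stimuli

# Half-maximal responding: ~50 spikes per 100-pulse train at 2 Hz.
p_baseline = spike_probability(50, 100)

# Tetanus duration check: 200 pulses at 20 Hz last 10 s,
# matching the stated tetanus parameters.
tetanus_duration_s = 200 / 20.0
```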

Pharmacological manipulations. Drugs obtained from Research Biochemicals (Natick, MA) were administered via intravenous jugular catheters. No electrochemical recordings were taken during these pharmacological experiments. The D1 receptor antagonist SCH23390 (0.5 mg/kg) and the NMDA receptor antagonist 3-(2-carboxypiperazin-4-yl)-propyl-1-phosphonic acid (CPP) (1.0 mg/kg) were dissolved in physiological saline. The D2 receptor antagonist sulpiride (5.0 mg/kg) was dissolved in a drop of NaOH and PBS. No more than two drug injections were given per animal, separated by at least 3.5 hr. Based on pharmacokinetic studies, this would have been a sufficient period of time to allow for >90% clearance of these drugs from plasma and brain (Segura et al., 1976; Patel et al., 1990; Hietala et al., 1992). There were no significant differences between cells in a particular drug treatment group regarding the effects between the first and second drug injections, or between the first and second tetanus for control cells (all Fs < 1.6, NS). The doses of the DA antagonists were chosen from previous studies (White and Wang, 1986; Floresco et al., 2001).
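The ">90% clearance" criterion above follows from first-order elimination kinetics: the fraction remaining after time t is 0.5^(t/half-life), so 90% clearance requires more than log2(10) ≈ 3.3 half-lives. A sketch, using a hypothetical 1 h half-life purely for illustration (the actual half-lives are in the cited pharmacokinetic studies):

```python
# Sketch of the clearance reasoning above. With first-order elimination,
# fraction remaining = 0.5 ** (t / half_life), so >90% clearance requires
# t > log2(10) half-lives. The 1 h half-life is an illustrative assumption,
# not a value from the study.

import math

def fraction_remaining(t_h, half_life_h):
    return 0.5 ** (t_h / half_life_h)

half_lives_for_90pct = math.log2(10)      # ~3.32 half-lives
remaining = fraction_remaining(3.5, 1.0)  # <0.1 for an assumed 1 h half-life
```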

The D1 antagonist SCH23390 was iontophoretically applied onto some NAc cells before tetanic stimulation of the BLA. In these experiments, three-barrel glass micropipettes were pulled on a vertical electrode puller (Narishige), and the tips were broken back to an average size of 5 μm. One barrel was filled with fast green in 0.5 M sodium acetate, the second barrel contained a 5 mM solution of SCH23390, pH 4.0, and the third barrel was filled with 1 M NaCl solution for automatic current balancing. SCH23390 was ejected using a Dagan 6400 iontophoretic current generator with an ejection current of +40 nA. A retaining current of −10 nA was used between ejection periods.

Histology. After completion of each experiment, an iron deposit was made at the BLA stimulation site by passing DC current (100 μA for 10 sec) through the stimulating electrode. A dye deposit was made at the NAc recording site by ejecting fast green with a 20 μA anodal current for 20 min. The brain was removed and placed in 10% buffered formalin containing 0.1% potassium ferricyanide. After fixation, 50 μm sections were cut on a freezing microtome and stained for Nissl substance with cresyl violet. A Prussian blue spot resulting from a redox reaction of the ferricyanide marked the stimulation site. Both the glass microelectrodes and the electrochemical recording electrodes were verified to be located in the medial NAc or at the border between the NAc shell and core (Fig. 1A). The stimulation electrodes were located primarily in the caudal BLA (Fig. 1B).

Histology. A, Schematic of coronal sections of the rat brain (Paxinos and Watson, 1997) showing representative placements of electrochemical electrodes (squares) and locations of extracellular single-unit recording electrodes (circles) recorded from control rats and rats whose data are presented in Figure 3A–C. Numbers correspond to millimeters from bregma. B, Photograph of a representative placement of a stimulating electrode in the BLA. Arrow highlights the location of stimulating electrode placements. Opt, optic tract.

Data analysis. In the electrochemical studies, the data were analyzed using a one-way, repeated-measures ANOVA with time as a within-subjects factor. Only samples taken at tetanus and at time points at which the BLA was stimulated at 2 Hz were used in this analysis, with multiple comparisons made versus the sample taken 2 min before tetanus. For the extracellular recording data, pretetanus baseline spike probabilities were normalized to the spike probability recorded 2 min before tetanus, so that the change in spike probability at this time point would be 0. These data were then converted to percentage change in spike probability, using the formula:
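The formula itself is not reproduced here; the conventional normalization consistent with the description (values referenced to the spike probability recorded 2 min before tetanus, so that the change at that point equals 0) would be percent change = 100 × (P_t − P_baseline) / P_baseline. This is a presumed reconstruction, sketched below.

```python
# Presumed form of the percentage-change computation described above
# (a reconstruction, not the authors' stated formula): spike probabilities
# are referenced to the baseline value recorded 2 min before tetanus,
# so the baseline time point maps to 0% change.

def percent_change(p_t, p_baseline):
    return 100.0 * (p_t - p_baseline) / p_baseline

baseline_check = percent_change(0.5, 0.5)   # baseline point maps to 0
potentiation = percent_change(0.75, 0.5)    # e.g., 0.5 -> 0.75 is +50%
```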

Statistical comparisons were made with the spike probability recorded 7 min before tetanus. Multiple comparisons were made using Dunnett's test for repeated measures for both the electrochemical and electrophysiological comparisons.

Metaplasticity: tuning synapses and networks for plasticity

Synaptic plasticity, such as long-term potentiation (LTP) and long-term depression (LTD), must be tightly regulated to prevent saturation, which would impair learning. Metaplasticity mechanisms have evolved to help implement this essential computational constraint. Metaplasticity refers to neural changes that are induced by activity at one point in time and that persist and affect subsequently induced LTP or LTD.
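The threshold-regulation idea above is often formalized as a "sliding modification threshold" in the spirit of the BCM model, in which the crossover point between LTD and LTP tracks recent postsynaptic activity. The toy sketch below illustrates that idea only; the update rule and parameters are illustrative assumptions, not a mechanism described here.

```python
# Toy illustration of a sliding plasticity threshold (BCM-style):
# the LTP/LTD crossover theta relaxes toward the square of recent activity,
# so sustained high activity raises the bar for further LTP and favors LTD.
# The update rule and tau are illustrative assumptions.

def update_theta(theta, activity, tau=10.0):
    return theta + (activity ** 2 - theta) / tau

def plasticity_sign(activity, theta):
    # Above threshold -> potentiation (+1); below -> depression (-1).
    return 1 if activity > theta else -1

theta = 1.0
for _ in range(50):                 # sustained high activity...
    theta = update_theta(theta, 2.0)
# ...slides the threshold upward toward activity**2 = 4, so the same
# activity level that once produced LTP now falls below threshold.
sign_after = plasticity_sign(2.0, theta)
```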

The activation of NMDA (N-methyl-D-aspartate) receptors can cause a persistent reduction in subsequent LTP induction and an enhancement of LTD. These effects are synapse specific, last tens of minutes, and contribute to LTD induction during conventional low-frequency stimulation protocols. The mechanisms of this regulation are poorly understood, but activation of protein phosphatases and alteration of calcium/calmodulin-dependent protein kinase II function are clear candidates.

Prior activation of group 1 metabotropic glutamate receptors (group 1 mGluRs) facilitates both the induction and the persistence of LTP in the hippocampus. The facilitated induction probably involves depression of afterhyperpolarizations (AHPs) and trafficking of AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid) receptors to the extrasynaptic membrane, whereas the facilitated persistence entails de novo local protein synthesis.

Heterosynaptic metaplasticity that crosses between synapses can also occur. Stimulation of protein synthesis by activity in one set of synapses can facilitate LTP persistence through a synaptic tag-and-capture process operating at a second set of weakly activated synapses. Heterosynaptic metaplasticity can also be mediated by altered postsynaptic ion-channel function and retrograde endocannabinoid signalling that reduces transmission at nearby inhibitory GABAergic terminals.

Behaviourally, stress can inhibit LTP and facilitate LTD through NMDA-receptor-dependent mechanisms. Sensory stimulation or deprivation alters plasticity thresholds in cortical regions, especially during developmental periods. Reductions in the slow AHP in piriform and hippocampal neurons support the learning of behavioural tasks, suggesting a metaplastic role for this mechanism in controlling learning-related plasticity thresholds.

The ability to harness metaplasticity mechanisms might contribute to strategies for treating adult amblyopia or the development of therapies aimed at improving cognition in individuals with neurological disorders. Metaplasticity paradigms also share commonalities with ischaemic preconditioning, so its mechanisms might present targets for preventing stroke in at-risk individuals.

In conclusion, metaplasticity is a major regulator of plasticity thresholds and therefore has a key role in keeping synapses working in a range that permits the full expression of plasticity. In turn, this helps to keep networks operating at an appropriate level for information processing and storage. Considerable research is still needed to clarify the mechanisms that underpin different forms of metaplasticity and their contribution to network dynamics and behavioural learning.


Department of Psychology, Davie Hall CB no. 3270, University of North Carolina at Chapel Hill, Chapel Hill, 27599, North Carolina, USA

Jeremy J Day, Mitchell F Roitman & Regina M Carelli

Department of Chemistry, Venable and Kenan Laboratories CB no. 3290, University of North Carolina at Chapel Hill, Chapel Hill, 27599, North Carolina, USA

Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, 27599, North Carolina, USA


Dopamine D1/D5, but not β-adrenergic, receptor activation reverses synaptically induced LTD

The initial experiments were designed to investigate the effect of D1/D5R activation on LTD induced by a previous LFS in CA1. Control LTD of moderate magnitude was induced by a single bout of LFS consisting of 1200 pulses at 1 mV and applied at 3 Hz (−16.5 ± 4.7%, measured 2 h after LFS; n = 7) (Fig. 1A). Application of the dopamine D1/D5R agonist SKF 38393 (100 μM, 20 min) immediately after the LFS did not alter the expression of the early phase of the depression, but, after 15 min, the depression began to decline and had completely reversed back to baseline within 40 min. The EPSP slope then remained at baseline levels for the remaining 80 min of the recording period (1.4 ± 4.4%; n = 8) (Fig. 1B). SKF 38393, applied by itself without a preceding LFS, had no immediate effect on baseline recording levels, although a gradual decline over the duration of the experiment was observed (−11.6 ± 2.5%; n = 4) (Fig. 1C), as reported previously (Mockett et al., 2004). To test whether the reversal of LTD is a general phenomenon of adenylate cyclase activation through G-protein-linked receptors, we investigated the effect of β-adrenergic receptor activation on the reversal of LFS-induced LTD. Previously, we determined that activation of β-adrenergic receptors by 0.5 μM isoproterenol facilitated LTP induction in CA1 (Cohen et al., 1999) and that 1 μM caused a twofold increase in cAMP production, similar to that induced by SKF 38393 (S. C. Webb, W. C. Abraham, and W. P. Tate, unpublished observations). We therefore chose 1 μM as an effective concentration for the present experiment. Isoproterenol, applied immediately after the LFS, did not significantly reverse the LTD (−12.2 ± 3.4%; n = 6) (Fig. 1D). This suggests that the reversal of synaptically induced LTD is specific to D1/D5R activation.

Selective activation of dopamine D1/D5Rs reverses LTD. A, A single bout of LFS (1200 pulses, 3 Hz; n = 7) induced LTD lasting at least 120 min. B, Application of the D1/D5R agonist SKF 38393 (100 μM, 20 min; n = 8) immediately after the LFS completely reversed the LTD within 40 min. C, SKF 38393 applied in the absence of LFS did not significantly change baseline transmission levels (n = 4). Pauses in recording correspond to periods of LFS delivery in this and other experiments. D, Stimulation of β-adrenergic receptors using isoproterenol (1 μM, 20 min; n = 6) immediately after the LFS did not reverse the LTD. Sample fEPSP recordings are shown to the right of the figure. Each numbered recording is the average of 10 traces taken at the time indicated by the corresponding number on the adjacent plot. Calibration: 0.5 mV, 5 ms.

To examine further the involvement of the adenylate cyclase/cAMP cascade in LTD reversal, we applied the general adenylate cyclase activator forskolin (10 μM, 20 min) to induce a global rise in intracellular cAMP levels. This concentration causes a more than fivefold increase in cAMP production in CA1 (Webb, Abraham, and Tate, unpublished observations). When applied immediately after the LFS, forskolin initiated a rapid reversal of the LTD, which was complete within the period of forskolin perfusion and persisted after forskolin washout (1.3 ± 3.2%; n = 5) (Fig. 2A). However, when application of forskolin was delayed for 60 min after the LFS, the recovery back to baseline, although rapid, was only transient, and responses returned to levels indistinguishable from LFS controls by the end of the experiment (−15.8 ± 3.6%; n = 9) (Fig. 2B). To determine whether the D1/D5R-mediated reversal of LTD was also time dependent, we repeated the protocol, replacing forskolin with SKF 38393. LFS again produced a significant LTD 60 min after LFS (−18.2 ± 4.1%; n = 4) that was not reversed by SKF 38393 (−21.8 ± 10.8%, measured 40 min after drug washout) (Fig. 2C). These findings demonstrate a time-dependent consolidation of LTD that renders it resistant to reversal by D1/D5R activation and cAMP accumulation.

Time-dependent reversal of LTD by forskolin. A, Forskolin (10 μm, 20 min; n = 5), applied immediately after the delivery of an LFS, rapidly reversed the LTD. B, Forskolin, applied 60 min after the LFS, caused only a transient reversal of the depression, which then returned to control LTD levels (n = 9). C, SKF 38393 (100 μm, 20 min; n = 4), applied 60 min after the LFS, did not reverse the LFS-induced LTD. Sample waveforms are as in Figure 1.

Having established that D1/D5R and adenylate cyclase activation can completely reverse a moderate LTD, we next sought to determine whether a similar reversal would occur after the induction of a strong LTD. A stronger, more persistent LTD was induced by LFS consisting of 1200 pulses (1 mV, 3 Hz) delivered twice with a 5 min interval (−25.2 ± 2.8%, measured 120 min after LFS; n = 7) (Fig. 3A). This LTD was confirmed to be NMDAR dependent, because administration of d-APV (50 μm) during the LFS protocol blocked the LTD (−5.9 ± 3.9%, 60 min after LFS; n = 5) (Fig. 3C). A second period of LFS administration after APV washout successfully established LTD (−33.2 ± 2.7%; p < 0.001) (Fig. 3C). Application of SKF 38393 (100 μm, 20 min) immediately after two bouts of 1200 pulses appeared to cause a partial reversal after 120 min, but this did not reach statistical significance (−14.3 ± 4.1%; n = 10; p = 0.062) (Fig. 3B). Delaying the application of the SKF 38393 by 30 min was also ineffective in reversing the LTD (−19.8 ± 5.5%; n = 5; data not shown). In a final attempt to induce the reversal of strong LTD, we also examined the effects of isoproterenol (1 μm, 20 min; n = 5) and forskolin (10 μm, 20 min; n = 5) applied immediately after the second LFS. LTD levels measured 60 min after the second bout of LFS in SKF 38393, isoproterenol, and forskolin groups were not significantly different from the LTD observed in the control group (Fig. 3D).

Strong LTD induced by two bouts of LFS is resistant to reversal. A, LTD induced by two bouts of LFS (1200 pulses, 3 Hz; n = 7), each separated by 5 min. B, D1/D5R activation by SKF 38393 (100 μm, 20 min; n = 10), applied immediately after the second bout of LFS, failed to significantly reverse the LTD after 120 min (p = 0.06). C, LFS-induced LTD is NMDAR dependent. NMDAR blockade by APV (50 μm; n = 5) during LFS blocked LTD induction. After APV washout, LTD could again be induced by LFS. D, Histogram comparing LTD recorded in control slices 60 min after two bouts of LFS with that recorded in slices treated with APV (50 μm), SKF 38393 (SKF; 100 μm, 20 min), isoproterenol (ISO; 1 μm, 20 min; n = 5), or forskolin (FSK; 10 μm, 20 min; n = 5). No significant reversal of LTD was observed in any group. Waveforms are as in Figure 1.

One consideration regarding the resistance of strong LTD to reversal is that the 5 min delay between bouts of LFS may have provided sufficient time for intracellular events to consolidate the LTD before the D1/D5R activation. To test this possibility, we applied the strong LTD-inducing protocol in one continuous uninterrupted bout (2400 pulses, 3 Hz, 1 mV). This protocol produced robust LTD, measured 60 min after LFS, that was very similar to that observed at the same time point in the previous experiment (−23.6 ± 2.3%, n = 9, and −21.7 ± 3.3%, n = 11, respectively) (Fig. 4A). When SKF 38393 (100 μm, 20 min) was applied immediately after the uninterrupted LFS, a partial, but significant, reversal of the LTD was observed (−12.7 ± 2.1%; n = 7; p < 0.01 relative to control slices) (Fig. 4B). Together, these experiments demonstrate that, although moderate LTD is completely reversible by D1/D5R activation and global intracellular adenylate cyclase activation, a stronger LTD-inducing protocol generates a rapidly consolidating LTD that is only partially susceptible to cAMP-linked mechanisms of de-depression within 5 min and is irreversible by these mechanisms by 60 min.
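The comparison above (SKF 38393-treated vs control slices, p < 0.01) is a two-tailed independent two-sample comparison. As a sketch of the underlying test statistic, assuming hypothetical per-slice LTD magnitudes and a pooled-variance Student's t (the actual per-slice data and software used in the study are not given here):

```python
import math
import statistics

def students_t(group_a, group_b):
    """Two-sample Student's t statistic with pooled variance, the kind of
    two-tailed independent comparison reported in the Results."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / se

# Hypothetical per-slice LTD magnitudes (% change from baseline):
control_ltd = [-25.0, -22.0, -20.0, -24.0, -27.0]  # uninterrupted LFS alone
skf_ltd     = [-13.0, -11.0, -15.0, -12.0, -14.0]  # LFS then SKF 38393
t = students_t(skf_ltd, control_ltd)  # positive t: less depression with SKF
```

The t statistic would then be compared against the t distribution with na + nb − 2 degrees of freedom to obtain the two-tailed p value.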

D1/D5R activation partially reverses strong LTD induced by uninterrupted LFS. A, LTD induced by one continuous bout of LFS (2400 pulses, 3 Hz n = 11). B, D1/D5R activation by SKF 38393 (100 μ m , 20 min n = 7) immediately after continuous LFS partially reversed the LTD (p < 0.05). Waveforms are as in Figure 1.

Pharmacological induction of LTD by NMDA is also reversed by D1/D5R activation

To understand the generality of the reversal effect, we asked whether pharmacologically induced LTD was also sensitive to D1/D5R-mediated reversal. Although several protocols for inducing LTD by bath-applied NMDA have been published (Lee et al., 1998; Kamal et al., 1999; van Dam et al., 2002; Li et al., 2004), these have not reliably induced LTD in our hands. Consequently, we developed a priming paradigm that consistently induced LTD in CA1 of adult hippocampal slices (Fig. 5). NMDA (20 μm, 2 min), when applied once, produced no long-lasting effect on the fEPSPs (−4.6 ± 1.1%; n = 5). However, when a second identical administration of NMDA was made 45 min later, an LTD of −18.3 ± 6.3% was generated that persisted for at least 100 min after the second NMDA application (Fig. 5A). This LTD was prevented by the application of SKF 38393 (100 μm, 40 min) when applied immediately after the second NMDA application (1.8 ± 3.9%; n = 6; p < 0.05) (Fig. 5B). When the application of SKF 38393 was delayed by 60 min, however, no reversal occurred (−21.8 ± 7.7%, measured 40 min after drug washout; n = 4) (Fig. 5C). In contrast to the LTD-reversing effect of early D1/D5R activation, β-adrenergic receptor activation by isoproterenol (1 μm, 40 min) failed to significantly modulate LTD (−13.7 ± 5.7%; n = 3) (Fig. 5D). Together with the findings from LFS-induced LTD, these data indicate that D1/D5R, but not β-adrenergic receptor, activation reverses the NMDAR-induced mechanisms that lead to the induction and maintenance of LTD.

Pharmacological induction of LTD by NMDA is reversed by D1/D5R activation. A, NMDA (20 μm, 2 min; n = 5) induced a response that was typified by a near-complete depression of synaptic transmission followed by a rapid recovery to a transient potentiated state. When applied twice at 45 min intervals, the NMDA treatment led to a stable LTD after the second application. B, Application of SKF 38393 (100 μm, 40 min; n = 6) immediately after the second NMDA application completely prevented the development of LTD from the potentiated state, with synaptic transmission returning only to baseline levels. C, Delayed application of SKF 38393 (100 μm, 40 min; n = 4) 60 min after the second NMDA application did not reverse the LTD. D, Isoproterenol (1 μm, 40 min; n = 3) delivered immediately after the second NMDA application did not prevent the development of LTD or return it to baseline transmission levels. Waveforms are as in Figure 1.

LTD induced by group 1 metabotropic glutamate receptor activation is not reversed by D1/D5R activation

To examine the specificity of D1/D5R-mediated LTD reversal, we tested whether a non-NMDAR-dependent form of LTD is also reversible by a D1/D5R agonist. Activation of G-protein-linked group 1 metabotropic glutamate receptors (mGluRs) by the agonist DHPG rapidly induces an LTD that is reported to be mechanistically distinct from NMDAR-dependent LTD (Oliet et al., 1997; Harris et al., 2004). To be comparable with the NMDA-dependent LTD experiments, we selected a DHPG paradigm that induced moderate levels of LTD, similar to that induced by a single bout of LFS or by the NMDA paradigm, which our previous experiments demonstrated could be reversed by SKF 38393 (Figs. 1, 5). DHPG (50 μm, 10 min) induced a moderate LTD that persisted for at least 140 min after DHPG washout (−16.8 ± 2.0%; n = 4) (Fig. 6A). SKF 38393 (100 μm, 20 min), applied immediately after DHPG, failed to reverse this LTD (−17.1 ± 4.5%; n = 6) (Fig. 6B). This finding indicates that LTD reversal by D1/D5R activation is not a feature of all forms of LTD but is specific to NMDAR-mediated LTD.

mGluR-mediated LTD is not reversed by D1/D5R activation. A, Application of the mGluR group 1 agonist DHPG (50 μm, 10 min; n = 4) produced a small, slowly decaying LTD similar in magnitude to that produced in previous experiments using a mild LFS protocol. B, DHPG-induced LTD was not reversed by SKF 38393 (100 μm, 20 min; n = 6) applied immediately after the DHPG perfusion (−17.1 ± 4.5%). Waveforms are as in Figure 1.

NMDA-induced LTD is expressed through dephosphorylation of synaptic GluR1 serine 845

One of the early events identified in the development of NMDAR-dependent LTD is the dephosphorylation of the PKA-targeted serine 845 residue of the AMPA receptor GluR1 subunit (Kameyama et al., 1998; Lee et al., 1998, 2000). We therefore investigated whether the NMDAR-dependent LTD in our hands showed a similar dephosphorylation correlate and, if so, whether D1/D5R or β-adrenergic receptor activation would reverse the effect. For this study, we chose the chemical LTD approach, using two NMDA applications, to maximize the number of synapses exhibiting the LTD effect in the slice. Consistent with the paradigm for reversal of pharmacological LTD, SKF 38393 (100 μm) was applied for 40 min either immediately or 60 min after the second NMDA application, whereas isoproterenol (1 μm) was applied only at the earlier time point. Control slices received SKF 38393 or isoproterenol alone.

Quantitative Western blot analysis was used to determine the phosphorylation state of the GluR1 serine 845 and 831 residues in NMDA-treated CA1 mini-slices (see Materials and Methods). All tissue was collected at a time equivalent to that at which LTD levels were determined in the electrophysiological experiments. Initial experiments examined these phosphorylation states in whole homogenates made from the mini-slices. Neither 2× NMDA nor SKF 38393, either alone or together, had a significant effect on serine 831 phosphorylation in this preparation, measured 60 min after drug application (data not shown). In contrast, SKF 38393, by itself or in combination with NMDA, induced a twofold increase in serine 845 phosphorylation (2.18 ± 0.41, n = 4, p < 0.05, and 2.31 ± 0.48, n = 7, p < 0.05, respectively). This effect appears to be entirely attributable to D1/D5R activation because NMDA by itself had no significant effect (1.15 ± 0.13; n = 7). In no condition was the total amount of GluR1 altered from control levels.
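The quantitative Western blot values quoted here (e.g., 2.18 ± 0.41) are phosphorylation states expressed relative to untreated controls and, in the PSD-fraction experiments below, normalized to total GluR1. A minimal sketch of that normalization, using made-up densitometry numbers (the function name and values are illustrative only):

```python
def fold_change(phospho_band, total_band, control_phospho, control_total):
    """Phosphorylation state as the ratio of phospho-signal to total GluR1,
    expressed relative to the untreated control (1.0 = no change)."""
    treated = phospho_band / total_band
    control = control_phospho / control_total
    return treated / control

# Hypothetical densitometry readings (relative absorbance units):
fc = fold_change(phospho_band=440.0, total_band=100.0,
                 control_phospho=200.0, control_total=100.0)
# fc comes out near 2.2, i.e. roughly the twofold increase in serine 845
# phosphorylation reported for SKF 38393 in whole homogenates
```

Normalizing to total GluR1 within each lane controls for loading differences, so changes reflect phosphorylation state rather than total receptor amount, consistent with the report that total GluR1 was unchanged in every condition.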

The lack of an effect by NMDA on GluR1 serine 845 phosphorylation levels was unexpected because previous studies had reported a dephosphorylation of this site during NMDA receptor-dependent LTD (Lee et al., 1998). Because previous studies had used a crude membrane preparation, we therefore refined our whole-cell extract to a PSD-enriched fraction (see Materials and Methods), using mini-slices collected in an additional experiment and with each data point normalized to total GluR1 levels. Using this preparation, NMDA caused a significant reduction in serine 845 phosphorylation (0.48 ± 0.10; n = 6; p < 0.001) (Fig. 7A,B), without affecting the serine 831 phosphorylation state (1.18 ± 0.12; n = 4) (Fig. 7A,D). D1/D5R activation by SKF 38393 immediately after the second NMDA application reversed the NMDA-induced dephosphorylation of serine 845 (1.40 ± 0.22; n = 6; p < 0.01), but was ineffective when applied after 60 min (0.72 ± 0.09; n = 6) (Fig. 7A,B). SKF 38393 by itself had no significant effect on the phosphorylation state of serine 845 (1.24 ± 0.18; n = 5), implying that the phosphorylation response observed to the drug in whole homogenates primarily occurred in nonsynaptic receptors.

NMDA-induced LTD and its reversal are correlated with the phosphorylation state of GluR1 serine 845 in PSD-enriched fractions prepared from synaptoneurosomes. CA1 mini-slices were subjected to the same experimental paradigm as shown in Figure 5. Changes in GluR1 phosphorylation states were determined by extracting PSD-enriched fractions from CA1 mini-slice synaptoneurosomes followed by Western blot analysis. A, Representative Western blots of PSD-enriched fractions probed with antibodies recognizing GluR1ser845, GluR1ser831, and GluR1. Total GluR1 was not affected by any manipulation, and thus all phosphorylation state data in B–D were normalized against total GluR1. SKF, SKF 38393; Iso, isoproterenol. B, Summary graph showing that NMDA application (N; n = 6) caused a significant reduction in GluR1Ser845 phosphorylation relative to the no-drug controls (C; n = 6). SKF 38393 (S; n = 5) applied by itself did not significantly affect GluR1Ser845 phosphorylation but completely reversed the NMDA effect (N+S) when applied early (E; 0–40 min; n = 6) but not late (L; 60–100 min; n = 6) after the second NMDA administration. Isoproterenol (I; n = 6) applied by itself resulted in a significant increase in GluR1Ser845 phosphorylation, which was significantly reduced when applied in combination with NMDA (N+I; n = 6) at the early time point. RAU, Relative absorbance units. *p < 0.05, **p < 0.01, ***p < 0.001 by two-tailed independent t test. C, Summary graph showing data from B, corrected for the effects of SKF 38393 and isoproterenol alone by subtracting the mean change in response to SKF 38393 or isoproterenol alone from the corresponding value when the drug was applied with NMDA. The NMDA-induced reduction in GluR1Ser845 phosphorylation was reversed by early but not late application of SKF 38393, nor by isoproterenol (**p < 0.01, one-way ANOVA with Bonferroni's post hoc test).
D, GluR1 serine 831 phosphorylation (n = 4) was not significantly altered by NMDAR, D1/D5R, or β-adrenergic receptor activation.

Isoproterenol caused a large increase in serine 845 phosphorylation when applied alone (1.47 ± 0.16; n = 6) and significantly increased these levels above NMDA-treated levels when applied immediately after NMDA treatment (NMDA plus isoproterenol, 0.90 ± 0.09; NMDA, 0.48 ± 0.10; n = 6; p < 0.01); however, this remained significantly less than isoproterenol applied alone (p < 0.01) (Fig. 7A,B). We considered that, because neither SKF 38393 (Fig. 1) nor isoproterenol (data not shown) by itself caused an increase in basal synaptic transmission, any change in serine 845 phosphorylation levels induced by these agonists alone was unlikely to be involved in the modulation of synaptic transmission after LTD induction. On this basis, we subtracted the mean change in serine 845 phosphorylation levels induced by the agonists alone from that of any NMDA treatment group in which the agonist was applied. As can be seen in Figure 7C, this analysis demonstrates that only SKF 38393 applied immediately after NMDA treatment was effective in fully reversing the induced dephosphorylation, whereas neither SKF 38393 applied 60 min after NMDA treatment nor isoproterenol applied immediately after NMDA treatment could rescue the serine 845 dephosphorylation effect.
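The subtraction correction described above can be sketched as follows, assuming hypothetical normalized phosphorylation values (baseline of 1.0 = untreated control; the variable names and numbers are illustrative, not the study's data):

```python
import statistics

def agonist_corrected(nmda_plus_agonist, agonist_alone):
    """Correct NMDA + agonist phosphorylation values for the agonist's own
    effect, by subtracting the agonist-alone mean change from baseline
    (1.0 = no change) from each combined-treatment value."""
    offset = statistics.mean(agonist_alone) - 1.0
    return [v - offset for v in nmda_plus_agonist]

# Hypothetical normalized serine 845 phosphorylation values:
iso_alone = [1.40, 1.50, 1.55, 1.43]  # isoproterenol by itself
nmda_iso  = [0.85, 0.95, 0.92, 0.88]  # NMDA followed by isoproterenol
corrected = agonist_corrected(nmda_iso, iso_alone)
corrected_mean = statistics.mean(corrected)
```

After this correction, a corrected mean that stays well below 1.0 (as here) indicates that the agonist did not rescue the NMDA-induced dephosphorylation; only values returning to about 1.0 would count as a full reversal, which is what early SKF 38393 achieved in Figure 7C.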

Because NMDAR-dependent LTD might be expected to be mediated in part by a reduction in PSD-associated AMPA receptors (Beattie et al., 2000; Ehlers, 2000), we also examined whether NMDA treatment caused a reduction in GluR1- or GluR3-containing AMPA receptors in the same PSD extracts as used for the phosphorylation analysis. No significant change in either GluR1 or GluR3 protein was found in the PSD fraction after NMDA or its reversal by SKF 38393 (e.g., NMDA treatment group: GluR1, 1.02 ± 0.05; GluR3, 1.18 ± 0.22; n = 6). Together, our findings suggest that the reversal of NMDA-induced LTD by D1/D5R activation is mediated, at least in part, by either preventing or reversing the dephosphorylation of the synaptic GluR1 serine 845 residue without changing overall levels of synaptic AMPA receptors.