How do humans optimize noisy multi-variable functions in experimental settings?
Imagine an experiment like this:

A participant is asked to optimize an unknown function (let's say minimize it). On each trial the participant provides several input values and receives an output value. Now also imagine that the output is noisy, in that the same inputs lead to an output plus a random component.

To take just one of many possible specific examples, imagine the following function

$$Y = (X -3)^2 + (Z-2)^2 + (W-4)^2 + e,$$

where $e$ is normally distributed with mean 0 and standard deviation 3.

On each trial, the participant would provide a value for $X$, $Z$, and $W$, and they would obtain a $Y$ value based on this underlying function. Their aim would be to minimize the value of $Y$. They have not been told the underlying functional form; they only know that there is a global minimum and that there is a random component.
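A task like this is easy to sketch in code. The following is a minimal simulation of the hypothetical function above, with a "participant" that simply makes random local tweaks and keeps whichever guess produced the lowest observed output; the strategy and all parameters are illustrative, not taken from any study:

```python
import random

def noisy_objective(x, z, w, sd=3.0):
    """The example task function: a noisy bowl with its minimum at (3, 2, 4)."""
    return (x - 3)**2 + (z - 2)**2 + (w - 4)**2 + random.gauss(0, sd)

# A crude "participant" strategy: random local tweaks, keep the best guess.
random.seed(1)
best = [0.0, 0.0, 0.0]
best_y = noisy_objective(*best)
for trial in range(500):
    candidate = [v + random.gauss(0, 1) for v in best]
    y = noisy_objective(*candidate)
    if y < best_y:          # note: noise can make this comparison misleading
        best, best_y = candidate, y

print(best)  # tends to drift toward (3, 2, 4), but the noise limits precision
```

Note how the noise term interacts with the greedy accept rule: a lucky low draw can freeze the search at a mediocre point, which is exactly the kind of difficulty a human participant would face.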

I'm interested in reading about the strategies used by humans to do this task in experimental settings. Note I'm not directly interested in how computers do the task or how programmers and mathematicians might complete this task.

Questions

  • What are some good references for learning about the literature on how humans learn to optimize noisy multi-variable functions?
  • What are some of the key findings on how humans optimize noisy multi-variable functions?

This is a bit of a tangential answer, but hopefully still useful.

When we give humans noisy data, we can basically think of them as some sort of Bayesian inference machines that try to figure out what the function the data came from looks like. The important thing we then need to know is how strong a bias (prior) humans have towards expecting certain relationships.

Unfortunately, it seems that humans are extremely biased towards positive linear relationships. I think this will make it very hard for them to optimize data presented as in your question, because they will constantly assume it comes from a straight line. This is really well captured by the following figure from Kalish et al. (2007):

The experiment that generated the above picture is rather different from the one you describe, but we can think of it as involving a very particular type of noise. A person at stage $n$ is given 25 $(x,y)$ pairs from the function at stage $n - 1$. The person is then tested by being given an $x$ value and asked for a $y$, 25 times. The results of this are passed on to the person at stage $n + 1$ as the training data. Thus, we could think of the errors of the person at stage $n$ as noise/errors (although systematic errors) for the person at stage $n + 1$. As you can see, it doesn't take much of this noise to lose all structure of the function you started with and revert to the natural bias of a positive linear relationship. In fact, in condition 1 participants are already completely confused about the U-shaped function after stage $n = 1$ (so the first participant, with no error, already has a hard time understanding the function from $(x,y)$ pairs).
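One crude way to see why a linear prior destroys structure over iterated transmission is to simulate it, caricaturing the human learner as a plain linear regression (an assumption for illustration only; this is not the model used by Kalish et al.):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 25)
y = x**2                      # U-shaped starting function (generation 0)

# Each "generation" sees the previous generation's 25 (x, y) pairs,
# fits a straight line (a stand-in for the human linear bias), and
# passes on its own noisy predictions as the next training set.
for generation in range(3):
    slope, intercept = np.polyfit(x, y, deg=1)
    y = slope * x + intercept + rng.normal(0, 0.05, size=x.shape)

# After a single generation the U shape is already gone; only a line remains.
```

Because a symmetric U has zero linear component, the first line fit flattens it entirely; with any linear bias at all, the structure never comes back in later generations.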

References

Kalish, M. L., Griffiths, T. L., & Lewandowsky, S. (2007). Iterated learning: Intergenerational knowledge transmission reveals inductive biases. Psychonomic Bulletin & Review, 14(2), 288-294.


This seems related to the literature on multiple-cue probability learning (MCPL). In this paradigm, a typical task presents subjects with a list of cues and their values, and asks them to predict the probability of certain outcomes. This paradigm has a decent amount of literature both in the JDM (judgment and decision making) community and in the human factors community. To see the relevance, consider a doctor who has to diagnose a patient (provide treatment) based on a finite set of cues (symptoms).

Empirically, human judgment of this type has been modeled using Egon Brunswik's probabilistic functionalism, perhaps more commonly known as the lens model or social judgment theory. This is a useful methodology for comparing human judgment to true ecological correlations.

The image above depicts the lens model. To give an example, consider the task of a college admissions board who must decide whom to admit. The environment/criterion might be the applicant's final college GPA, and cues might be high school GPA, SAT scores, a writing sample, etc. You can use multiple regression to find the 'true' ecological weights of these cues on the environment criterion, and similarly you can do the same for the admissions board's estimate of a student's success (if they were to estimate GPA).
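The regression step on both sides of the lens can be sketched as follows, using made-up admissions data (all weights and noise levels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical admissions data: three cues per applicant.
hs_gpa  = rng.normal(3.0, 0.5, n)
sat     = rng.normal(1200, 150, n)
writing = rng.normal(5.0, 1.5, n)
cues = np.column_stack([hs_gpa, sat, writing, np.ones(n)])  # last column: intercept

# "Environment" side: true college GPA depends on the cues plus noise.
college_gpa = 0.6 * hs_gpa + 0.001 * sat + 0.05 * writing + rng.normal(0, 0.3, n)

# Judgment side: the board's estimates (here, a noisy, miscalibrated judge
# that ignores the writing sample entirely).
judgments = 0.2 * hs_gpa + 0.002 * sat + 0.0 * writing + rng.normal(0, 0.4, n)

# Multiple regression recovers the ecological weights and the judge's weights;
# comparing the two columns of weights is the core of a lens-model analysis.
eco_weights, *_ = np.linalg.lstsq(cues, college_gpa, rcond=None)
judge_weights, *_ = np.linalg.lstsq(cues, judgments, rcond=None)
```

With real data you would of course not know the generating weights; the point is that the same regression machinery quantifies both the environment and the judge.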

An admission board will (hopefully) observe the effect of different cues on success, and revise their cue weights with experience. Unfortunately, people are typically not so good at this task.

Some common findings:

  • People tend to use no more than 3 cues, even if they claim that they use more.
  • People are typically outperformed by a bootstrap model of themselves.
  • People are often outperformed by a unit weight model of themselves: in other words, if you simply set the highest observed cue weights (on the right-hand side) to 1, and all others to 0, you may get a better predictor of the outcome.

What I take from this is that people will probably have little chance of success at estimating cue weights from a complex equation such as the one you present. However, you could measure these cue weights iteratively to observe learning rates and do other fun stuff -- even if we are all better off being judged by computer algorithms.
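The second and third findings above can be illustrated with simulated data: a "judge" whose ratings are a noisy function of the cues is outperformed both by a regression model of their own ratings (the bootstrap model) and by an equal-weights combination of the cues. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
cues = rng.normal(size=(n, 3))
outcome = cues @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 1.0, n)

# "Judge": uses roughly the right cues but adds a lot of personal noise.
judge = cues @ np.array([0.6, 0.1, 0.1]) + rng.normal(0, 1.5, n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Bootstrap model of the judge: regress the judge's ratings on the cues,
# then use those fitted weights (dropping the judge's random error).
w, *_ = np.linalg.lstsq(cues, judge, rcond=None)
bootstrap_pred = cues @ w

# Unit-weight model: each (already standardized) cue gets weight 1.
unit_pred = cues.sum(axis=1)

print(corr(judge, outcome), corr(bootstrap_pred, outcome), corr(unit_pred, outcome))
```

The bootstrap model wins simply because it keeps the judge's policy while discarding the judge's trial-to-trial inconsistency; the unit-weight model wins because crude but noise-free weights beat noisy ones.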


Motor Control

2.1 Movement Units and Their Limits

One of the basic challenges in the study of motor control is the dissection of fundamental movement units. In this terminology a unit is a relatively invariant pattern of muscular contractions that are typically elicited together. Reference to one of the most complex forms of human movement control, speech, is illustrative (Neville 1995). When we speak we articulate a broadly invariant set of phonemes (basic sound patterns) in sequence. This allows the articulation of speech to be relatively ‘thought free’ in normal discourse, and allows the listener to decode the message through a complex array of perceptual and higher-order cognitive processes (see Syntactic Aspects of Language, Neural Basis of; Motor Control; Sign Language: Psychological and Neural Aspects; Lexical Processes (Word Knowledge): Psychological and Neural Aspects; Speech Production, Neural Basis of). The example is useful in that in the smooth flow of speech individual phonemes are often ‘co-articulated’, with the fine details of one sound unit reflecting the production of preceding and subsequent sound patterns. So even here the ‘intrinsic’ properties of a speech unit can be sensitive to extrinsic influences as defined by other speech units. This is what gives speech its smooth and flowing nature (as opposed, for example, to most computer-generated ‘speech’).

Thus, in some sense movements and their motor control ‘units’ are abstractions that frequently blur at the edges, and which can be defined from a number of complementary perspectives (Golani 1992). In complex forms of movement, such as playing a sport or musical instrument, or dancing with a moving partner, many neural pathways are orchestrated together in dazzling patterns that are widely distributed, serially ordered, and parallel in their operations. It is for this reason that the very distinction between motor control systems and other properties of the nervous system is often difficult to untangle (see Vision for Action: Neural Mechanisms; Cognitive Control (Executive Functions): Role of Prefrontal Cortex).


REVIEW article

  • 1 School of Arts and Humanities, Edith Cowan University, Joondalup, WA, Australia
  • 2 Mary Immaculate College, University of Limerick, Limerick, Ireland

Despite recent close attention to issues related to the reliability of psychological research (e.g., the replication crisis), issues of the validity of this research have not been considered to the same extent. This paper highlights an issue that calls into question the validity of the common research practice of studying samples of individuals, and using sample-based statistics to infer generalizations that are applied not only to the parent population, but to individuals. The lack of ergodicity in human data means that such generalizations are not justified. This problem is illustrated with respect to two common scenarios in psychological research that raise questions for the sorts of theories that are typically proposed to explain human behavior and cognition. The paper presents a method of data analysis that requires closer attention to the range of behaviors exhibited by individuals in our research to determine the pervasiveness of effects observed in sample data. Such an approach to data analysis will produce results that are more in tune with the types of generalizations typical in reports of psychological research than mainstream analysis methods.


Introduction To Robust Design (Taguchi Method)

Robust Design method, also called the Taguchi Method, pioneered by Dr. Genichi Taguchi, greatly improves engineering productivity. By consciously considering the noise factors (environmental variation during the product’s usage, manufacturing variation, and component deterioration) and the cost of failure in the field, the Robust Design method helps ensure customer satisfaction. Robust Design focuses on improving the fundamental function of the product or process, thus facilitating flexible designs and concurrent engineering. Indeed, it is the most powerful method available to reduce product cost, improve quality, and simultaneously reduce development interval.

1. Why Use Robust Design Method?
Over the last five years many leading companies have invested heavily in the Six Sigma approach aimed at reducing waste during manufacturing and operations. These efforts have had great impact on the cost structure and hence on the bottom line of those companies. Many of them have reached the maximum potential of the traditional Six Sigma approach. What would be the engine for the next wave of productivity improvement?

Brenda Reichelderfer of ITT Industries reported on their benchmarking survey of many leading companies: "design directly influences more than 70% of the product life cycle cost; companies with high product development effectiveness have earnings three times the average earnings; and companies with high product development effectiveness have revenue growth two times the average revenue growth." She also observed that a large percentage of product development costs are wasted.

These and similar observations by other leading companies are compelling them to adopt improved product development processes under the banner Design for Six Sigma. The Design for Six Sigma approach is focused on 1) increasing engineering productivity so that new products can be developed rapidly and at low cost, and 2) value based management.

Robust Design method is central to improving engineering productivity. Pioneered by Dr. Genichi Taguchi after the end of the Second World War, the method has evolved over the last five decades. Many companies around the world have saved hundreds of millions of dollars by using the method in diverse industries: automobiles, xerography, telecommunications, electronics, software, etc.

1.1. Typical Problems Addressed By Robust Design
A team of engineers was working on the design of a radio receiver for ground-to-aircraft communication requiring high reliability, i.e., low bit error rate, for data transmission. On the one hand, building a series of prototypes to sequentially eliminate problems would be prohibitively expensive. On the other hand, the computer simulation effort for evaluating a single design was also time-consuming and expensive. How, then, can one speed up development and yet assure reliability?

In another project, a manufacturer had introduced a high-speed copy machine to the field, only to find that the paper feeder jammed almost ten times more frequently than planned. The traditional method for evaluating the reliability of a single new design idea used to take several weeks. How could the company conduct the needed research in a short time and come up with a design that would not embarrass it again in the field?

The Robust Design method has helped reduce the development time and cost by a factor of two or better in many such problems.

In general, engineering decisions involved in product/system development can be classified into two categories:

  • Error-free implementation of the past collective knowledge and experience
  • Generation of new design information, often for improving product quality/reliability, performance, and cost.

While CAD/CAE tools are effective for implementing past knowledge, the Robust Design method greatly improves productivity in the generation of new knowledge by acting as an amplifier of engineering skills. With Robust Design, a company can rapidly achieve the full technological potential of its design ideas and achieve higher profits.

2. Robustness Strategy

Variation reduction is universally recognized as a key to reliability and productivity improvement. There are many approaches to reducing the variability, each one having its place in the product development cycle.

By addressing variation reduction at a particular stage in a product’s life cycle, one can prevent failures in the downstream stages. The Six Sigma approach has made tremendous gains in cost reduction by finding problems that occur in manufacturing or white-collar operations and fixing the immediate causes. The robustness strategy is to prevent problems through optimizing product designs and manufacturing process designs.

The manufacturer of a differential op-amplifier used in coin telephones faced the problem of excessive offset voltage due to manufacturing variability. High offset voltage caused poor voice quality, especially for phones further away from the central office. So, how to minimize field problems and associated cost? There are many approaches:

  1. Compensate the customers for their losses.
  2. Screen out circuits having large offset voltage at the end of the production line.
  3. Institute tighter tolerances through process control on the manufacturing line.
  4. Change the nominal values of critical circuit parameters such that the circuit’s function becomes insensitive to the cause, namely, manufacturing variation.

Approach 4 is the robustness strategy. As one moves from approach 1 to approach 4, one progressively moves upstream in the product delivery cycle and also becomes more efficient in cost control. Hence it is preferable to address the problem as far upstream as possible. The robustness strategy provides the crucial methodology for systematically arriving at solutions that make designs less sensitive to various causes of variation. It can be used for optimizing product design as well as for manufacturing process design.

The Robustness Strategy uses five primary tools:

  1. P-Diagram is used to classify the variables associated with the product into noise, control, signal (input), and response (output) factors.
  2. Ideal Function is used to mathematically specify the ideal form of the signal-response relationship as embodied by the design concept for making the higher-level system work perfectly.
  3. Quadratic Loss Function (also known as Quality Loss Function) is used to quantify the loss incurred by the user due to deviation from target performance.
  4. Signal-to-Noise Ratio is used for predicting the field quality through laboratory experiments.
  5. Orthogonal Arrays are used for gathering dependable information about control factors (design parameters) with a small number of experiments.
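As a concrete illustration of the balance property that makes orthogonal arrays efficient, here is the smallest standard two-level array, L4 (three factors in four runs), together with a check of its pairwise balance. This is a generic sketch, not tied to any particular experiment in the text:

```python
from itertools import combinations

# L4 orthogonal array: three two-level factors in four runs.
# Each column is balanced (each level appears twice) and each pair of
# columns contains every level combination exactly once.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

for i, j in combinations(range(3), 2):
    pairs = sorted((row[i], row[j]) for row in L4)
    assert pairs == [(1, 1), (1, 2), (2, 1), (2, 2)]
```

That pairwise balance is what lets four runs estimate three main effects without confounding them with one another, instead of the eight runs a full factorial would need.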

2.1 P-Diagram

P-Diagram is a must for every development project. It is a way of succinctly defining the development scope. First we identify the signal (input) and response (output) associated with the design concept. For example, in designing the cooling system for a room the thermostat setting is the signal and the resulting room temperature is the response.

Next consider the parameters/factors that are beyond the control of the designer. Those factors are called noise factors. Outside temperature, opening/closing of windows, and number of occupants are examples of noise factors. Parameters that can be specified by the designer are called control factors. The number of registers, their locations, size of the air conditioning unit, insulation are examples of control factors.

Ideally, the resulting room temperature should be equal to the set point temperature. Thus the ideal function here is a straight line of slope one in the signal-response graph. This relationship must hold for all operating conditions. However, the noise factors cause the relationship to deviate from the ideal.

The job of the designer is to select appropriate control factors and their settings so that the deviation from the ideal is minimum at a low cost. Such a design is called a minimum sensitivity design or a robust design. It can be achieved by exploiting nonlinearity of the products/systems. The Robust Design method prescribes a systematic procedure for minimizing design sensitivity and it is called Parameter Design.
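The idea of exploiting nonlinearity can be made concrete with a toy response curve (entirely invented): the same variation in a control setting produces far less output variation where the curve is flat, and a scaling factor can then restore the target mean:

```python
import math
import random
import statistics

def response(setting):
    """Hypothetical saturating response curve: steep at low settings,
    nearly flat at high settings."""
    return 10.0 * (1.0 - math.exp(-setting))

random.seed(0)

def output_spread(nominal, n=2000, noise_sd=0.1):
    """Std. dev. of the output when the setting varies around its nominal."""
    outs = [response(random.gauss(nominal, noise_sd)) for _ in range(n)]
    return statistics.pstdev(outs)

steep = output_spread(0.5)   # the slope of the curve is large here
flat = output_spread(3.0)    # the slope is small here
# The same input variation produces a much smaller output spread on the
# flat part of the curve: that operating point is the robust choice.
```

To first order the output spread is the input spread times the local slope, which is why moving the nominal setting onto a flat region reduces sensitivity without tightening any tolerance.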

An overwhelming majority of product failures and the resulting field costs and design iterations come from ignoring noise factors during the early design stages. The noise factors crop up one by one as surprises in the subsequent product delivery stages causing costly failures and band-aids. These problems are avoided in the Robust Design method by subjecting the design ideas to noise factors through parameter design.

The next step is to specify allowed deviation of the parameters from the nominal values. It involves balancing the added cost of tighter tolerances against the benefits to the customer. Similar decisions must be made regarding the selection of different grades of the subsystems and components from available alternatives. The quadratic loss function is very useful for quantifying the impact of these decisions on customers or higher-level systems. The process of balancing the cost is called Tolerance Design.

The result of using parameter design followed by tolerance design is successful products at low cost.

2.2 Quality Measurement

In quality improvement and design optimization the metric plays a crucial role. Unfortunately, a single metric does not serve all stages of product delivery.

It is common to use the fraction of products outside the specified limits as the measure of quality. Though it is a good measure of the loss due to scrap, it miserably fails as a predictor of customer satisfaction. The quality loss function serves that purpose very well.

Let us define the following variables:

m: target value for a critical product characteristic

+/- Delta : allowed deviation from the target

A : loss due to a defective product

Then the quality loss, $L$, suffered by an average customer due to a product with $y$ as the value of the characteristic is given by the following equation:

$$L(y) = \frac{A}{\Delta^2} (y - m)^2$$

If the output of the factory has a distribution of the critical characteristic with mean $\mu$ and variance $\sigma^2$, then the average quality loss per unit of the product is given by:

$$Q = \frac{A}{\Delta^2} \left[ (\mu - m)^2 + \sigma^2 \right]$$
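Assuming the standard quadratic form of the loss, $L(y) = (A/\Delta^2)(y - m)^2$, these definitions translate directly into code (the numbers below are invented for illustration):

```python
def quality_loss(y, m, delta, A):
    """Quadratic (Taguchi) quality loss: A is the loss incurred when the
    characteristic deviates from target m by the full tolerance delta."""
    k = A / delta**2
    return k * (y - m)**2

def average_quality_loss(mean, variance, m, delta, A):
    """Average loss per unit for output with the given mean and variance."""
    k = A / delta**2
    return k * ((mean - m)**2 + variance)

# A product exactly on target incurs no loss; one at the tolerance
# limit incurs the full cost A of a defective product.
print(quality_loss(10.0, m=10.0, delta=0.5, A=20.0))   # 0.0
print(quality_loss(10.5, m=10.0, delta=0.5, A=20.0))   # 20.0
```

Note how the average-loss formula penalizes variance even when the mean is exactly on target, which is what the fraction-outside-limits metric fails to capture.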

2.3 Signal To Noise (S/N) Ratios

The product/process/system design phase involves deciding the best values/levels for the control factors. The signal to noise (S/N) ratio is an ideal metric for that purpose.

The equation for average quality loss, Q, says that the customer’s average quality loss depends on the deviation of the mean from the target and also on the variance. An important class of design optimization problem requires minimization of the variance while keeping the mean on target.

Between the mean and standard deviation, it is typically easy to adjust the mean on target, but reducing the variance is difficult. Therefore, the designer should minimize the variance first and then adjust the mean on target. Among the available control factors, most should be used to reduce variance; only one or two control factors are adequate for adjusting the mean on target.

The design optimization problem can be solved in two steps:

1. Maximize the S/N ratio, $h$, defined as

$$h = 10 \log_{10}\left(\frac{\mu^2}{\sigma^2}\right)$$

This is the step of variance reduction.

2. Adjust the mean on target using a control factor that has no effect on h. Such a factor is called a scaling factor. This is the step of adjusting the mean on target.

One typically looks for one scaling factor to adjust the mean on target during design and another for adjusting the mean to compensate for process variation during manufacturing.
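A minimal sketch of this two-step procedure, using the common nominal-the-best S/N ratio $10\log_{10}(\text{mean}^2/\text{variance})$ and invented measurements:

```python
import math

def sn_ratio(values):
    """Nominal-the-best S/N ratio: 10 * log10(mean^2 / variance)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean)**2 for v in values) / (n - 1)
    return 10 * math.log10(mean**2 / var)

# Hypothetical measurements of a response under two control-factor settings.
setting_a = [9.0, 11.0, 10.0, 10.5, 9.5]    # mean ~10, small spread
setting_b = [6.0, 14.0, 10.0, 13.0, 7.0]    # mean ~10, large spread

# Step 1: pick the setting with the higher S/N ratio (lower relative variance).
best = max([setting_a, setting_b], key=sn_ratio)

# Step 2: use a scaling factor to move the mean onto the target, here
# modeled as a simple gain g = target / current_mean on the chosen design.
target = 12.0
g = target / (sum(best) / len(best))
adjusted = [g * v for v in best]
```

The scaling step assumes a factor exists that multiplies the response without changing its relative spread; that is exactly what the text calls a scaling factor.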

2.4 Static Versus Dynamic S/N Ratios

In some engineering problems, the signal factor is absent or it takes a fixed value. These problems are called Static problems and the corresponding S/N ratios are called static S/N ratios. The S/N ratio described in the preceding section is a static S/N ratio.

In other problems, the signal and response must follow a function called the ideal function. In the cooling system example described earlier, the response (room temperature) and signal (set point) must follow a linear relationship. Such problems are called dynamic problems and the corresponding S/N ratios are called dynamic S/N ratios.

The dynamic S/N ratio will be illustrated in a later section using a turbine design example.

Dynamic S/N ratios are very useful for technology development, which is the process of generating flexible solutions that can be used in many products.

3. Steps in Robust Parameter Design

Robust Parameter design has 4 main steps:

1. Problem Formulation:

This step consists of identifying the main function, developing the P-diagram, defining the ideal function and S/N ratio, and planning the experiments. The experiments involve changing the control, noise and signal factors systematically using orthogonal arrays.

2. Data Collection/Simulation:

The experiments may be conducted in hardware or through simulation. It is not necessary to have a full-scale model of the product for the purpose of experimentation. It is sufficient and more desirable to have an essential model of the product that adequately captures the design concept. Thus, the experiments can be done more economically.

3. Factor Effects Analysis:

The effects of the control factors are calculated in this step and the results are analyzed to select optimum setting of the control factors.

4. Prediction/Confirmation:

In order to validate the optimum conditions we predict the performance of the product design under baseline and optimum settings of the control factors. Then we perform confirmation experiments under these conditions and compare the results with the predictions. If the results of confirmation experiments agree with the predictions, then we implement the results. Otherwise, the above steps must be iterated.
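Step 3 above (factor effects analysis) can be sketched for a small two-level experiment: given an S/N ratio per run of an L4 array, the main effect of each factor is the difference between its average S/N at the two levels. The array layout is standard; the S/N values are invented for illustration:

```python
# Hypothetical S/N ratios (dB) measured for the four runs of an L4 array.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
sn = [18.0, 21.0, 15.0, 17.0]

# Main effect of each factor: average S/N at level 2 minus average at level 1.
# Since a higher S/N is better, pick the level on the winning side.
for factor in range(3):
    lvl1 = [s for row, s in zip(L4, sn) if row[factor] == 1]
    lvl2 = [s for row, s in zip(L4, sn) if row[factor] == 2]
    effect = sum(lvl2) / len(lvl2) - sum(lvl1) / len(lvl1)
    best_level = 2 if effect > 0 else 1
    print(f"factor {factor}: effect {effect:+.1f} dB, prefer level {best_level}")
```

The predicted optimum combines the winning level of every factor; the confirmation run in step 4 then checks that prediction against reality.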

See Also: Robust Design Case Studies


Intelligible speech despite noisy surroundings

Prof Dr Dorothea Kolossa and Mahdie Karbasi from the research group Cognitive Signal Processing at Ruhr-Universität Bochum (RUB) have developed a method for predicting speech intelligibility in noisy surroundings. The results of their experiments are more precise than those gained through the standard methods applied hitherto. They might thus facilitate the development process of hearing aids. The research was carried out in the course of the EU-funded project "Improved Communication through Applied Hearing Research," or "I can hear" for short.

Specific algorithms in hearing aids filter out background noises to ensure that wearers are able to understand speech in every situation -- regardless of whether they are in a packed restaurant or near a busy road. The challenge for the researchers is to maintain high speech transmission quality while filtering out background noises. Before an optimised hearing aid model is released to the market, new algorithms are subject to time-consuming tests.

Researchers and industrial developers run hearing tests with human participants to analyse to what extent the respective new algorithms will ensure speech intelligibility. If they were able to assess speech intelligibility reliably in an automated process, they could cut down on time-consuming test practices.

New algorithm developed

To date, the standard approaches for predicting speech intelligibility have included the so-called STOI method (short-time objective intelligibility measure) and other reference-based methods. These methods require a clean original signal, i.e. an audio track that has been recorded without any background noises. Based on the differences between the original and the filtered sound, the value of speech intelligibility is estimated. Kolossa and Karbasi have found a way to predict intelligibility without needing a clean reference signal, and their method is nevertheless more precise than the STOI method. Consequently, Kolossa and Karbasi's findings might help reduce test processes in the product development phase of hearing aids.
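To make the idea of a reference-based measure concrete, here is a toy metric in the spirit of (but much simpler than) STOI: correlate the clean and degraded amplitude envelopes over short frames and average the result. The signals and all parameters are invented; this is not the published STOI algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def shorttime_envelope_corr(clean, degraded, frame=256):
    """Toy reference-based metric: average correlation between clean and
    degraded amplitude envelopes over short frames."""
    scores = []
    for start in range(0, len(clean) - frame, frame):
        a = np.abs(clean[start:start + frame])
        b = np.abs(degraded[start:start + frame])
        if a.std() > 0 and b.std() > 0:
            scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores))

t = np.linspace(0, 1, 8000)
speech_like = np.sin(2 * np.pi * 5 * t) * np.sin(2 * np.pi * 200 * t)
slightly_noisy = speech_like + rng.normal(0, 0.1, t.shape)
very_noisy = speech_like + rng.normal(0, 2.0, t.shape)

# More noise -> lower short-time correlation with the clean reference.
```

The practical limitation the article points to is visible here: the function needs `clean` as an argument, so it cannot score a recording for which no noise-free reference exists.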

The RUB researchers have tested their method with 849 individuals with normal hearing. To this end, the participants were asked to assess audio files via an online platform. With the aid of their algorithm, Kolossa and Karbasi estimated which percentage of a sentence from the respective file would be understood by the participants. Subsequently, they compared their predicted value with the test results.

Research outlook

In the next step, Kolossa and Karbasi intend to run the same tests with hearing-impaired participants. They are working on algorithms that can assess and optimise speech intelligibility in accordance with the individual perception threshold or type of hearing impairment. In the best case scenario, the study will thus provide methods for engineering an intelligent hearing aid. Such hearing aids could automatically recognise the wearer's current surroundings and situation. If he or she steps from a quiet street into a restaurant, the hearing aid would register an increase in background noises. Accordingly, it would filter out the ambient noises -- if possible without impairing the quality of the speech signal.

About the project

The main objective of the project "Improved Communication through Applied Hearing Research" was to optimise hearing aids and cochlear implants to ensure that they fulfil their function for their wearer even in very noisy surroundings. RUB researchers worked in an international team together with researchers from the UK, Switzerland, Denmark, and Belgium. Prof Dr Rainer Martin from the RUB Faculty of Electrical Engineering and Information Technology headed the EU-funded project. Industrial partners were hearing aid manufacturer Sivantos and cochlear implant company Cochlear. "I can hear" ended in December 2016.


How learned helplessness is acquired

In a review paper summarizing fifty years of research on the topic, two of the leading researchers in the field—Martin Seligman and Steven Maier—who conducted the premier experiments on the topic, describe, as follows, the mechanisms that were originally assumed to lead to learned helplessness, in the context of animal experiments:

  • First, DETECT. Animals DETECT the dimension of controllability and uncontrollability. (This is also sometimes referred to as the dimension of contingency and noncontingency.)
  • Second, EXPECT. Animals that DETECT uncontrollability EXPECT shock or other events to once again be uncontrollable in new situations, which undermines attempts to escape in those situations.

Essentially, based on this theory, when individuals realize that they cannot control the situation that they’re in, they later expect to be unable to control similar situations too.

However, the researchers suggest that based on the fifty years of evidence that has been accumulated since the initial studies on the topic, and particularly in light of the neuroscientific evidence on the topic, the original theory got the mechanisms of learned helplessness backward. As the researchers state:

“Passivity in response to shock is not learned. It is the default, unlearned response to prolonged aversive events and it is mediated by the serotonergic activity of the dorsal raphe nucleus, which in turn inhibits escape. This passivity can be overcome by learning control, with the activity of the medial prefrontal cortex, which subserves the detection of control leading to the automatic inhibition of the dorsal raphe nucleus. So animals learn that they can control aversive events, but the passive failure to learn to escape is an unlearned reaction to prolonged aversive stimulation.”

Accordingly, they suggest the following mechanism for the acquisition of learned helplessness:

  • First: PASSIVITY/ANXIETY. “The intense activation of the dorsal raphe nucleus by shock sensitizes these neurons and this sensitization lasts for a few days and results in poor escape (passivity) and heightened anxiety… The detection of uncontrollability is not necessary nor is it sufficient for passivity. This is caused by prolonged exposure to aversive stimulation per se.”
  • Second: DETECT and ACT. “When shock is initially escapable, the presence of control is DETECTed… After detection of control, a separate and distinct population of prelimbic neurons are activated that here we call ACT. These neurons project to the dorsal raphe nucleus and inhibit the 5-HT cells that are activated by aversive stimulation, thus preventing dorsal raphe nucleus activation and thereby preventing sensitization of these cells, eliminating passivity and exaggerated fear. So it is the presence of control, not the absence of control, that is detected…”
  • Third: EXPECT. “After the prelimbic-dorsal raphe nucleus ACT circuit is activated, a set of changes that require several hours occurs in this pathway and involves the formation of new proteins related to plasticity. This is now a circuit that EXPECTS control… However, it should be clearly understood that this EXPECTATION may not be a cognitive process or entity as psychologists tend to view them. It is a circuit that provides an expectational function, in the sense that it changes or biases how organisms respond in the future as a consequence of the events that occur in the present.”

In summary, the researchers state that "as the original theory claimed, organisms are sensitive to the dimension of control, and this dimension is critical. However, the part of the dimension that is detected or expected seems now to be the presence of control, not the absence of control." Crucially, however, they also state the following:

“At the psychological level, there are several other loose ends. As a general statement, neural processes in the prefrontal cortex become narrowed by stress (Arnsten, 2015). Thus, the fact that in an aversive situation the brain seems to detect control as the active ingredient rather than a lack of control, does not mean that the brain cannot detect lack of control in other types of circumstances, such as uncontrollable food or unsolvable cognitive problems, or even loud noise.

That is, the findings that we have reviewed do not imply that the brain does not have circuitry to detect noncontingency between events that include actions and outcomes. Rather, it may be that this processing can occur, but is not deployed in situations that are highly aversive such as the original helplessness experiments. So it is important to distinguish between what the brain does under a given set of conditions, and what the brain is capable of under different conditions. This possibility is in need of further research.”

The complexity of this phenomenon is supported via other research on the topic, such as research showing that learned helplessness can be acquired vicariously, by viewing someone else’s experiences, even if you did not have those experiences yourself.

Overall, the mechanisms behind learned helplessness are the subject of much research.

When focusing on learned helplessness as it was acquired in the initial experiments on the topic, particularly those in which animals were exposed to shock that they could not control, the original theory held that animals who experience uncontrollable situations detect that uncontrollability, and come to expect it in future situations.

A newer theory, based on the neuroscientific research on the topic, suggests that passivity in response to shock is the default, unlearned behavior, and that animals can learn to overcome it by detecting that they have control over outcomes.

However, this does not necessarily explain how learned helplessness is acquired in all cases, as there can be variability in terms of how it’s acquired by different organisms in different situations. For example, a mouse exposed to shock could develop learned helplessness in a different manner than a student developing learned helplessness as a result of negative feedback in school.

Nevertheless, from a practical perspective, when it comes to understanding why people, including yourself, display learned helplessness, the key factor is generally the inability to control the outcomes of situations that they’re in. Accordingly, individuals who experience situations where they do not have an ability to control outcomes are expected to display more learned helplessness than individuals who experience situations where they do have an ability to control the outcomes.

Objective vs. subjective helplessness

When considering the concept of learned helplessness, it can help to understand the difference between two types of helplessness:

  • Objective helplessness. Objective helplessness is a state where someone can do nothing to affect the outcome of a situation.
  • Subjective helplessness. Subjective helplessness is a state of mind where someone believes that they can do nothing to affect the outcome of a situation.

Studies on learned helplessness are primarily concerned with situations where individuals who experienced objective helplessness end up developing subjective helplessness, which carries over to other situations where they are not objectively helpless.


Why We Like What We Like

Paul Bloom’s How Pleasure Works: The New Science of Why We Like What We Like provides a wonderful set of arguments for why we love what we love. In my own work I was struck that children seem to have automatic preferences toward social groups that mimic the adult state (in spite of far less experience) and have been working to understand these preferences and their origins. Paul’s book gave me several ideas that I hadn’t considered and I thought his proposals worth sharing more broadly. Enjoy!

I am grateful to APS President Mahzarin Banaji for giving me the opportunity to discuss the science of pleasure.

One of the most exciting ideas in cognitive science is the theory that people have a default assumption that things, people, and events have invisible essences that make them what they are. Experimental psychologists have argued that essentialism underlies our understanding of the physical and social worlds, and developmental and cross-cultural psychologists have proposed that it is instinctive and universal. We are natural-born essentialists.

I propose that this essentialism not only influences our understanding of the world, it also shapes our experience, including our pleasures. What matters most is not the world as it appears to our senses. Rather, the enjoyment we get from something derives from what we think that thing really is. This is true for more intellectual pleasures, such as the appreciation of paintings and stories, but it is true as well for pleasures that seem more animalistic, such as the satisfaction of hunger and lust. For a painting, it matters who the artist was; for a story, it matters whether it is truth or fiction; for a steak, we care about what sort of animal it came from; for sex, we are strongly affected by who we think our sexual partner really is.

What motivates this sort of theory? After all, some questions about pleasure have easy answers, and these have little to do with essentialism. We know why humans get so much joy from eating and drinking. We know why we enjoy eating some things, such as sweet fruit, more than other things, like stones. We know why sex is often fun, and why it can be pleasing to look at a baby’s smiling face and listen to a baby’s laugh. The obvious answers are that animals like us need food and water to survive, need sex to reproduce, and need to attend to our children in order for them to survive. Pleasure is the carrot that drives us toward these reproductively useful activities. As George Romanes observed in 1884, “Pleasure and pain must have been evolved as the subjective accompaniment of processes which are respectively beneficial or injurious to the organism, and so evolved for the purpose or to the end that the organism should seek the one and shun the other.”

We still need to explain how it all worked out so nicely, why it so happens (to mangle the Rolling Stones lyric) that we can’t always get what we want — but we want what we need. This is where Darwin comes in. The theory of natural selection explains, without appeal to an intelligent designer, why our pleasures so nicely incline us toward activities that are beneficial to survival and reproduction — why pleasure is good for the genes.

This is an adaptationist theory of pleasure. It is quite successful for non-human animals. They like what evolutionary biology says that they should like, such as food, water, and sex. To a large extent, this is true of humans as well. But many human pleasures are more mysterious. I begin How Pleasure Works with some examples of this:

Some teenage girls enjoy cutting themselves with razors. Some men pay good money to be spanked by prostitutes. The average American spends over four hours a day watching television. The thought of sex with a virgin is intensely arousing to many men. Abstract art can sell for millions of dollars. Young children enjoy playing with imaginary friends and can be comforted by security blankets. People slow their cars to look at gory accidents and go to movies that make them cry.

Consider also the pleasures of music, sentimental objects (like a child’s security blanket), and religious ritual. Now, one shouldn’t be too quick to abandon adaptationist explanations, and there are some serious proposals about the selective advantages of certain puzzling pleasures: The universal love of stories might evolve as a form of mental practice to build up vicarious experience with the world, and to safely explore alternative realities. Art and sports might exist as displays of fitness. Animals constantly assess one another as allies and mates; these human activities might be our equivalent of the peacock’s tail, evolved to show off our better selves. Music and dance might have evolved as a coordinating mechanism to enhance social cooperation and good feelings toward one another.

Still, this approach is limited. Many of our special pleasures are useless or maladaptive, both in our current environment and the environment in which our species has evolved. There is no reproductive benefit in enjoying security blankets, paintings by Kandinsky, or sexual masochism.

Many psychologists are wary of adaptationist explanations and would defend the alternative that our uniquely human pleasures are cultural inventions. They don’t doubt that human brains have evolved, but they argue that what humans have come to possess is an enhanced capacity for flexibility: we can acquire ideas, practices, and tastes that are arbitrary from a biological perspective.

This plasticity theory has to be right to some extent. Nobody could deny that culture can shape and structure human pleasure; even those pleasures that we share with other animals, such as food and sex, manifest themselves in different ways across societies. Taken to an extreme, then, one might conclude that although natural selection played some limited role in shaping what we like — we have evolved hunger and thirst, a sex drive, curiosity, some social instincts — it had little to do with the specifics. In the words of the critic Louis Menand, “every aspect of life has a biological foundation in exactly the same sense, which is that unless it was biologically possible, it wouldn’t exist. After that, it’s up for grabs.”

I spend much of How Pleasure Works arguing that this is mistaken. Most pleasures have early developmental origins — they are not acquired through immersion into a society. And they are shared by all humans; the variety that one sees can be understood as variation on a universal theme. Painting is a cultural invention, but the love of art is not. Societies have different stories, but all stories share certain themes. Tastes in food and sex differ — but not by all that much. It is true that we can imagine cultures in which pleasure is very different, where people rub food in feces to improve its taste and have no interest in salt or sugar, or where they spend fortunes on forgeries and throw originals into the trash, or spend happy hours listening to static, cringing at the sound of a melody. But this is science fiction, not reality.

I think that humans start off with a fixed list of pleasures and we can’t add to that list. This might sound like an insanely strong claim, given the inventions of chocolate, video games, cocaine, dildos, saunas, crossword puzzles, reality television, novels, and so on. But I would suggest that these are enjoyable because they connect — in a reasonably direct way — to pleasures that humans already possess. Hot fudge sundaes and barbecued ribs are modern inventions, but they appeal to our prior love of sugar and fat. There are novel forms of music created all the time, but a creature that was biologically unprepared for rhythm would never grow to like any of them; they will always be noise.

Some pleasures, then, are neither biological adaptations nor arbitrary cultural inventions. This brings us to a third approach, explored in my book, which is that many of our most interesting pleasures are evolutionary accidents.

The most obvious cases here are those in which something has evolved for function X but later comes to be used for function Y — what Darwin called “preadaptations.” As a simple example, many people enjoy pornography, but this isn’t because our porn-loving ancestors had more offspring than the porn-abstainers. Rather, certain images have their effect, at least in part, because they tickle the same part of the mind that responds to actual sex. This arousal is neither an adaptation nor an arbitrary learned response — it’s a byproduct, an accident. I have argued elsewhere that the same holds for the human capacity for word learning. Children are remarkable at learning words, but they do so, not through a capacity specifically evolved for that purpose, but through systems that have evolved for other functions, such as monitoring the intentions of others. Word learning is a lucky accident.

More specifically, many of our pleasures may be accidental byproducts of our essentialism. Different sorts of essentialism have been proposed by psychologists. There is category essentialism, which is the belief that members of a given category share a deep hidden nature. This includes belief in the physical essences of natural things like animals and plants, where the essence is internal to the object, as well as belief in the psychological essences of human-made things such as tools and artwork, where the essence is the object’s history, including the intentions of the person who created it. Then there is individual essentialism, which is the belief that a given individual has an essence that distinguishes it from other members of its category, even from perfect duplicates.

Our essentialist psychology shapes our pleasure. Sometimes the relevant essence is category essence, such as in the domain of sex, where the assumed essences of categories such as male and female turn out to powerfully constrain what people like. Sometimes the relevant essence is individual essence, which helps capture how certain consumer products get their value — such as an original painting by Marc Chagall or John F. Kennedy’s tape measure (which sold for about $50,000). More generally, the proposal is that our likes and dislikes are powerfully influenced by our beliefs about the essences of things.

I hope my book sparks debate over these different theories of why we like what we like. In a recent discussion, Paul Rozin has worried about the narrowness of the modern sciences of the mind and points out that if you look through a psychology textbook you will find little or nothing about sports, art, music, drama, literature, play, and religion. These are wonderful and important domains of human life, and we won’t fully understand any of them until we understand pleasure.


Results

Manipulation of decision bias affects sensory evidence accumulation

In three EEG recording sessions, human participants (N = 16) viewed a continuous stream of horizontal, vertical and diagonal line textures alternating at a rate of 25 textures/second. The participants’ task was to detect an orientation-defined square presented in the center of the screen and report it via a button press (Figure 2A). Trials consisted of a fixed-order sequence of textures embedded in the continuous stream (total sequence duration 1 s). A square appeared in the fifth texture of a trial in 75% of the presentations (target trials), while in 25% a homogenous diagonal texture appeared in the fifth position (nontarget trials). Although the onset of a trial within the continuous stream of textures was not explicitly cued, the similar distribution of reaction times in target and nontarget trials suggests that participants used the temporal structure of the task even when no target appeared (Figure 2—figure supplement 1A). Consistent and significant EEG power modulations after trial onset (even for nontarget trials) further confirm that subjects registered trial onsets in the absence of an explicit cue, plausibly using the onset of a fixed order texture sequence as an implicit cue (Figure 2—figure supplement 1B).

Strategic decision bias shift toward liberal biases evidence accumulation.

(A) Schematic of the visual stimulus and task design. Participants viewed a continuous stream of full-screen diagonally, horizontally and vertically oriented textures at a presentation rate of 40 ms (25 Hz). After random inter-trial intervals, a fixed-order sequence was presented embedded in the stream. The fifth texture in each sequence either consisted of a single diagonal orientation (target absent), or contained an orthogonal orientation-defined square (either 45° or 135° orientation). Participants decided whether they had just seen a target, reporting detected targets by button press. Liberal and conservative conditions were administered in alternating nine-min blocks by penalizing either misses or false alarms, respectively, using aversive tones and monetary deductions. Depicted square and fixation dot sizes are not to scale. (B) Average detection rates (hits and false alarms) during both conditions. Miss rate is equal to 1 – hit rate since both are computed on stimulus present trials, and correct-rejection rate is equal to 1 – false alarm rate since both are computed on stimulus absent trials, together yielding the four SDT stimulus-response categories. (C) SDT parameters for sensitivity and criterion. (D) Schematic and simplified equation of drift diffusion model accounting for reaction time distributions for actively reported target-present and implicit target-absent decisions. Decision bias in this model can be implemented by either shifting the starting point of the evidence accumulation process (Z), or by adding an evidence-independent constant (‘drift bias’, db) to the drift rate. See text and Figure 1 for details. Notation: dy, change in decision variable y per unit time dt; v·dt, mean drift (multiplied with one for signal + noise (target) trials, and −1 for noise-only (nontarget) trials); db·dt, drift bias; and cdW, Gaussian white noise (mean = 0, variance = c²·dt).
(E) Difference in Bayesian Information Criterion (BIC) goodness of fit estimates for the drift bias and the starting point models. A lower delta BIC value indicates a better fit, showing superiority of the drift bias model to account for the observed results. (F) Estimated model parameters for drift rate and drift bias in the drift bias model. Error bars, SEM across 16 participants. ***p<0.001; n.s., not significant. Panel D is modified and reproduced with permission from de Gee et al. (2017) (Figure 4A, published under a CC BY 4.0 license).

Figure 2—source data 1

This csv table contains the data for Figure 2 panels B, C, E and F.

In alternating nine-minute blocks of trials, we actively biased participants’ perceptual decisions by instructing them either to report as many targets as possible (‘Detect as many targets as possible!’, liberal condition), or to only report high-certainty targets (‘Press only if you are really certain!’, conservative condition). Participants were free to respond at any time during a block whenever they detected a target. A trial was considered a target-present response when a button press occurred before the fixed-order sequence ended (i.e. within 0.84 s after onset of the fifth texture containing the (non)target, see Figure 2A). We provided auditory feedback and applied monetary penalties following missed targets in the liberal condition and following false alarms in the conservative condition (Figure 2A; see Materials and methods for details). The median number of trials for each SDT category across participants was 1206 hits, 65 false alarms, 186 misses and 355 correct rejections in the liberal condition, and 980 hits, 12 false alarms, 419 misses and 492 correct rejections in the conservative condition.

Participants reliably adopted the intended decision bias shift across the two conditions, as shown by both the hit rate and the false alarm rate going down in tandem as a consequence of a more conservative bias (Figure 2B). The difference between hit rate and false alarm rate was not significantly modulated by the experimental bias manipulations (p=0.81, two-sided permutation test, 10,000 permutations, see Figure 2B). However, target detection performance computed using standard SDT d’ (perceptual sensitivity, reflecting the distance between the noise and signal distributions in Figure 1A) (Green and Swets, 1966) was slightly higher during the conservative condition (liberal: d’=2.0 (s.d. 0.90) versus conservative: d’=2.31 (s.d. 0.82), p=0.0002, see Figure 2C, left bars). We quantified decision bias using the standard SDT criterion measure c, in which positive and negative values reflect conservative and liberal biases, respectively (see the blue and red vertical lines in Figure 1A). This uncovered a strong experimentally induced bias shift from the conservative to the liberal condition (liberal: c = –0.13 (s.d. 0.4), versus conservative: c = 0.73 (s.d. 0.36), p=0.0001, see Figure 2C), as well as a conservative average bias across the two conditions (c = 0.3 (s.d. 0.31), p=0.0013).
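The SDT measures reported here follow directly from the hit and false-alarm rates via the inverse cumulative normal. A minimal sketch using Python's standard library (the rates below are illustrative values, not the study's data, and the function name is ours):

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Compute SDT sensitivity (d') and criterion (c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse cumulative normal (z-transform)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative rates: a conservative observer misses often but rarely false-alarms.
d, c = sdt_measures(hit_rate=0.70, fa_rate=0.02)
print(round(d, 2), round(c, 2))  # → 2.58 0.76
```

With these illustrative rates the criterion comes out positive, i.e. conservative, matching the sign convention used in the text.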

Because the SDT framework is static over time, we further investigated how bias affected various components of the dynamic decision process by fitting different variants of the drift diffusion model (DDM) to the behavioral data (Figure 1B,C) (Ratcliff and McKoon, 2008). The DDM postulates that perceptual decisions are reached by accumulating noisy sensory evidence toward one of two decision boundaries representing the choice alternatives. Crossing one of these boundaries can either trigger an explicit behavioral report to indicate the decision (for target-present responses in our experiment), or remain implicit (i.e. without active response, for target-absent decisions in our experiment). The DDM captures the dynamic decision process by estimating parameters reflecting the rate of evidence accumulation (drift rate), the separation between the boundaries, as well as the time needed for stimulus encoding and response execution (non-decision time) (Ratcliff and McKoon, 2008). The DDM is able to estimate these parameters based on the shape of the RT distributions for actively reported (target-present) decisions along with the total number of trials in which no response occurred (i.e. implicit target-absent decisions) (Ratcliff et al., 2018).

We fitted two variants of the DDM to distinguish between two possible mechanisms that can bring about a change in choice bias: one in which the starting point of evidence accumulation moves closer to one of the decision boundaries (‘starting point model’, Figure 1B) (Mulder et al., 2012), and one in which the drift rate itself is biased toward one of the boundaries (de Gee et al., 2017) (‘drift bias model’, see Figure 1C, referred to as drift criterion by Ratcliff and McKoon (2008)). The drift bias parameter is determined by estimating the contribution of an evidence-independent constant added to the drift (Figure 2D). In the two respective models, we freed either the drift bias parameter (db, see Figure 2D) for the two conditions while keeping starting point (z) fixed across conditions (for the drift bias model), or vice versa (for the starting point model). Permitting only one parameter at a time to vary freely between conditions allowed us to directly compare the models without having to penalize either model for the number of free parameters. These alternative models make different predictions about the shape of the RT distributions in combination with the response ratios: a shift in starting point results in more target-present choices particularly for short RTs, whereas a shift in drift bias grows over time, resulting in more target-present choices also for longer RTs (de Gee et al., 2017; Ratcliff and McKoon, 2008; Urai et al., 2018). The RT distributions above and below the evidence accumulation graphs in Figure 1B and C illustrate these different effects. In both models, all of the non-bias related parameters (drift rate v, boundary separation a and non-decision time u + w, see Figure 2D) were also allowed to vary by condition.
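The distinction between the two mechanisms can be illustrated with a toy Euler simulation of the accumulation equation in Figure 2D: a positive drift bias keeps pushing the decision variable toward the target-present boundary throughout the trial. This is a sketch only; the parameter values are arbitrary illustrations rather than the fitted estimates, and the function name `simulate_ddm` is our own.

```python
import random

def simulate_ddm(v=0.8, db=0.0, z=0.5, a=1.0, c=1.0, dt=0.01, max_t=3.0, rng=None):
    """Simulate one noise-only (nontarget) trial of dy = (-v + db)·dt + c·dW.
    Returns (choice, rt); choice 1 = target-present boundary (a), 0 = absent (0).
    Trials that time out count as implicit target-absent decisions."""
    rng = rng or random
    y = z * a  # starting point expressed as a fraction of boundary separation
    t = 0.0
    while 0.0 < y < a and t < max_t:
        y += (-v + db) * dt + c * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return (1 if y >= a else 0), t

rng = random.Random(1)
# False-alarm rate on nontarget trials under a neutral vs a positive drift bias:
neutral = sum(simulate_ddm(db=0.0, rng=rng)[0] for _ in range(2000)) / 2000
liberal = sum(simulate_ddm(db=1.2, rng=rng)[0] for _ in range(2000)) / 2000
print(neutral < liberal)  # a positive drift bias yields more target-present choices
```

On nontarget trials the mean drift is −v, so adding a positive drift bias db raises the false-alarm rate, qualitatively as in the liberal condition.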

We found that the starting point model provided a worse fit to the data than the drift bias model (starting point model, Bayesian Information Criterion (BIC) = 7938; drift bias model, BIC = 7926; Figure 2E, see Materials and methods for details). Specifically, for 15/16 participants the drift bias model provided a better fit than the starting point model, for 12 of which delta BIC >6, indicating strong evidence in favor of the drift bias model (Kass and Raftery, 1995). Despite the lower BIC for the drift bias model, however, we note that to the naked eye both models provide similarly reasonable fits to the single-participant RT distributions (Figure 2—figure supplement 2). Finally, we compared these two models to a model in which both drift bias and starting point were fixed across the conditions, while still allowing the non-bias-related parameters to vary per condition. This model provided the lowest goodness of fit (delta BIC >6 for both models for all participants).
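BIC values like those above are computed from each model's maximized log-likelihood, number of free parameters and number of observations; because both bias models free the same number of parameters, the delta BIC reduces to a likelihood comparison. A hedged sketch with made-up likelihood values (not the study's fits):

```python
import math

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion: k·ln(n) − 2·ln(L̂); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits: same data and parameter count, differing only in likelihood.
bic_start = bic(log_likelihood=-3950.0, n_params=9, n_obs=1800)
bic_drift = bic(log_likelihood=-3944.0, n_params=9, n_obs=1800)
delta = bic_start - bic_drift
print(delta > 6)  # delta BIC > 6: strong evidence for the better-fitting model
```

With equal parameter counts, the penalty terms cancel and delta BIC is just twice the log-likelihood difference, which is why freeing only one bias parameter per model made the comparison fair.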

Given the superior performance of the drift bias model (in terms of BIC), we further characterized decision making under the bias manipulation using parameter estimates from this model (see below where we revisit the implausibility of the starting point model when inspecting the lack of pre-stimulus baseline effects in sensory or motor cortex). Drift rate, reflecting the participants’ ability to discriminate targets and nontargets, was somewhat higher in the conservative compared to the liberal condition (liberal: v = 2.39 (s.d. 1.07), versus conservative: v = 3.06 (s.d. 1.16), p=0.0001, permutation test, Figure 2F, left bars). Almost perfect correlations across participants in both conditions between DDM drift rate and SDT d’ provided strong evidence that the drift rate parameter captures perceptual sensitivity (liberal, r = 0.98, p=1e–10; conservative, r = 0.96, p=5e–9; see Figure 2—figure supplement 3A). Regarding the DDM bias parameters, the condition-fixed starting point parameter in the drift bias model was smaller than half the boundary separation (i.e. closer to the target-absent boundary (z = 0.24 (s.d. 0.06), p<0.0001, tested against 0.5)), indicating an overall conservative starting point across conditions (Figure 2—figure supplement 3D), in line with the overall positive SDT criterion (see Figure 2C, right panel). Strikingly, however, whereas the drift bias parameter was on average not different from zero in the conservative condition (db = –0.04 (s.d. 1.17), p=0.90), drift bias was strongly positive in the liberal condition (db = 2.08 (s.d. 1.0), p=0.0001; liberal vs conservative: p=0.0005; Figure 2F, right bars). The overall conservative starting point combined with a condition-specific neutral drift bias explained the conservative decision bias (as quantified by SDT criterion) in the conservative condition (Figure 2C).
Likewise, in the liberal condition, the overall conservative starting point combined with a condition-specific positive drift bias (pushing the drift toward the target-present boundary) explained the neutral bias observed with SDT criterion (c around zero for liberal, see Figure 2C).

Convergent with these modeling results, drift bias was strongly anti-correlated across participants with both SDT criterion (r = –0.89 for both conditions, p=4e–6) and average reaction time (liberal, r = –0.57, p=0.02; conservative, r = –0.82, p=1e–4; see Figure 2—figure supplement 3B,C). The strong correlations between drift rate and d’ on the one hand, and drift bias and c on the other, provide converging evidence that the SDT and DDM frameworks capture similar underlying mechanisms, while the DDM additionally captures the dynamic nature of perceptual decision making by linking the decision bias manipulation to the evidence accumulation process itself. As a control, we also correlated starting point with criterion, and found that the correlations were somewhat weaker in both conditions (liberal, r = –0.75; conservative, r = –0.77), suggesting that the drift bias parameter better captured decision bias as instantiated by SDT.

Finally, the bias manipulation also affected two other parameters in the drift bias model that were not directly related to sensory evidence accumulation: boundary separation was slightly but reliably higher during the liberal compared to the conservative condition (p<0.0001), and non-decision time (comprising time needed for sensory encoding and motor response execution) was shorter during liberal (p<0.0001) (Figure 2—figure supplement 3D). In conclusion, the drift bias variant of the drift diffusion model best explained how participants adjusted to the decision bias manipulations. In the next sections, we used spectral analysis of the concurrent EEG recordings to identify a plausible neural mechanism that reflects biased sensory evidence accumulation.

Task-relevant textures induce stimulus-related responses in visual cortex

Sensory evidence accumulation in a visual target detection task presumably relies on stimulus-related signals processed in visual cortex. Such stimulus-related signals are typically reflected in cortical population activity exhibiting a rhythmic temporal structure (Buzsáki and Draguhn, 2004). Specifically, bottom-up processing of visual information has previously been linked to increased high-frequency (>40 Hz, i.e. gamma) electrophysiological activity over visual cortex (Bastos et al., 2015; Michalareas et al., 2016; Popov et al., 2017; van Kerkoerle et al., 2014). Figure 3 shows significant electrode-by-time-by-frequency clusters of stimulus-locked EEG power, normalized with respect to the condition-specific pre-trial baseline period (–0.4 to 0 s). We observed a total of four distinct stimulus-related modulations, which emerged after target onset and waned around the time of response: two in the high-frequency range (>36 Hz, Figure 3A (top) and Figure 3B) and two in the low-frequency range (<36 Hz, Figure 3A (bottom) and Figure 3C). First, we found a spatially focal modulation in a narrow frequency range around 25 Hz reflecting the steady state visual evoked potential (SSVEP) arising from entrainment by the visual stimulation frequency of our experimental paradigm (Figure 3A, bottom panel), as well as a second modulation from 42 to 58 Hz comprising the SSVEP’s harmonic (Figure 3A, top panel). Both SSVEP frequency modulations have a similar topographic distribution (see left panels of Figure 3A).
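The normalization used for these clusters, percent signal change relative to the pre-trial window (–0.4 to 0 s), can be sketched on a synthetic frequency-by-time power array; shapes and values below are illustrative, and the function name is ours.

```python
import numpy as np

def percent_signal_change(power: np.ndarray, times: np.ndarray,
                          baseline=(-0.4, 0.0)) -> np.ndarray:
    """Normalize a (freq x time) power array as percent change from a pre-stimulus baseline."""
    mask = (times >= baseline[0]) & (times < baseline[1])
    base = power[:, mask].mean(axis=1, keepdims=True)  # per-frequency baseline mean
    return 100.0 * (power - base) / base

times = np.arange(-0.4, 1.0, 0.01)
power = np.ones((3, times.size))   # flat pre-stimulus power in three frequency bins
power[:, times >= 0.2] *= 1.5      # simulated 50% post-stimulus power increase
mod = percent_signal_change(power, times)
print(mod[:, times >= 0.2].max())  # → 50.0
```

Averaging the baseline per frequency bin (rather than globally) mirrors the per-frequency normalization implied by time-frequency representations like those in Figure 3.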

EEG spectral power modulations related to stimulus processing and motor response.

Each panel row depicts a three-dimensional (electrodes-by-time-by-frequency) cluster of power modulation, time-locked both to trial onset (left two panels) and button press (right two panels). Power modulation outside of the significant clusters is masked out. Modulation was computed as the percent signal change from the condition-specific pre-stimulus period (–0.4 to 0 s) and averaged across conditions. Topographical scalp maps show the spatial extent of clusters by integrating modulation over time-frequency bins. Time-frequency representations (TFRs) show modulation integrated over electrodes indicated by black circles in the scalp maps. Circle sizes indicate electrode weight in terms of proportion of time-frequency bins contributing to the TFR. P-values above scalp maps indicate multiple comparison-corrected cluster significance using a permutation test across participants (two-sided, N = 14). Solid vertical lines indicate the time of trial onset (left) or button press (right), dotted vertical lines indicate time of (non)target onset. Integr. M., integrated power modulation. SSVEP, steady state visual evoked potential. (A) (Top) 42–58 Hz (SSVEP harmonic) cluster. (A) (Bottom). Posterior 23–27 Hz (SSVEP) cluster. (B) Posterior 59–100 Hz (gamma) cluster. The clusters in A (Top) and B were part of one large cluster (hence the same p-value), and were split based on the sharp modulation increase precisely in the 42–58 Hz range. (C) 12–35 Hz (beta) suppression cluster located more posteriorly aligned to trial onset, and more left-centrally when aligned to button press.

Third, we observed a 59–100 Hz (gamma) power modulation (Figure 3B), after carefully controlling for high-frequency EEG artifacts due to small fixational eye movements (microsaccades) by removing microsaccade-related activity from the data (Hassler et al., 2011; Hipp and Siegel, 2013; Yuval-Greenberg et al., 2008), and by suppressing non-neural EEG activity through scalp current density (SCD) transformation (Melloni et al., 2009; Perrin et al., 1989) (see Materials and methods for details). Importantly, the topography of the observed gamma modulation was confined to posterior electrodes, in line with a role of gamma in bottom-up processing in visual cortex (Ni et al., 2016). Finally, we observed suppression of low-frequency beta (11–22 Hz) activity in posterior cortex, which typically occurs in parallel with enhanced stimulus-induced gamma activity (Donner and Siegel, 2011; Kloosterman et al., 2015a; Meindertsma et al., 2017; Werkle-Bergner et al., 2014) (Figure 3C). Response-locked, this cluster was most pronounced over left motor cortex (electrode C4), plausibly due to the right-hand button press that participants used to indicate target detection (Donner et al., 2009). In the next sections, we characterize these signals separately for the two conditions, investigating stimulus-related signals within a pooling of 11 occipito-parietal electrodes based on the gamma enhancement in Figure 3B (Oz, POz, Pz, PO3, PO4, and P1 to P6), and motor-related signals in left-hemispheric beta (LHB) suppression in electrode C4 (Figure 3C) (O'Connell et al., 2012).

EEG power modulation time courses consistent with the drift bias model

Our behavioral results suggest that participants biased sensory evidence accumulation in the liberal condition, rather than changing their starting point. We next sought to provide converging evidence for this conclusion by examining pre-stimulus, post-stimulus, and motor-related EEG activity. Following previous studies, we hypothesized that a starting point bias would be reflected in a difference in pre-motor baseline activity between conditions before onset of the decision process (Afacan-Seref et al., 2018; de Lange et al., 2013), and/or in a difference in pre-stimulus activity such as seen in bottom-up stimulus-related SSVEP and gamma power signals (Figure 4A shows the relevant clusters as derived from Figure 3). Thus, we first investigated the time course of raw power in the SSVEP, gamma, and LHB ranges between conditions (see Figure 4B). None of these markers showed a meaningful difference in pre-stimulus baseline activity. Statistically comparing the raw pre-stimulus activity between liberal and conservative in a baseline interval between –0.4 and 0 s prior to trial onset yielded p=0.52, p=0.51, and p=0.91 (permutation tests) for the respective signals. This confirms a highly similar starting point of evidence accumulation in all these signals. Next, we predicted that a shift in drift bias would be reflected in a steeper slope of post-stimulus ramping activity (leading up to the decision). We reasoned that the best way of ascertaining such an effect would be to baseline the activity to the interval prior to stimulus onset (using the interval between –0.4 and 0 s), such that any post-stimulus effect we find cannot be explained by pre-stimulus differences (if any). The time courses of post-stimulus and response-locked activity after baselining can be found in Figure 4C. All three signals diverged between the liberal and conservative conditions after trial onset, consistent with adjustments in the process of evidence accumulation.
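The baseline comparison described above can be sketched as a sign-flipping permutation test on the per-participant condition difference. This is a minimal illustration with invented numbers; the actual data, electrode pooling, and cluster correction used in the paper are not reproduced here.

```python
import random

def paired_permutation_test(cond_a, cond_b, n_perm=10000, seed=1):
    """Two-sided sign-flipping permutation test on the mean paired
    difference. Under the null hypothesis the condition labels are
    exchangeable within each participant, so each per-participant
    difference can have its sign flipped at random."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(n_perm):
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return hits / n_perm

# Made-up per-participant baseline power values for two conditions;
# near-identical baselines should yield a clearly non-significant p.
liberal_baseline = [1.0, 2.0, 1.5, 2.5, 1.8]
conservative_baseline = [1.1, 1.9, 1.6, 2.4, 1.9]
p = paired_permutation_test(liberal_baseline, conservative_baseline)
```

The same machinery underlies the cluster-corrected tests elsewhere in the paper; only the test statistic and the correction over time-frequency bins differ.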
Specifically, we observed higher peak modulation levels for the liberal condition in all three stimulus-locked signals (p=0.08, p=0.002 and p=0.023, permutation tests for SSVEP, gamma and LHB, respectively), and found a steeper slope toward the button press for LHB (p=0.04). Finally, the event-related potential in motor cortex also showed a steeper slope toward report for liberal (p=0.07, Figure 4, bottom row; the baseline plot is not meaningful for time-domain signals due to mean removal during preprocessing). Taken together, these findings provide converging evidence that participants implemented a liberal decision bias by adjusting the rate of evidence accumulation toward the target-present choice boundary, but not its starting point. In the next sections, we sought to identify a neural mechanism that could underlie these biases in the rate of evidence accumulation.

Experimental task manipulations affect the time courses of stimulus- and motor-related EEG signals, but not their starting points.

Raw power throughout the baseline period and time courses of power modulation time-locked to trial start and button press. (A) Relevant electrode clusters and frequency ranges (from Figure 3): posterior SSVEP, posterior gamma and left-hemispheric beta (LHB). (B) Time course of raw power in a wide interval around the stimulus (–0.8 to 0.8 s) for these clusters. (C) Stimulus-locked and response-locked percent signal change from baseline (baseline period: –0.4 to 0 s). Error bars, SEM. Black horizontal bar indicates significant difference between conditions, cluster-corrected for multiple comparisons (p<0.05, two-sided). SSVEP, steady state visual evoked potential; LHB, left-hemispheric beta; ERP, event-related potential; SCD, scalp current density.

Liberal bias is reflected in pre-stimulus midfrontal theta enhancement and posterior alpha suppression

Given a lack of pre-stimulus (starting-point) differences in specific frequency ranges involved in stimulus processing or motor responses (Figure 4B), we next focused on other pre-stimulus differences that might be the root cause of the post-stimulus differences we observed in Figure 4C. To identify such signals at high frequency resolution, we computed spectral power in a wide time window from –1 s until trial start. We then ran a cluster-based permutation test across all electrodes and frequencies in the low-frequency domain (1–35 Hz), looking for power modulations due to our experimental manipulations. Pre-stimulus spectral power indeed uncovered two distinct modulations in the liberal compared to the conservative condition: (1) theta modulation in midfrontal electrodes and (2) alpha modulation in posterior electrodes. Figure 5A depicts the difference between the liberal and conservative conditions, confirming significant clusters (p<0.05, cluster-corrected for multiple comparisons) of enhanced theta (2–6 Hz) in frontal electrodes (Fz, Cz, FC1, and FC2), as well as suppressed alpha (8–12 Hz) in a group of posterior electrodes, including all 11 electrodes selected previously based on post-stimulus gamma modulation (Figure 3). The two modulations were uncorrelated across participants (r = 0.06, p=0.82), suggesting they reflect different neural processes related to our experimental task manipulations. These findings are consistent with literature pointing to a role of midfrontal theta as a source of cognitive control signals originating from prefrontal cortex (Cohen and Frank, 2009; van Driel et al., 2012) and of posterior alpha reflecting spontaneous trial-to-trial fluctuations in decision bias (Iemi et al., 2017). The fact that these pre-stimulus effects occur as a function of our experimental manipulation suggests that they are a hallmark of strategic bias adjustment, rather than a mere correlate of spontaneous shifts in decision bias.
Importantly, this finding implies that humans are able to actively control pre-stimulus alpha power in visual cortex (possibly through top-down signals from frontal cortex), plausibly acting to bias sensory evidence accumulation toward the response alternative that maximizes reward.

Adopting a liberal decision bias is reflected in increased midfrontal theta and suppressed pre-stimulus alpha power.

(A) Significant clusters of power modulation between liberal and conservative in a pre-stimulus window between −1 and 0 s before trial onset. When performing a cluster-based permutation test over all frequencies (1–35 Hz) and electrodes, two significant clusters emerged: theta (2–6 Hz, top) and alpha (8–12 Hz, bottom). Left panels: raw power spectra of pre-stimulus neural activity for conservative and liberal separately in the significant clusters (for illustration purposes). Middle panels: liberal – conservative raw power spectrum. Black horizontal bar indicates statistically significant frequency range (p<0.05, cluster-corrected for multiple comparisons, two-sided). Right panels: corresponding liberal – conservative scalp topographic maps of the pre-stimulus raw power difference between conditions for EEG theta power (2–6 Hz) and alpha power (8–12 Hz). Plotting conventions as in Figure 3. Error bars, SEM across participants (N = 15). (B) Probability density distributions of single-trial alpha power values for both conditions, averaged across participants.

Pre-stimulus alpha power is linked to cortical gamma responses

Next, we asked how suppression of pre-stimulus alpha activity might bias the process of sensory evidence accumulation. One possibility is that alpha suppression influences evidence accumulation by modulating the susceptibility of visual cortex to sensory stimulation, a phenomenon termed ‘neural excitability’ (Iemi et al., 2017; Jensen and Mazaheri, 2010). We explored this possibility using a theoretical response gain model formulated by Rajagovindan and Ding (2011). This model postulates that the relationship between the total synaptic input that a neuronal ensemble receives and the total output activity it produces is characterized by a sigmoidal function (Figure 6A) – a notion that is biologically plausible (Destexhe et al., 2001; Freeman, 1979). In this model, the total synaptic input into visual cortex consists of two components: (1) sensory input (i.e. due to sensory stimulation) and (2) ongoing fluctuations in endogenously generated (i.e. not sensory-related) neural activity. In our experiment, the sensory input into visual cortex can be assumed to be identical across trials, because the same sensory stimulus was presented in each trial (see Figure 2A). The endogenous input, in contrast, is thought to vary from trial to trial, reflecting fluctuations in top-down cognitive processes such as attention. These fluctuations are assumed to be reflected in the strength of alpha power suppression, such that weaker alpha is associated with increased attention (Figure 6B). Given the combined constant sensory and variable endogenous input in each trial (see horizontal axis in Figure 6A), the strength of the output response of visual cortex is largely determined by trial-to-trial variations in alpha power (see vertical axis in Figure 6A).
Furthermore, the sigmoidal shape of the input-output function results in an effective range in the center of the function’s input side, which yields the strongest stimulus-induced output responses because the sigmoid curve is steepest there. Mathematically, the effect of endogenous input on stimulus-induced output responses (see marked interval in Figure 6A) can be expressed as the first-order derivative, or slope, of the sigmoid in Figure 6A, which Rajagovindan and Ding (2011) refer to as the response gain. This derivative is plotted in Figure 6B (blue and red solid lines) across levels of pre-stimulus alpha power, predicting an inverted-U-shaped relationship between alpha and response gain in visual cortex.
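The model's core relationships can be written down directly: the output is a sigmoid of the total input, and the response gain is its first derivative, which peaks at the inflection point and falls off on both sides. A minimal sketch follows; the parameter names s_max, k, and x0 are our own, as the paper does not report specific values.

```python
import math

def sigmoid(x, s_max=1.0, k=1.0, x0=0.0):
    """Sigmoidal input-output function of visual cortex: total input x
    (sensory + endogenous) -> total output. s_max is the saturation
    level (upper asymptote), k the steepness, x0 the inflection point."""
    return s_max / (1.0 + math.exp(-k * (x - x0)))

def gain(x, s_max=1.0, k=1.0, x0=0.0):
    """Response gain = first derivative of the sigmoid. Analytically,
    d/dx sigmoid(x) = k * sig * (1 - sig / s_max), which is largest at
    the inflection point x0 and decays symmetrically (the inverted U)."""
    sig = sigmoid(x, s_max, k, x0)
    return k * sig * (1.0 - sig / s_max)
```

Under this parameterization the peak gain is k * s_max / 4 at x = x0, so raising the upper asymptote s_max (as the model assumes for the liberal condition) raises the peak of the gain curve, exactly the shift from the red to the blue line described for Figure 6.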

Pre-stimulus alpha power is linked to cortical gamma responses.

(A) Theoretical response gain model describing the transformation of stimulus-induced and endogenous input activity (denoted by Sx and SN, respectively) to the total output activity (denoted by O(Sx + SN)) in visual cortex by a sigmoidal function. Different operational alpha ranges are associated with input-output functions with different slopes due to corresponding changes in the total output. (B) Alpha-linked output responses (solid lines) are formalized as the first derivative (slope) of the sigmoidal functions (dotted lines), resulting in inverted-U (Gaussian) shaped relationships between alpha and gamma, involving stronger response gain in the liberal than in the conservative condition. (C) Corresponding empirical data showing gamma modulation (same percent signal change units as in Figure 3) as a function of alpha bin. The location on the x-axis of each alpha bin was taken as the median alpha of the trials assigned to each bin, averaged across subjects. (D–F) Model prediction tests. (D) Raw pre-stimulus alpha power for both conditions, averaged across subjects. (E) Post-stimulus gamma power modulation for both conditions, averaged across the two middle alpha bins (5 and 6) in panel C. (F) Liberal – conservative difference between the response gain curves shown in panel C, centered on alpha bin. Error bars, within-subject SEM across participants (N = 14).

Figure 6—source data 1

SPSS .sav file containing the data used in panels C, E, and F.

Regarding our experimental conditions, the model not only predicts that the suppression of pre-stimulus alpha observed in the liberal condition reflects a shift in the operational range of alpha (see Figure 5B), but also that it increases the maximum output of visual cortex (a shift from the red to the blue line in Figure 6A). Therefore, the difference between stimulus conditions is not modeled using a single input-output function, but necessitates an additional mechanism that changes the input-output relationship itself. The exact nature of this mechanism is not known (also see Discussion). Rajagovindan and Ding suggest that top-down mechanisms modulate ongoing pre-stimulus neural activity to increase the slope of the sigmoidal function, but despite the midfrontal theta activity we observed, evidence for this hypothesis is somewhat elusive. We have no means to establish directly whether this relationship exists, and can merely reflect on the fact that this change in the input-output function is necessary to capture condition-specific effects of the input-output relationship, both in the data of Rajagovindan and Ding (2011) and in our own data. Thus, as the operational range of alpha shifts leftwards from conservative to liberal, the upper asymptote in Figure 6A moves upwards such that the total maximum output activity increases. This in turn affects the inverted-U-shaped relationship between alpha and gain in visual cortex (blue line in Figure 6B), leading to a steeper response curve in the liberal condition resembling a Gaussian (bell-shaped) function.

To investigate sensory response gain across different alpha levels in our data, we used the post-stimulus gamma activity (see Figure 3B) as a proxy for alpha-linked output gain in visual cortex (Bastos et al., 2015; Michalareas et al., 2016; Ni et al., 2016; Popov et al., 2017; van Kerkoerle et al., 2014). We exploited the large number of trials per participant per condition (range 543 to 1391 trials) by sorting each participant’s trials into ten equal-sized bins ranging from weak to strong alpha, separately for the two conditions. We then calculated the average gamma power modulation within each alpha bin and finally plotted the participant-averaged gamma across alpha bins for each condition in Figure 6C (see Materials and methods for details). This indeed revealed an inverted-U-shaped relationship between alpha and gamma in both conditions, with a steeper curve for the liberal condition.
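The binning procedure just described (sort trials by alpha, split into ten equal-sized bins, take the median alpha and mean gamma per bin) can be sketched as follows. The example data are invented, and the handling of leftover trials after integer division is one of several reasonable conventions; the paper's exact convention is not shown.

```python
import statistics

def bin_by_alpha(alpha, gamma, n_bins=10):
    """Sort trials by pre-stimulus alpha power and average the gamma
    modulation within each of n_bins equal-sized bins. Returns the
    median alpha per bin (bin centers) and the mean gamma per bin.
    Trials left over after integer division are dropped."""
    order = sorted(range(len(alpha)), key=lambda i: alpha[i])
    size = len(order) // n_bins
    centers, means = [], []
    for b in range(n_bins):
        idx = order[b * size:(b + 1) * size]
        centers.append(statistics.median(alpha[i] for i in idx))
        means.append(statistics.mean(gamma[i] for i in idx))
    return centers, means

# Example: 100 trials with arbitrary alpha and gamma values.
alpha = [float(i) for i in range(100)]
gamma = [float(i % 7) for i in range(100)]
centers, mean_gamma = bin_by_alpha(alpha, gamma)
```

This is run per participant and condition, after which the bin centers and bin means are averaged across participants, as in Figure 6C.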

To assess the model’s ability to explain the data, we statistically tested three predictions derived from the model. First, the model predicts overall lower average pre-stimulus alpha power for liberal than for conservative due to the shift in the operational range of alpha. This was confirmed in Figure 6D (p=0.01, permutation test; see also Figure 5). Second, the model predicts a stronger gamma response for liberal than for conservative around the peak of the gain curve (the center of the effective alpha range, see Figure 6B), which we indeed observed (p=0.024, permutation test on the average of the middle two alpha bins) (Figure 6E). Finally, the model predicts that the difference between the gain curves (when they are aligned over their effective ranges on the x-axis using alpha bin number, as shown in Figure 6—figure supplement 1A) also resembles a Gaussian curve (Figure 6—figure supplement 1B). Consistent with this prediction, we observed an interaction effect between condition (liberal, conservative) and bin number (1–10) using a standard Gaussian contrast in a two-way repeated measures ANOVA (F(1,13) = 4.6, p=0.051, partial η² = 0.26). Figure 6F illustrates this finding by showing the difference between the two curves in Figure 6C as a function of alpha bin number (see Figure 6—figure supplement 1C for the curves of both conditions as a function of alpha bin number). Subsequent separate tests for each condition indeed confirmed a significant U-shaped relationship between alpha and gamma in the liberal condition with a large effect size (F(1,13) = 7.7, p=0.016, partial η² = 0.37), but no significant effect in the conservative condition with only a small effect size (F(1,13) = 1.7, p=0.22, partial η² = 0.12), using one-way repeated measures ANOVAs with alpha bin (Gaussian contrast) as the factor of interest.
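A Gaussian contrast scores each participant's bin curve against bell-shaped, zero-sum weights over bin number; the per-participant scores then enter the ANOVA. The sketch below illustrates the idea only: the exact contrast weights (center and width of the bell) used in the paper are not reported, so the choices here are our own.

```python
import math

def gaussian_contrast(values, n_bins=10):
    """Score a curve over alpha bins against a Gaussian (bell-shaped)
    planned contrast. Weights follow a bell over bin number and are
    mean-centered so they sum to zero, as required for a contrast.
    A positive score indicates an inverted-U profile over the bins,
    a negative score a U-shaped profile, and zero a flat profile.
    The bell's center (mu) and width (sigma) are illustrative choices."""
    mu = (n_bins - 1) / 2.0
    sigma = n_bins / 4.0
    w = [math.exp(-((b - mu) ** 2) / (2.0 * sigma ** 2)) for b in range(n_bins)]
    mean_w = sum(w) / n_bins
    w = [wi - mean_w for wi in w]  # contrast weights now sum to zero
    return sum(wi * vi for wi, vi in zip(w, values))
```

Because the weights sum to zero, any constant offset between participants or conditions drops out, and only the curvature of the bin profile contributes to the score.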

Taken together, these findings suggest that the alpha suppression observed in the liberal compared to the conservative condition boosted stimulus-induced activity, which in turn might have indiscriminately biased sensory evidence accumulation toward the target-present decision boundary. In the next section, we investigate a direct link between drift bias and stimulus-induced activity as measured through gamma.

Visual cortical gamma activity predicts strength of evidence accumulation bias

The findings presented so far suggest that behaviorally, a liberal decision bias shifts evidence accumulation toward target-present responses (drift bias in the DDM), while neurally it suppresses pre-stimulus alpha and enhances post-stimulus gamma responses. In a final analysis, we asked whether alpha-binned gamma modulation is directly related to a stronger drift bias. To this end, we again applied the drift bias DDM to the behavioral data of each participant, while freeing the drift bias parameter not only for the two conditions, but also for the 10 alpha bins for which we calculated gamma modulation (see Figure 6C). We directly tested the correspondence between DDM drift bias and gamma modulation using repeated measures correlation (Bakdash and Marusich, 2017), which takes all repeated observations across participants into account while controlling for non-independence of observations collected within each participant (see Materials and methods for details). Gamma modulation was indeed correlated with drift bias in both conditions (liberal: r(125) = 0.49, p=5e-09; conservative: r(125) = 0.38, p=9e-06) (Figure 7). We tested the robustness of these correlations by excluding the data points that contributed most to the correlations (as determined with Cook’s distance) and obtained qualitatively similar results, indicating that these correlations were not driven by outliers (Figure 7; see Materials and methods for details). To rule out that starting point could explain this correlation, we repeated this analysis while controlling for the starting point of evidence accumulation estimated per alpha bin within the starting point model. To this end, we regressed both bias parameters on gamma.
Crucially, we found that in both conditions starting point bias did not uniquely predict gamma when controlling for drift bias (liberal: F(1,124) = 5.8, p=0.017 for drift bias, F(1,124) = 0.3, p=0.61 for starting point; conservative: F(1,124) = 8.7, p=0.004 for drift bias, F(1,124) = 0.4, p=0.53 for starting point). This finding suggests that the drift bias model outperforms the starting point model when correlated with gamma power. As a final control, we also performed this analysis for the SSVEP (23–27 Hz) power modulation (see Figure 3B, bottom) and found a similar inverted-U-shaped relationship between alpha and the SSVEP for both conditions (Figure 7—figure supplement 1A), but no correlation with drift bias (liberal: r(125) = 0.11, p=0.72; conservative: r(125) = 0.22, p=0.47) (Figure 7—figure supplement 1B) or with starting point (liberal: r(125) = 0.08, p=0.02; conservative: r(125) = 0.22, p=0.95). This suggests that the SSVEP is coupled to alpha in a similar way as the stimulus-induced gamma, but is less affected by the experimental conditions and not predictive of decision bias shifts. Taken together, these results suggest that alpha-binned gamma modulation underlies biased sensory evidence accumulation.
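The repeated measures correlation used in this analysis can be approximated by centering both variables within each participant and correlating the pooled residuals, which mirrors the residualization described in the Figure 7 caption. The sketch below simplifies the ANCOVA-based method of Bakdash and Marusich (2017): degrees of freedom and p-values are omitted, and the data shown are invented.

```python
import math
from collections import defaultdict

def rm_corr(subjects, x, y):
    """Simplified repeated measures correlation: remove each subject's
    mean from x and y, then compute Pearson's r on the pooled
    within-subject residuals. Between-subject offsets therefore do
    not contribute to the correlation."""
    totals = defaultdict(lambda: [0.0, 0.0, 0])
    for s, xi, yi in zip(subjects, x, y):
        totals[s][0] += xi
        totals[s][1] += yi
        totals[s][2] += 1
    xr, yr = [], []
    for s, xi, yi in zip(subjects, x, y):
        sx, sy, n = totals[s]
        xr.append(xi - sx / n)  # within-subject centered (e.g. gamma)
        yr.append(yi - sy / n)  # within-subject centered (e.g. drift bias)
    cov = sum(a * b for a, b in zip(xr, yr))
    return cov / math.sqrt(sum(a * a for a in xr) * sum(b * b for b in yr))

# Invented data: two subjects with very different offsets but the same
# within-subject relationship; the offsets are removed by centering.
subs = [1] * 5 + [2] * 5
gamma_mod = [0, 1, 2, 3, 4, 10, 11, 12, 13, 14]
drift_bias = [0, 2, 4, 6, 8, 100, 102, 104, 106, 108]
r = rm_corr(subs, gamma_mod, drift_bias)
```

An ordinary correlation on these raw data would be dominated by the between-subject offset; the within-subject centering is what makes the repeated observations per participant usable without violating independence assumptions.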

Alpha-binned gamma modulation correlates with evidence accumulation bias.

Repeated measures correlation between gamma modulation and drift bias for the two conditions. Each circle represents a participant’s gamma modulation within one alpha bin. Drift bias and gamma modulation scalars were residualized by removing the average within each participant and condition, thereby removing the specific range in which each participant's values operated. Crosses indicate data points that were most influential for the correlation, identified using Cook’s distance. Correlations remained qualitatively unchanged when these data points were excluded (liberal: r(120) = 0.46, p=8e-07; conservative: r(121) = 0.27, p=0.0009). Error bars, 95% confidence intervals after averaging across participants.

Figure 7—source data 1

MATLAB .mat file containing the data used.

Finally, we asked to what extent the enhanced tonic midfrontal theta may have mediated the relationship between alpha-binned gamma and drift bias. To answer this question, we entered drift bias into a two-way repeated measures ANOVA with factors theta and gamma power (all variables alpha-binned), but found no evidence for mediation of the gamma-drift bias relationship by midfrontal theta (liberal: F(1,13) = 1.3, p=0.25; conservative: F(1,13) = 0.003, p=0.95). At the same time, the gamma-drift bias relationship was qualitatively unchanged when controlling for theta (liberal: F(1,13) = 48.4, p<0.001; conservative: F(1,13) = 19.3, p<0.001). Thus, the enhanced midfrontal theta in the liberal condition plausibly reflects a top-down, attention-related signal indicating the need for cognitive control to avoid missing targets, but its amplitude seemed not directly linked to enhanced sensory evidence accumulation, as found for gamma. This latter finding suggests that the enhanced theta in the liberal condition served as an alarm signal indicating the need for a shift in response strategy, without specifying exactly how this shift was to be implemented (Cavanagh and Frank, 2014).


Introduction

Color, one of the most powerful aspects of the environment, has been reported to promote human adaptation to the environment and enhance spatial form [1]. Color may change the perceived size and warmth of a room, elicit associations, enhance introversion or extroversion, incite anger or relaxation, and influence physiological responses [2], [3]. Several studies have examined the psychological impact of color: warm colors, for example, provide visual activation and stimulation, whereas cool colors communicate subtlety and relaxation.

The visual environment is a vital element influencing hospital staff morale and productivity; studies have even reported that an enhanced visual environment can improve recovery rates by as much as 10%. These improvements have been attributed to particular elements of the visual environment, including the use of appropriate color in interior design. In hospital design, color can shape people’s perceptions of and responses to the environment and can also affect patient recovery rates, improving the quality of the overall experience of patients, staff, and visitors. Color is also a powerful tool for coding, navigation, and wayfinding, and can promote a sense of well-being and independence.

There are numerous studies in the field of psychology demonstrating the relationship between human behavior and color. However, color studies in the environmental design field are almost non-existent. Nourse and Welch [4] tested Wilson’s [5] finding that red is more arousing than green. Jacobs and Hustmyer [6] found red was significantly more arousing than blue or yellow, and green more arousing than blue. Fehrman and Fehrman [7] reported on the effects of blue, red, and yellow on task performance and arousal for the effective color selection of interior spaces. Kwallek, Lewis, and Robbins [8] examined the effects of a red-colored office versus a blue one on subject productivity and mood. Kwallek and Lewis [9] investigated effects of environmental color on gender using a red, white, and green office; the experiment assessed worker performance in proofreading and mood under the different colored office environments. Weller and Livingston [10] examined the effect of colored paper on emotional responses obtained from questionnaires. Six different questionnaires were designed and compiled in this order: pink-guilty, pink-not guilty; blue-guilty, blue-not guilty; white-guilty, white-not guilty. Boyatzis and Varghese [11] investigated children’s color and emotion associations. They found that children showed positive emotions for bright colors (pink, red, yellow, green, purple, or blue) and negative emotions for dark colors (brown, black, gray). Hemphill [12] examined adults’ color and emotion associations and compared them with the findings of Boyatzis and Varghese [11].

Scientific research means examining a concept through observation in order to test a theory or hypothesis that explains a phenomenon. The presupposed theory and hypotheses are tested by systematic observations to arrive at a general explanation of natural phenomena. Experiments are useful for exploring phenomena because they involve testing hypotheses and investigating cause-and-effect relationships [13]. Experiments are characterized by the manipulation of independent variables and the identification of possible cause-and-effect relationships between an independent variable and a dependent variable. Types of experiments are categorized by the degree of random assignment of subjects to the various conditions: true experiments, quasi-experiments, and single-subject experiments. True experiments require unbiased random assignment of subjects to treatment groups and the researcher’s ability to manipulate independent variables directly. Rigorous experiments are typically done in a laboratory where it is possible to control variables. The major advantage of these experiments is their ability to establish causal relationships; quasi-experiments do not establish causal relationships to the same degree as true experiments, since experiments in the field routinely encounter uncontrollable factors. The advantage of quasi-experiments is their higher generalizability, owing to their naturalness compared with the artificiality of true experiments [14].

Based on previous studies, the consistent trend is that blue and green are the most preferred colors. However, the majority of color preference studies have failed to control confounding variables such as color attributes [15], [16]. A well-controlled color preference study of psychological patients appears to be non-existent. In order to address the limitations found in previous research, this study is the first to use experimental design to provide a foundation for color studies. This research advances the understanding of the value of color in a counseling room by studying psychological patients’ perceptions of color. This knowledge should facilitate improvements in the design of hospitals. The main purposes of this study are: 1) to propose an experiment-based color design research approach, and 2) to develop an optimized solution for the use of color in design.


Molly Crockett: "The Neuroscience of Moral Decision Making"

Imagine we could develop a precise drug that amplifies people's aversion to harming others. On this drug you won't hurt a fly; everyone taking it becomes like a Buddhist monk. Who should take this drug? Only convicted criminals—people who have committed violent crimes? Should we put it in the water supply? These are normative questions. These are questions about what should be done. I feel grossly unprepared to answer these questions with the training that I have, but these are important conversations to have between disciplines. Psychologists and neuroscientists need to be talking to philosophers about this. These are conversations that we need to have because we don't want to get to the point where we have the technology but haven't had this conversation, because then terrible things could happen.

MOLLY CROCKETT is an associate professor in the Department of Experimental Psychology, University of Oxford, and a Wellcome Trust Postdoctoral Fellow at the Wellcome Trust Centre for Neuroimaging.

THE NEUROSCIENCE OF MORAL DECISION MAKING

I'm a neuroscientist at the University of Oxford in the UK. I'm interested in decision making, specifically decisions that involve tradeoffs: for example, tradeoffs between my own self-interest and the interests of other people, or tradeoffs between my present desires and my future goals.

One case study for this is moral decision making. When we can see that there's a selfish option and we can see that there's an altruistic or a cooperative option, we can reason our way through the decision, but there are also gut feelings about what's right and what's wrong. I've studied the neurobiology of moral decision making, specifically how different chemicals in our brains—neuromodulators—can shape the process of making moral decisions and push us one way or another when we're reasoning and deciding.

Neuromodulators are chemicals in the brain. There are a bunch of different neuromodulator systems that serve different functions. Events out in the world activate these systems and then they perfuse into different regions of the brain and influence the way that information is processed in those regions. All of you have experience with neuromodulators. Some of you are drinking cups of coffee right now. Many of you probably had wine with dinner last night. Maybe some of you have other experiences that are a little more interesting.

But you don't need to take drugs or alcohol to influence your neurochemistry. You can also influence your neurochemistry through natural events: Stress influences your neurochemistry, sex, exercise, changing your diet. There are all these things out in the world that feed into our brains through these chemical systems. I've become interested in studying if we change these chemicals in the lab, can we cause changes in people's behavior and their decision making?

One thing to keep in mind about the effects of these different chemicals on our behavior is that the effects are subtle. The effect sizes are really small. This has two consequences for doing research in this area. The first is that because the effect sizes are so small, the published literature is likely to be underpowered. There are probably a lot of false positives out there. We heard earlier that there is a lot of thought about this, not just in psychology but in all of science, about how we can run better-powered experiments and create datasets that will tell us what's going on.

The other thing—and this is what I've been interested in—is because the effects of neuromodulators are so subtle, we need precise measures in the lab of the behaviors and decision processes that we're interested in. It's only with precise measures that we're going to be able to pick up these subtle effects of brain chemistry, which maybe at the individual level aren't going to make a dramatic difference in someone's personality, but at the aggregate level, in collective behaviors like cooperation and public goods problems, these might become important on a global scale.

How can we measure moral decision making in the lab in a precise way, and also in a way that we can agree is actually moral? This is an important point. One big challenge in this area is there's a lot of disagreement about what constitutes a moral behavior. What is moral? We heard earlier about cooperation—maybe some people think that's a moral decision but maybe other people don't. That's a real issue for getting people to cooperate.

First we have to pick a behavior that we can all agree is moral, and secondly we need to measure it in a way that tells us something about the mechanism. We want to have these rich sets of data that tell us about these different moving parts—these different pieces of the puzzle—and then we can see how they map onto different parts of the brain and different chemical systems.

What I'm going to do over the next 20 minutes is take you through my thought process over the past several years. I tried a bunch of different ways of measuring the effects of neurochemistry on what at one point I think is moral decision making, but then turns out maybe is not the best way to measure morality. And I'll show you how I tried to zoom in on more advanced and sophisticated ways of measuring the cognitions and emotions that we care about in this context.

When I started this work several years ago, I was interested in punishment and economic games that you can use to measure punishment—if someone treats you unfairly then you can spend a bit of money to take money away from them. I was interested specifically in the effects of a brain chemical called serotonin on punishment. The issues that I'll talk about here aren't specific to serotonin but apply to this bigger question of how can we change moral decision making.

When I started this work the prevailing view about punishment was that punishment was a moral behavior—a moralistic or altruistic punishment where you're suffering a cost to enforce a social norm for the greater good. It turned out that serotonin was an interesting chemical to be studying in this context because serotonin has this long tradition of being associated with prosocial behavior. If you boost serotonin function, this makes people more prosocial. If you deplete or impair serotonin function, this makes people antisocial. If you go by the logic that punishment is a moral thing to do, then if you enhance serotonin, that should increase punishment. What we actually see in the lab is the opposite effect. If you increase serotonin people punish less, and if you decrease serotonin people punish more.

That throws a bit of a spanner in the works of the idea that punishment is this exclusively prosocially minded act. And this makes sense if you just introspect into the kinds of motivations that you go through if someone treats you unfairly and you punish them. I don't know about you, but when that happens to me I'm not thinking about enforcing a social norm or the greater good; I just want that guy to suffer, I just want him to feel bad because he made me feel bad.

The neurochemistry adds an interesting layer to this bigger question of whether punishment is prosocially motivated, because in some ways it's a more objective way to look at it. Serotonin doesn't have a research agenda; it's just a chemical. We had all this data and we started thinking differently about the motivations of so-called altruistic punishment. That inspired a purely behavioral study where we give people the opportunity to punish those who behave unfairly towards them, but we do it in two conditions. One is a standard case where someone behaves unfairly to someone else and then that person can punish them. Everyone has full information, and the guy who's unfair knows that he's being punished.

Then we added another condition, where we give people the opportunity to punish in secret: hidden punishment. You can punish someone without them knowing that they've been punished. They still suffer a loss financially, but because we obscure the size of the stake, the guy who's being punished doesn't know he's being punished. The punisher gets the satisfaction of knowing that the bad guy is getting less money, but there's no social norm being enforced.

What we find is that people still punish a lot in the hidden punishment condition. Even though people will punish a little bit more when they know the guy who's being punished will know that he's being punished—people do care about norm enforcement—a lot of punishment behavior can be explained by a desire for the norm violator to have a lower payoff in the end. This suggests that punishment is potentially a bad way to study morality because the motivations behind punishment are, in large part, spiteful.

Another set of methods that we've used to look at morality in the lab and how it's shaped by neurochemistry is trolley problems—the bread and butter of moral psychology research. These are hypothetical scenarios where people are asked whether it's morally acceptable to harm one person in order to save many others.

We do find effects of neuromodulators on these scenarios and they're very interesting in their own right. But I've found this tool unsatisfying for the question that I'm interested in, which is: How do people make moral decisions with real consequences in real time, rather than in some hypothetical situation? I'm equally unsatisfied with economic games as a tool for studying moral decision making because it's not clear that there's a salient moral norm in something like cooperation in a public goods game, or charitable giving in a dictator game. It's not clear that people feel guilty if they choose the selfish option in these cases.

After all this I've gone back to the drawing board and thought about what the essence of morality is. There's been some work on this in recent years. One wonderful paper by Kurt Gray, Liane Young, and Adam Waytz argues that the essence of morality is harm, specifically intentional interpersonal harm—an agent harming a patient. Of course morality is more than this; absolutely, morality is more than this. But it will be hard to find a moral code that doesn't include some prohibition against harming someone else unless you have a good reason.

What I wanted to do was create a measure in the lab that can precisely quantify how much people dislike causing interpersonal harms. What we came up with was getting people to make tradeoffs between personal profits—money—and pain in the form of electric shocks that are given to another person.

What we can do with this method is calculate, in monetary terms, how much people dislike harming others. And we can fit computational models to their decision process that give us a rich picture of how people make these decisions -- not just how much harm they're willing to deliver or not -- but what is the precise value they place on the harm of others relative to, for example, harm to themselves? What is the relative certainty or uncertainty with which they're making those decisions? How noisy are their choices? If we're dealing with monetary gains or losses, how does loss aversion factor into this?
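A model of this kind can be sketched as a weighted value comparison passed through a logistic (softmax) choice rule. The parameter names below (`kappa` for harm aversion, `beta` for inverse temperature) follow common conventions in this literature, but the exact parameterization is an illustrative assumption, not necessarily the fitted model the speaker used:

```python
import math

def p_choose_harmful(d_money, d_shocks, kappa, beta):
    """Probability of picking the more profitable but more harmful option.

    d_money  : extra money earned by the harmful option
    d_shocks : extra shocks delivered to the other person
    kappa    : harm-aversion weight in [0, 1] -- 0 means 'just show me
               the money', 1 means only the other person's pain matters
    beta     : inverse temperature -- low beta means noisy, wavering choices
    """
    # Subjective value difference: profit and harm, weighted by kappa
    dv = (1 - kappa) * d_money - kappa * d_shocks
    # Logistic (softmax) choice rule turns value into choice probability
    return 1.0 / (1.0 + math.exp(-beta * dv))

# A harm-averse chooser mostly refuses an even money-for-shocks tradeoff;
# a profit-maximizer mostly accepts it.
print(round(p_choose_harmful(1.0, 1.0, 0.8, 5.0), 3))  # -> 0.047
print(round(p_choose_harmful(1.0, 1.0, 0.1, 5.0), 3))  # -> 0.982
```

Fitting `kappa` and `beta` to a participant's observed choices (for example, by maximum likelihood) yields exactly the kind of per-person summary described above: how much they weigh another's pain against their own profit, and how noisy their choices are.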

We can get a more detailed picture of the data and of the decision process by using methods like these, which are largely inspired by work on non-social decision making and computational neuroscience, where a lot of progress has been made in recent years. For example, in foraging environments how do people decide whether to go left or right when there are fluctuating reward contingencies in the environment?

What we're doing is importing those methods to the study of moral decision making and a lot of interesting stuff has come out of it. As you might expect there is individual variation in decision making in this setting. Some people care about avoiding harm to others and other people are like, "Just show me the money, I don't care about the other person." I even had one subject who was almost certainly high on the psychopathy scale. When I explained to him what he had to do he said, "Wait, you're going to pay me to shock people? This is the best experiment ever!" Whereas other people are uncomfortable and are even distressed by this. This is capturing something real about moral decision making.

One thing that we're seeing in the data is that people who seem to be more averse to harming others are slower when they're making their decisions. This is an interesting contrast to Dave's work where the more prosocial people are faster. Of course there are issues that we need to work out about correlation versus causation in response times and decision making, but there are some questions here in thinking about the differences between a harm context and a helping context. It may be that the heuristics that play out in a helping context come from learning about what is good and latch onto neurobiological systems that approach rewards and get invigorated when there are rewards around, in contrast to neurobiological systems that avoid punishments and slow down or freeze when there are punishments around.

In the context of tradeoffs between profit for myself and pain for someone else, it makes sense that people who are maximizing the profit for themselves are going to be faster because if you're considering the harm to someone else, that's an extra computational step you have to take. If you're going to factor in someone else's suffering—the negative externality of your decisions—you have to do that computation and that's going to take a little time.

In this broader question of the time course of moral decision making, there might be a sweet spot where on the one hand you have an established heuristic of helping that's going to make you faster, but at the same time considering others is also a step that requires some extra processing. This makes sense.

When I was developing this work in London I was walking down the street one day checking my phone, as we all do, and this kid on a bike in a hoodie came by and tried to steal my phone. He luckily didn't get it, it just crashed to the floor -- he was an incompetent thief. In thinking about what his thought process was during that time, he wasn't thinking about me at all. He had his eye on the prize. He had his eye on the phone, he was thinking about his reward. He wasn't thinking about the suffering that I would feel if I lost my phone. That's a broader question to think about in terms of the input of mentalizing to moral decision making.

Another observation is that people who are nicer in this setting seem to be more uncertain in their decision making. If you look at the parameters that describe uncertainty, you can see that people who are nicer seem to be more noisy around their indifference point. They waver more in these difficult decisions.
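One common way to formalize this wavering is an inverse-temperature parameter in a logistic choice rule. This toy calculation (numbers invented for illustration) shows why a lower inverse temperature keeps choice probabilities near 50/50 close to the indifference point:

```python
import math

def p_choose(dv, beta):
    # Logistic choice rule: dv is the subjective value difference between
    # the two options, beta the inverse temperature (decisiveness)
    return 1.0 / (1.0 + math.exp(-beta * dv))

# Near indifference (a small value difference), a noisy decision maker
# (low beta) stays close to 50/50 -- i.e., wavers -- while a decisive
# one (high beta) commits almost fully to the slightly better option.
dv = 0.2
print(p_choose(dv, beta=1.0))   # ~0.55: close to chance, much wavering
print(p_choose(dv, beta=20.0))  # ~0.98: nearly deterministic
```

On this reading, "nicer people are noisier" corresponds to a lower `beta` around the indifference point, which is the kind of parameter the computational models mentioned earlier can estimate per participant.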

So I've been thinking about uncertainty and its relationship to altruism and social decision making, more generally. One potentially fruitful line of thought is that social decisions—decisions that affect other people—always have this inherent element of uncertainty. Even if we're a good mentalizer, even if we're the best possible mentalizer, we're never going to fully know what it is like to be someone else and how another person is going to experience the effects of our actions on them.

One thing that it might make sense to do if we want to co-exist peacefully with others is to simulate how our behavior is going to affect others, but err on the side of caution. We don't want to impose an unbearable cost on someone else so we think, "Well, I might dislike this outcome a certain amount but maybe my interaction partner is going to dislike it a little more so I'm just going to add a little extra safety—a margin of error—that's going to move me in the prosocial direction." We're seeing this in the context of pain but this could apply to any cost—risk or time cost.

Imagine that you have a friend who is trying to decide between two medical procedures. One procedure produces the most desirable outcome, but it also has a high complication rate or a high mortality rate. Another procedure doesn't achieve as good an outcome but it's much safer. Suppose your friend says to you, "I want you to choose which procedure I'm going to have. I want you to choose for me." First of all, most of us would be very uncomfortable making that decision for someone else. Second, my intuition is that I would definitely go for the safer option because if something bad happened in the risky decision, I would feel terrible.

This idea that we can't directly access someone else's utility function is a rather old idea; it goes back to the 1950s with the work of John Harsanyi, who did some work on what he called interpersonal utility comparisons. How do you compare one person's utility to another person's utility? This problem is important, particularly in utilitarian ethics, because if you want to maximize the greatest good for the greatest number, you have to have some way of measuring the greatest good for each of those numbers.

The challenge of doing this was recognized by the father of utilitarianism, Jeremy Bentham, who said, "'Tis vain to talk of adding quantities which after the addition will continue to be as distinct as they were before; one man's happiness will never be another man's happiness: a gain to one man is no gain to another: you might as well pretend to add 20 apples to 20 pears."

This problem has still not been solved. Harsanyi has done a lot of great work on this but what he ended up with—his final solution—was still an approximation that assumes that people have perfect empathy, which we know is not the case. There's still room in this area for exploration.

The other thing about uncertainty is that, on the one hand, it could lead us towards prosocial behavior, but on the other hand there's evidence that uncertainty about outcomes, and about how other people react to those outcomes, can license selfish behavior. Uncertainty can also be exploited for personal gain and self-serving interests.

Imagine you're the CEO of a company. You're trying to decide whether to lay off some workers in order to increase shareholder value. If you want to do the cost benefit analysis, you have to calculate what's the negative utility for the workers of losing their jobs and how does that compare to the positive utility of the shareholders for getting these profits? Because you can't directly access how the workers are going to feel, and how the shareholders are going to feel, there's space for self-interest to creep in, particularly if there are personal incentives to push you one direction or the other.

There's some nice work that has been done on this by Roberto Weber and Jason Dana who have shown that if you put people in situations where outcomes are ambiguous, people will use this to their advantage to make the selfish decision but still preserve their self-image as being a moral person. This is going to be an important question to address. When does uncertainty lead to prosocial behavior because we don't want to impose an unbearable cost on someone else? And when does it lead to selfish behavior because we can convince ourselves that it's not going to be that bad?

These are things we want to be able to measure in the lab and to map different brain processes—different neurochemical systems—onto these different parameters that all feed into decisions. We're going to see progress over the next several years because in non-social computational neuroscience there are smart people who are mapping how basic decisions work. All people like me have to do is import those methods to studying more complex social decisions. There's going to be a lot of low-hanging fruit in this area over the next few years.

Once we figure out how all this works—and I do think it's going to be a while—I've sometimes been misquoted as saying that morality pills are just around the corner, and I assure you that this is not the case. It's going to be a very long time before we're able to intervene in moral behavior and that day may never even come. The reason why this is such a complicated problem is that working out how the brain does this is the easy part. The hard part is what to do with that. This is a philosophical question. If we figure out how all the moving parts work, then the question is should we intervene and, if so, how should we intervene?

Imagine we could develop a precise drug that amplifies people's aversion to harming others. On this drug you won't hurt a fly; everyone taking it becomes like a Buddhist monk. Who should take this drug? Only convicted criminals—people who have committed violent crimes? Should we put it in the water supply? These are normative questions. These are questions about what should be done. I feel grossly unprepared to answer these questions with the training that I have, but these are important conversations to have between disciplines. Psychologists and neuroscientists need to be talking to philosophers about this. These are conversations that we need to have because we don't want to get to the point where we have the technology but haven't had this conversation, because then terrible things could happen.

The last thing that I'll say is it's also interesting to think about the implications of this work, the fact that we can shift around people's morals by giving them drugs. What are the implications of this data for our understanding of what morality is?

There's increasing evidence now that if you give people testosterone or influence their serotonin or oxytocin, this is going to shift the way they make moral decisions. Not in a dramatic way, but in a subtle yet significant way. And because the levels and function of our neuromodulators are changing all the time in response to events in our environment, that means that external circumstances can play a role in what you think is right and what you think is wrong.

Many people may find this to be deeply uncomfortable because we like to think of our morals as being core to who we are and one of the most stable things about us. We like to think of them as being written in stone. If this is not the case, then what are the implications for our understanding of who we are and what we should think about in terms of enforcing norms in society? Maybe you might think the solution is we should just try to make our moral judgments from a neutral stance, like the placebo condition of life. That doesn't exist. Our brain chemistry is shifting all the time so it's this very unsteady ground that we can't find our footing on.

At the end of the day that's how I try to avoid being an arrogant scientist who's like, "I can measure morality in the lab." I have deep respect for the instability of these things and these are conversations that I find deeply fascinating.

THE REALITY CLUB

L.A. PAUL: I had a question about how you want to think about these philosophical issues. Sometimes they get described as autonomy. You said if we could discover some chemical that would improve people’s moral capacities, do we put it in the water? The question I have is a little bit related to imaginability. In other words, the guy who tried to steal your phone. The thought was: If he were somehow better able to imagine how I would respond, he would somehow make maybe a better moral judgment. There’s an interesting normative versus descriptive question there because on the one hand, it might be easier to justify putting the drug in the water if it made people better at grasping true moral facts.

What if it just made them better at imagining various scenarios so that they acted in a morally better way, but in fact it had no connection at all to reality, it just made their behavior better. It seems like it’s important to make that distinction even with the work that you’re doing. Namely, are you focusing on how people actually act or are you focusing on the psychological facts? Which one are we prioritizing and which one are we using to justify whatever kinds of policy implications?

CROCKETT: This goes back to the question of do we want to be psychologists or economists if we're confronted with a worldly, all-powerful being. I am falling squarely in the psychologist camp in that it's so important to understand the motivations behind why people do the things they do -- because if you change the context, then people might behave differently. If you're just observing behavior and you don't know why that behavior occurs, then you could make incorrect predictions.

Back to your question, one thought that pops up is it's potentially less controversial to enhance capabilities that people think about as giving them more competence in the world.

PAUL: There's interesting work on organ donors in particular. When people are recruiting possible organ donors and they’re looking at the families who have to make the decision, it turns out that you get better results by encouraging the families of potential donors, say of a daughter killed in a car accident, to imagine that the recipient of the organ will be 17 and also loves horses. It could just be some dude with a drug problem who’s going to get the organ, but the measured results of the donating family are much better if that family engages in this fictitious imagining even though it has no connection at all to the truth. It's not always simple. In other words, the moral questions sometimes come apart from the desired empirical result.

CROCKETT: One way that psychologists and neuroscientists can contribute to this discussion is to be as specific and precise as possible in understanding how to shape motivation versus how to shape choices. I don't have a good answer about the right thing to do in this case, but I agree that it is an important question.

DAVID PIZARRO: I have a good answer. This theme was something that was emerging at the end with Dave’s talk about promoting behavior versus understanding the mechanisms. There is—even if you are a psychologist and you have an interest in this—a way in which, in the mechanisms, you could say, "I’m going to take B.F. Skinner’s learning approach and say what I care about is essentially the frequency of the behavior. What are the things that I have to do to promote the behavior that I want to promote?"

You can get these nice, manipulated contingencies in the environment between reward and punishment. Does reward work better than punishment?

I want to propose that we have two very good intuitions. One, which should be discarded when we’re being social scientists, is: what do we want our kids to be like? I want my kid to be good for the right reasons. In other words, I want her to develop a character that I can be proud of and that she can be proud of. I want her to donate to charity not because she’s afraid that if she doesn’t people will judge her poorly but because she genuinely cares about other people.

When I’m looking at society, and at the implications that more and more of our work might have for society, we should set aside those concerns. That is, we should be comfortable saying that there is one question about what the right reasons are and what the right motivations are in a moral sense. There’s another question that we should ask from a public policy perspective: what will maximize the welfare of my society? I don’t give a rat’s ass why people are doing it!

It shouldn't make a difference if you’re doing it because you’re ashamed (like Jennifer might be talking about later): "I want to sign up for the energy program because I will get mocked by my peers," or if you’re doing it because you realize this is a calling that God gave to you—to insert this little temperature reducer during California summers. That "by any means necessary" approach that seems so inhuman to us as individuals is a perfectly appropriate strategy to use when we’re making decisions for the public.

CROCKETT: Yes, that makes sense and it's a satisficing approach rather than a maximizing approach. One reason why we care about the first intuition so much is because in the context in which we evolved, which was small group interactions, someone who does a good thing for the right reasons is going to be more reliable and more trustworthy over time than someone who does it for an externally incentivized reason.

PIZARRO: And it may not be true, right? It may turn out to be wrong.

DAVID RAND: That's right, but I think it’s still true that it’s not just about when you were in a small group—hunter-gatherer—but in general: if you believe something for the right reason, then you’ll do it even if no one is watching. That creates a more socially optimal outcome than if you only do it when someone is watching.

PIZARRO: It’s an empirical question though. I don't know if it’s been answered. For instance, the fear of punishment.

RAND: We have data, of a flavor. If you look at people that cooperate in repeated prisoner’s dilemmas, they’re no more or less likely to cooperate in one-shot games, and they’re no more likely to give in a dictator game. When the rule is in place, everybody cooperates regardless of whether they’re selfish or not. When no incentive is there, selfish people go back to being selfish.

SARAH-JAYNE BLAKEMORE: There’s also data from newsagents in the UK, where sometimes you can take a newspaper and put money in the slot: if you put a couple of eyes above the money slot, people are more likely to pay their dues than if you don’t put any eyes there.

PIZARRO: That’s certainly not acting for the right reason. That can’t be the right reason.

RAND: You were bringing up the issue of thinking about the consequences for yourself versus the other person. When we’re thinking about how these decisions get made, there are two stages that are distinct but get lumped together a lot conceptually and measurement-wise. You have to understand what the options are, and then once you know what the options are, you have to choose which one you prefer. It seems to me that automatic versus deliberative processing has opposite roles in those two domains. Obviously to understand the problem you have to think about it. If you’re selfish, you don’t need to spend time to think about the decision because it’s obvious what to do. We try to separate those things by explaining the decision beforehand when you’re not constrained. Then when it comes time to make the decision, you put people under time pressure.

CROCKETT: That can explain what's going on and that's a good point because these ideas about uncertainty and moral wiggle room, those are going to play the biggest role in the first part—in the construing of the problem. Is this a moral decision or is this not a moral decision? Potentially also playing the biggest role is this idea you were talking about earlier about how do people internalize what is the right thing to do? How do you establish that this is the right thing to do?

We should talk more about this because, methodologically, this is important to separate out.

HUGO MERCIER: Can I say something about this issue of mentalizing? You're right in drawing attention to the importance of mentalizing in making moral decisions or moral judgments. It seems that the data indicates that we’re not very good at it, that we have biases and we tend to not be very good when we think about what might have caused other people’s behavior.

The reason is that in everyday life, as contrasted with many experimental settings, we can talk to people. If you do something that I think is bad, we know from data about how people explain themselves that spontaneously you’re going to tell me why you did this and you’re going to try to justify yourself. I don’t have to do the work of trying to figure out why you did this, or what kind of excuse you might have had, because you’re going to do it for me. Then we set up these experiments in which you don’t have this feedback and it’s just weird. It's not irrelevant, because there are many situations in which that happens as well, but we still have to keep in mind that it is unnatural. In most of these games and most of these experiments, if you could just let people talk, they would find a good solution. With the shocks, if people could talk with each other, you could say, "Well, I’m happy to take the shock if you want to share the money." Again, I’m not saying it's not interesting to do the experiments at all, but we have to keep in mind that it’s kind of weird.

CROCKETT: That's true to a certain extent. A lot of moral decisions, particularly in the cooperation domain out in the real world, do usually involve some sort of communication. Increasingly, however, a lot of moral decisions are individual in the sense that they involve someone who's not there. If you're deciding whether to buy a product that is fair trade or not, or if you're a politician making a decision about a health policy, you're affecting hundreds, thousands, even millions of people who are not there. Some of the most wide-reaching moral decisions are made by an individual who does not see those who are going to bear the consequences of that decision. It's important to study both.

MERCIER: Maybe by realizing that the context in which these mechanisms of mentalizing evolved was one in which you had a huge amount of feedback can help us to better understand what happens when we don’t have this feedback.

CROCKETT: Maybe the reason we see selfish behavior is that we're used to having an opportunity to justify it, and now there are many cases in which you don't have to justify it.

FIERY CUSHMAN: One of the things that’s unique and cool about your research is the focus on neuromodulators, whereas most research on how the brain processes morality has been on neural computation. Obviously, those things are inter-related. I guess I’ve always been, I don't know if confused is the right word, about what neuromodulators are for. It seems like neural computation can be incredibly precise. You can get a Seurat or a Vermeer out of neural computation, whereas neuromodulators give you Rothkos and Pollocks.

Why does the brain have such blunt tools? How does thinking about neuromodulators in particular as a very blunt tool, but also a very wide-ranging one, inform your thinking about their role in moral judgment as opposed, again, to neural computation?

CROCKETT: It's important to distinguish between the tools we have as researchers for manipulating neuromodulators, which are incredibly blunt, and the way that these systems work in the brain, which is extremely precise. The serotonin system, for example, has at least 17 different kinds of receptors. Those receptors do different things and they're distributed differentially in the brain. Some types of receptors are only found subcortically and other receptors have their highest concentration in the medial prefrontal cortex. There's a high degree of precision in how these chemicals can influence brain processing in more local circuits.

To answer the first part of your question, these systems exist because cognition is not a one-size-fits-all kind of program. Sometimes you want to be more focused on local details to the exclusion of the bigger picture. Other times you want to be able to look at the bigger picture to the exclusion of small details. Whether you want to be processing in one way or the other is going to depend profoundly on the environmental context.

If you're in a very stressful situation, you want to be focusing your attention on how to get out of that situation. You don't want to be thinking about what you're going to have for breakfast tomorrow. Conversely if things are chilled out, that's the time when you can engage in long-term planning. There's evidence that things like stress, environmental events, events that have some important consequence for the survival of the organism are going to activate these systems which then shape cognition in such a way that's adaptive. That's the way that I think about neuromodulators.

Serotonin is interesting in this context because it's one of the least well understood in terms of how this works. The stress example that I was talking about, noradrenaline and cortisol and those neuromodulators are understood fairly well. Noradrenaline is stimulated by stress and it increases the signal to noise ratio in the prefrontal cortex and it focuses your attention.

Serotonin does tons of different things, but it is one of the very few, if not the only, major neuromodulators that can only be synthesized with continual nutritional input. You make serotonin from tryptophan, which is an amino acid that you can only get from the diet. You can only get it from eating foods that have tryptophan, which is most foods, but especially high-protein foods. If you're in a famine, you're not going to be making as much serotonin.

This is interesting in an evolutionary context because when does it make sense to cooperate and care about the welfare of your fellow beings? When resources are abundant, then that's when you should be building relationships. When resources are scarce, maybe you want to be looking out for yourself, although there are some interesting wrinkles in there that Dave and I have talked about before where there could be an inverted U-shaped function where cooperation is critical in times of stress.

Perhaps one function of serotonin is to shape our social preferences in such a way that's adaptive to the current environmental context.


Introduction To Robust Design (Taguchi Method)

The Robust Design method, also called the Taguchi Method and pioneered by Dr. Genichi Taguchi, greatly improves engineering productivity. By consciously considering the noise factors (environmental variation during the product’s usage, manufacturing variation, and component deterioration) and the cost of failure in the field, the Robust Design method helps ensure customer satisfaction. Robust Design focuses on improving the fundamental function of the product or process, thus facilitating flexible designs and concurrent engineering. Indeed, it is the most powerful method available to reduce product cost, improve quality, and simultaneously reduce development interval.

1. Why Use Robust Design Method?
Over the last five years many leading companies have invested heavily in the Six Sigma approach aimed at reducing waste during manufacturing and operations. These efforts have had great impact on the cost structure and hence on the bottom line of those companies. Many of them have reached the maximum potential of the traditional Six Sigma approach. What would be the engine for the next wave of productivity improvement?

Brenda Reichelderfer of ITT Industries reported on their benchmarking survey of many leading companies: “design directly influences more than 70% of the product life cycle cost; companies with high product development effectiveness have earnings three times the average earnings; and companies with high product development effectiveness have revenue growth two times the average revenue growth.” She also observed, “40% of product development costs are wasted!”

These and similar observations by other leading companies are compelling them to adopt improved product development processes under the banner Design for Six Sigma. The Design for Six Sigma approach is focused on 1) increasing engineering productivity so that new products can be developed rapidly and at low cost, and 2) value based management.

Robust Design method is central to improving engineering productivity. Pioneered by Dr. Genichi Taguchi after the end of the Second World War, the method has evolved over the last five decades. Many companies around the world have saved hundreds of millions of dollars by using the method in diverse industries: automobiles, xerography, telecommunications, electronics, software, etc.

1.1. Typical Problems Addressed By Robust Design
A team of engineers was working on the design of a radio receiver for ground-to-aircraft communication requiring high reliability, i.e., a low bit error rate, for data transmission. On the one hand, building a series of prototypes to sequentially eliminate problems would be prohibitively expensive. On the other hand, the computer simulation effort for evaluating a single design was also time-consuming and expensive. How, then, can one speed up development and yet assure reliability?

In another project, a manufacturer had introduced a high-speed copy machine to the field, only to find that the paper feeder jammed almost ten times more frequently than planned. The traditional method for evaluating the reliability of a single new design idea used to take several weeks. How can the company conduct the needed research in a short time and come up with a design that would not embarrass the company again in the field?

The Robust Design method has helped reduce the development time and cost by a factor of two or better in many such problems.

In general, engineering decisions involved in product/system development can be classified into two categories:

  • Error-free implementation of the past collective knowledge and experience
  • Generation of new design information, often for improving product quality/reliability, performance, and cost.

While CAD/CAE tools are effective for implementing past knowledge, the Robust Design method greatly improves productivity in the generation of new knowledge by acting as an amplifier of engineering skills. With Robust Design, a company can rapidly achieve the full technological potential of its design ideas and achieve higher profits.

2. Robustness Strategy

Variation reduction is universally recognized as a key to reliability and productivity improvement. There are many approaches to reducing the variability, each one having its place in the product development cycle.

By addressing variation reduction at a particular stage in a product’s life cycle, one can prevent failures in the downstream stages. The Six Sigma approach has made tremendous gains in cost reduction by finding problems that occur in manufacturing or white-collar operations and fixing the immediate causes. The robustness strategy is to prevent problems through optimizing product designs and manufacturing process designs.

The manufacturer of a differential op-amplifier used in coin telephones faced the problem of excessive offset voltage due to manufacturing variability. High offset voltage caused poor voice quality, especially for phones farther away from the central office. How, then, can field problems and the associated cost be minimized? There are many approaches:

  1. Compensate the customers for their losses.
  2. Screen out circuits having large offset voltage at the end of the production line.
  3. Institute tighter tolerances through process control on the manufacturing line.
  4. Change the nominal values of critical circuit parameters such that the circuit’s function becomes insensitive to the cause, namely, manufacturing variation.

Approach 4 is the robustness strategy. As one moves from approach 1 to approach 4, one progressively moves upstream in the product delivery cycle and also becomes more efficient in cost control. Hence it is preferable to address the problem as far upstream as possible. The robustness strategy provides the crucial methodology for systematically arriving at solutions that make designs less sensitive to various causes of variation. It can be used for optimizing product design as well as manufacturing process design.

The Robustness Strategy uses five primary tools:

  1. P-Diagram is used to classify the variables associated with the product into noise, control, signal (input), and response (output) factors.
  2. Ideal Function is used to mathematically specify the ideal form of the signal-response relationship as embodied by the design concept for making the higher-level system work perfectly.
  3. Quadratic Loss Function (also known as Quality Loss Function) is used to quantify the loss incurred by the user due to deviation from target performance.
  4. Signal-to-Noise Ratio is used for predicting the field quality through laboratory experiments.
  5. Orthogonal Arrays are used for gathering dependable information about control factors (design parameters) with a small number of experiments.
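As a concrete illustration of the last tool (the array and the check below are my own sketch, not from the original text), the smallest two-level orthogonal array, L4, studies three two-level control factors in only four runs; orthogonality means every pair of columns contains each combination of levels equally often:

```python
from itertools import combinations

# Hypothetical illustration: the L4 (2^3) orthogonal array.
# Rows are experimental runs; columns are two-level control factors A, B, C.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

def is_orthogonal(array):
    """Check that every pair of columns contains each of the four
    level combinations (1,1), (1,2), (2,1), (2,2) exactly once."""
    n_cols = len(array[0])
    for i, j in combinations(range(n_cols), 2):
        pairs = [(row[i], row[j]) for row in array]
        if sorted(pairs) != [(1, 1), (1, 2), (2, 1), (2, 2)]:
            return False
    return True

print(is_orthogonal(L4))  # True
```

Because the columns are balanced in this way, the effect of each factor can be estimated independently of the others from just these four runs, instead of the eight runs a full factorial would need.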

2.1 P-Diagram

P-Diagram is a must for every development project. It is a way of succinctly defining the development scope. First we identify the signal (input) and response (output) associated with the design concept. For example, in designing the cooling system for a room the thermostat setting is the signal and the resulting room temperature is the response.

Next consider the parameters/factors that are beyond the control of the designer. Those factors are called noise factors. Outside temperature, opening/closing of windows, and number of occupants are examples of noise factors. Parameters that can be specified by the designer are called control factors. The number of registers, their locations, size of the air conditioning unit, insulation are examples of control factors.

Ideally, the resulting room temperature should be equal to the set point temperature. Thus the ideal function here is a straight line of slope one in the signal-response graph. This relationship must hold for all operating conditions. However, the noise factors cause the relationship to deviate from the ideal.

The job of the designer is to select appropriate control factors and their settings so that the deviation from the ideal is minimum at a low cost. Such a design is called a minimum sensitivity design or a robust design. It can be achieved by exploiting nonlinearity of the products/systems. The Robust Design method prescribes a systematic procedure for minimizing design sensitivity and it is called Parameter Design.

An overwhelming majority of product failures and the resulting field costs and design iterations come from ignoring noise factors during the early design stages. The noise factors crop up one by one as surprises in the subsequent product delivery stages causing costly failures and band-aids. These problems are avoided in the Robust Design method by subjecting the design ideas to noise factors through parameter design.

The next step is to specify allowed deviation of the parameters from the nominal values. It involves balancing the added cost of tighter tolerances against the benefits to the customer. Similar decisions must be made regarding the selection of different grades of the subsystems and components from available alternatives. The quadratic loss function is very useful for quantifying the impact of these decisions on customers or higher-level systems. The process of balancing the cost is called Tolerance Design.

The result of using parameter design followed by tolerance design is successful products at low cost.

2.2 Quality Measurement

In quality improvement and design optimization the metric plays a crucial role. Unfortunately, a single metric does not serve all stages of product delivery.

It is common to use the fraction of products outside the specified limits as the measure of quality. Though it is a good measure of the loss due to scrap, it miserably fails as a predictor of customer satisfaction. The quality loss function serves that purpose very well.

Let us define the following variables:

m: target value for a critical product characteristic

+/- Delta : allowed deviation from the target

A : loss due to a defective product

Then the quality loss, L, suffered by an average customer due to a product with y as the value of the characteristic is given by the quadratic loss function:

L(y) = k (y − m)^2, where k = A / Delta^2

If the output of the factory has a distribution of the critical characteristic with mean mu and variance sigma^2, then the average quality loss per unit of the product is given by:

Q = k [ (mu − m)^2 + sigma^2 ]
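A minimal sketch of the quadratic loss L(y) = k(y − m)^2 with k = A/Delta^2, and of the average loss per unit Q = k[(mu − m)^2 + sigma^2] (the function and parameter names are mine):

```python
def quality_loss(y, m, delta, a):
    """Quadratic quality loss L(y) = k * (y - m)^2, with k = A / Delta^2."""
    k = a / delta**2
    return k * (y - m) ** 2

def average_quality_loss(mu, sigma, m, delta, a):
    """Average loss per unit: Q = k * ((mu - m)^2 + sigma^2)."""
    k = a / delta**2
    return k * ((mu - m) ** 2 + sigma**2)

# Hypothetical numbers: target m = 10, allowed deviation Delta = 2,
# loss at the tolerance limit A = 50.
# A unit sitting exactly at the limit incurs the full loss A:
print(quality_loss(12, m=10, delta=2, a=50))             # 50.0
# A process centred on target with sigma = 1 loses k * sigma^2 per unit:
print(average_quality_loss(10, 1, m=10, delta=2, a=50))  # 12.5
```

Note how the loss is nonzero even inside the tolerance limits: unlike the fraction-defective metric, the quadratic loss penalizes any deviation from the target, which is why it predicts customer satisfaction better.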

2.3 Signal To Noise (S/N) Ratios

The product/process/system design phase involves deciding the best values/levels for the control factors. The signal to noise (S/N) ratio is an ideal metric for that purpose.

The equation for average quality loss, Q, says that the customer’s average quality loss depends on the deviation of the mean from the target and also on the variance. An important class of design optimization problem requires minimization of the variance while keeping the mean on target.

Between the mean and standard deviation, it is typically easy to adjust the mean on target, but reducing the variance is difficult. Therefore, the designer should minimize the variance first and then adjust the mean on target. Most of the available control factors should be used to reduce variance; only one or two control factors are needed for adjusting the mean on target.

The design optimization problem can be solved in two steps:

1. Maximize the S/N ratio, h, defined as

h = 10 log10 ( mu^2 / sigma^2 )

where mu is the mean and sigma^2 is the variance of the response.

This is the step of variance reduction.

2. Adjust the mean on target using a control factor that has no effect on h. Such a factor is called a scaling factor. This is the step of adjusting the mean on target.

One typically looks for one scaling factor to adjust the mean on target during design and another for adjusting the mean to compensate for process variation during manufacturing.
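The two-step procedure above can be sketched as follows for a nominal-the-best problem. The S/N definition h = 10·log10(mu^2/sigma^2) is the standard nominal-the-best form, and the multiplicative scaling factor is a simplifying assumption of mine; all data are hypothetical:

```python
import math

def sn_ratio(values):
    """Nominal-the-best S/N ratio: h = 10 * log10(mu^2 / sigma^2)."""
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / (n - 1)
    return 10 * math.log10(mu**2 / var)

# Step 1: choose the control-factor setting with the highest S/N ratio.
# Hypothetical responses measured under two candidate settings:
setting_a = [9.8, 10.1, 10.3, 9.9]  # low variance
setting_b = [8.0, 12.0, 11.5, 8.6]  # high variance, same mean
best = max([setting_a, setting_b], key=sn_ratio)

# Step 2: move the mean onto the target with a scaling factor.
# A purely multiplicative factor scales mu and sigma together, so it
# shifts the mean while leaving the S/N ratio unchanged.
target = 12.0
mean_best = sum(best) / len(best)
scale = target / mean_best
adjusted = [v * scale for v in best]
```

The invariance of h under scaling is exactly what makes a scaling factor useful: step 2 cannot undo the variance reduction achieved in step 1.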

2.4 Static Versus Dynamic S/N Ratios

In some engineering problems, the signal factor is absent or it takes a fixed value. These problems are called Static problems and the corresponding S/N ratios are called static S/N ratios. The S/N ratio described in the preceding section is a static S/N ratio.

In other problems, the signal and response must follow a function called the ideal function. In the cooling system example described earlier, the response (room temperature) and signal (set point) must follow a linear relationship. Such problems are called dynamic problems and the corresponding S/N ratios are called dynamic S/N ratios.

The dynamic S/N ratio will be illustrated in a later section using a turbine design example.

Dynamic S/N ratios are very useful for technology development, which is the process of generating flexible solutions that can be used in many products.

3. Steps in Robust Parameter Design

Robust Parameter design has 4 main steps:

1. Problem Formulation:

This step consists of identifying the main function, developing the P-diagram, defining the ideal function and S/N ratio, and planning the experiments. The experiments involve changing the control, noise and signal factors systematically using orthogonal arrays.

2. Data Collection/Simulation:

The experiments may be conducted in hardware or through simulation. It is not necessary to have a full-scale model of the product for the purpose of experimentation. It is sufficient and more desirable to have an essential model of the product that adequately captures the design concept. Thus, the experiments can be done more economically.

3. Factor Effects Analysis:

The effects of the control factors are calculated in this step and the results are analyzed to select optimum setting of the control factors.

4. Prediction/Confirmation:

In order to validate the optimum conditions we predict the performance of the product design under baseline and optimum settings of the control factors. Then we perform confirmation experiments under these conditions and compare the results with the predictions. If the results of confirmation experiments agree with the predictions, then we implement the results. Otherwise, the above steps must be iterated.
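Step 3 (factor effects analysis) can be sketched as averaging the measured S/N ratio over the runs at each level of each factor in the orthogonal array, then picking the better level per factor; the array is the standard L4 and the response numbers are made up for illustration:

```python
# Hypothetical main-effects analysis for three two-level factors
# run through an L4 orthogonal array. 'response' stands in for the
# S/N ratio (dB) measured for each run during data collection.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
response = [30.1, 28.4, 25.9, 27.2]  # made-up S/N ratios

def main_effects(array, y):
    """Average response at each level of each factor."""
    effects = {}
    for col in range(len(array[0])):
        for level in (1, 2):
            runs = [y[i] for i, row in enumerate(array) if row[col] == level]
            effects[(col, level)] = sum(runs) / len(runs)
    return effects

effects = main_effects(L4, response)

# Optimum setting: for each factor, the level with the higher mean S/N.
optimum = [max((1, 2), key=lambda lv: effects[(col, lv)])
           for col in range(3)]
print(optimum)  # [1, 1, 1]
```

The predicted S/N at the optimum (used in the confirmation step) is then the overall mean plus the sum of the selected level effects measured from that mean; if the confirmation run disagrees with this prediction, the steps are iterated as described above.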

See Also: Robust Design Case Studies


Introduction

Color, one of the most powerful aspects of the environment, has been reported to promote human adaptation to the environment and enhance spatial form [1]. Color may change the perceived size and warmth of a room, elicit associations, enhance introversion or extroversion, incite anger or relaxation, and influence physiological responses [2], [3]. Several studies have examined the psychological impact of color; for example, whereas warm colors provide visual activation and stimulation, cool colors communicate subtlety and relaxation.

The visual environment is a vital element influencing hospital staff morale and productivity; studies have even reported that an enhanced visual environment can improve recovery rates by as much as 10%. These improvements have been attributed to particular elements of the visual environment, including the use of appropriate color in interior design. In hospital design, color can have an impact on peoples’ perceptions of and responses to the environment, and can also affect patient recovery rates, improving the quality and overall experience for patients, staff, and visitors. Color is also a powerful tool for coding, navigation, and wayfinding, and can promote a sense of well-being and independence.

There are numerous studies in the field of psychology which demonstrate the relationship between human behavior and color. However, color studies in the environmental design field are almost non-existent. Nourse and Welch [4] tested Wilson’s [5] finding that red is more arousing than green. Jacobs and Hustmyer [6] found red was significantly more arousing than blue or yellow, and green more arousing than blue. Fehrman and Fehrman [7] reported on the effects of blue, red, and yellow on task performance and arousal for the effective color selection of interior spaces. Kwallek, Lewis, and Robbins [8] examined the effects of a red colored office versus a blue one on subject productivity and mood. Kwallek and Lewis [9] investigated the effects of environmental color on gender using a red, white, and green office. The experiment assessed worker performance in proofreading and mood under different colored office environments. Weller and Livingston [10] examined the effect of colored paper on emotional responses obtained from questionnaires. Six different questionnaires were designed and compiled in this order: pink-guilty, pink-not guilty; blue-guilty, blue-not guilty; white-guilty, white-not guilty. Boyatzis and Varghese [11] investigated children’s color and emotion associations. They found that children showed positive emotions to bright colors (pink, red, yellow, green, purple, or blue) and negative emotions to dark colors (brown, black, gray). Hemphill [12] examined adults’ color and emotion associations and compared them with the findings of Boyatzis and Varghese [11].

Scientific research means reviewing a concept through observation in order to test a theory or hypothesis that explains a phenomenon. The presupposed theory and hypotheses are tested by systematic observations to make a general explanation for the natural phenomena. Experiments are useful for exploring phenomena because they involve testing hypotheses and investigating cause-and-effect relationships [13]. Experiments are characterized by the manipulation of independent variables and the identification of possible cause-and-effect relationships between an independent variable and a dependent variable. Types of experiments are categorized by the degree of random assignment of subjects to the various conditions: true experiments, quasi-experiments, and single-subject experiments. True experiments require unbiased random assignment of subjects to treatment groups and the researcher’s ability to manipulate independent variables directly. Rigorous experiments are typically done in a laboratory where it is possible to control variables. The major advantage of these experiments is their ability to establish causal relationships; quasi-experiments do not establish causal relationships to the same degree as true experiments, since experiments in the field routinely encounter uncontrollable factors. The advantage of quasi-experiments is their higher generalizability, owing to their naturalness when compared to the artificiality of true experiments [14].

Based on previous studies, the consistent trend is that blue and green are the most preferred colors. However, the majority of color preference studies failed to control confounding variables such as color attributes [15], [16]. A well-controlled color preference study of psychological patients appears to be non-existent. In order to address the limitations found in previous research, this study is the first to use experimental design in order to provide a foundation for color studies. This research advances the understanding of the value of color in a counseling room by studying psychological patients’ perceptions of color. This knowledge should facilitate improvements in the design of hospitals. The main purposes of this study are: 1) to propose an experiment-based color design research approach, and 2) to develop an optimized solution for the use of color in design.


REVIEW article

  • 1 School of Arts and Humanities, Edith Cowan University, Joondalup, WA, Australia
  • 2 Mary Immaculate College, University of Limerick, Limerick, Ireland

Despite recent close attention to issues related to the reliability of psychological research (e.g., the replication crisis), issues of the validity of this research have not been considered to the same extent. This paper highlights an issue that calls into question the validity of the common research practice of studying samples of individuals, and using sample-based statistics to infer generalizations that are applied not only to the parent population, but to individuals. The lack of ergodicity in human data means that such generalizations are not justified. This problem is illustrated with respect to two common scenarios in psychological research that raise questions for the sorts of theories that are typically proposed to explain human behavior and cognition. The paper presents a method of data analysis that requires closer attention to the range of behaviors exhibited by individuals in our research to determine the pervasiveness of effects observed in sample data. Such an approach to data analysis will produce results that are more in tune with the types of generalizations typical in reports of psychological research than mainstream analysis methods.


Why We Like What We Like

Paul Bloom’s How Pleasure Works: The New Science of Why We Like What We Like provides a wonderful set of arguments for why we love what we love. In my own work I was struck that children seem to have automatic preferences toward social groups that mimic the adult state (in spite of far less experience) and have been working to understand these preferences and their origins. Paul’s book gave me several ideas that I hadn’t considered and I thought his proposals worth sharing more broadly. Enjoy!

I am grateful to APS President Mahzarin Banaji for giving me the opportunity to discuss the science of pleasure.

One of the most exciting ideas in cognitive science is the theory that people have a default assumption that things, people, and events have invisible essences that make them what they are. Experimental psychologists have argued that essentialism underlies our understanding of the physical and social worlds, and developmental and cross-cultural psychologists have proposed that it is instinctive and universal. We are natural-born essentialists.

I propose that this essentialism not only influences our understanding of the world, it also shapes our experience, including our pleasures. What matters most is not the world as it appears to our senses. Rather, the enjoyment we get from something derives from what we think that thing really is. This is true for more intellectual pleasures, such as the appreciation of paintings and stories, but it is true as well for pleasures that seem more animalistic, such as the satisfaction of hunger and lust. For a painting, it matters who the artist was; for a story, it matters whether it is truth or fiction; for a steak, we care about what sort of animal it came from; for sex, we are strongly affected by who we think our sexual partner really is.

What motivates this sort of theory? After all, some questions about pleasure have easy answers, and these have little to do with essentialism. We know why humans get so much joy from eating and drinking. We know why we enjoy eating some things, such as sweet fruit, more than other things, like stones. We know why sex is often fun, and why it can be pleasing to look at a baby’s smiling face and listen to a baby’s laugh. The obvious answers are that animals like us need food and water to survive, need sex to reproduce, and need to attend to our children in order for them to survive. Pleasure is the carrot that drives us toward these reproductively useful activities. As George Romanes observed in 1884, “Pleasure and pain must have been evolved as the subjective accompaniment of processes which are respectively beneficial or injurious to the organism, and so evolved for the purpose or to the end that the organism should seek the one and shun the other.”

We still need to explain how it all worked out so nicely, why it so happens (to mangle the Rolling Stones lyric) that we can’t always get what we want — but we want what we need. This is where Darwin comes in. The theory of natural selection explains, without appeal to an intelligent designer, why our pleasures so nicely incline us toward activities that are beneficial to survival and reproduction — why pleasure is good for the genes.

This is an adaptationist theory of pleasure. It is quite successful for non-human animals. They like what evolutionary biology says that they should like, such as food, water, and sex. To a large extent, this is true of humans as well. But many human pleasures are more mysterious. I begin How Pleasure Works with some examples of this:

Some teenage girls enjoy cutting themselves with razors. Some men pay good money to be spanked by prostitutes. The average American spends over four hours a day watching television. The thought of sex with a virgin is intensely arousing to many men. Abstract art can sell for millions of dollars. Young children enjoy playing with imaginary friends and can be comforted by security blankets. People slow their cars to look at gory accidents and go to movies that make them cry.

Consider also the pleasures of music, sentimental objects (like a child’s security blanket), and religious ritual. Now, one shouldn’t be too quick to abandon adaptationist explanations, and there are some serious proposals about the selective advantages of certain puzzling pleasures: The universal love of stories might have evolved as a form of mental practice to build up vicarious experience with the world, and to safely explore alternative realities. Art and sports might exist as displays of fitness. Animals constantly assess one another as allies and mates; these human activities might be our equivalent of the peacock’s tail, evolved to show off our better selves. Music and dance might have evolved as coordinating mechanisms to enhance social cooperation and good feelings toward one another.

Still, this approach is limited. Many of our special pleasures are useless or maladaptive, both in our current environment and the environment in which our species has evolved. There is no reproductive benefit in enjoying security blankets, paintings by Kandinsky, or sexual masochism.

Many psychologists are wary of adaptationist explanations and would defend the alternative that our uniquely human pleasures are cultural inventions. They don’t doubt that human brains have evolved, but they argue that what humans have come to possess is an enhanced capacity for flexibility we can acquire ideas, practices, and tastes that are arbitrary from a biological perspective.

This plasticity theory has to be right to some extent. Nobody could deny that culture can shape and structure human pleasure; even those pleasures that we share with other animals, such as food and sex, manifest themselves in different ways across societies. Taken to an extreme, then, one might conclude that although natural selection played some limited role in shaping what we like — we have evolved hunger and thirst, a sex drive, curiosity, some social instincts — it had little to do with the specifics. In the words of the critic Louis Menand, “every aspect of life has a biological foundation in exactly the same sense, which is that unless it was biologically possible, it wouldn’t exist. After that, it’s up for grabs.”

I spend much of How Pleasure Works arguing that this is mistaken. Most pleasures have early developmental origins — they are not acquired through immersion into a society. And they are shared by all humans; the variety that one sees can be understood as variation on a universal theme. Painting is a cultural invention, but the love of art is not. Societies have different stories, but all stories share certain themes. Tastes in food and sex differ — but not by all that much. It is true that we can imagine cultures in which pleasure is very different, where people rub food in feces to improve its taste and have no interest in salt or sugar, or where they spend fortunes on forgeries and throw originals into the trash, or spend happy hours listening to static, cringing at the sound of a melody. But this is science fiction, not reality.

I think that humans start off with a fixed list of pleasures and we can’t add to that list. This might sound like an insanely strong claim, given the inventions of chocolate, video games, cocaine, dildos, saunas, crossword puzzles, reality television, novels, and so on. But I would suggest that these are enjoyable because they connect — in a reasonably direct way — to pleasures that humans already possess. Hot fudge sundaes and barbecued ribs are modern inventions, but they appeal to our prior love of sugar and fat. There are novel forms of music created all the time, but a creature that was biologically unprepared for rhythm would never grow to like any of them; they will always be noise.

Some pleasures, then, are neither biological adaptations nor arbitrary cultural inventions. This brings us to a third approach, explored in my book, which is that many of our most interesting pleasures are evolutionary accidents.

The most obvious cases here are those in which something has evolved for function X but later comes to be used for function Y — what Darwin called “preadaptations.” As a simple example, many people enjoy pornography but this isn’t because our porn-loving ancestors had more offspring than the porn-abstainers. Rather, certain images have their effect, at least in part, because they tickle the same part of the mind that responds to actual sex. This arousal is neither an adaptation nor an arbitrary learned response — it’s a byproduct, an accident. I have argued elsewhere that the same holds for the human capacity for word learning. Children are remarkable at learning words, but they do so, not through a capacity specifically evolved for that purpose, but through systems that have evolved for other functions, such as monitoring the intentions of others. Word learning is a lucky accident.

More specifically, many of our pleasures may be accidental byproducts of our essentialism. Different sorts of essentialism have been proposed by psychologists. There is category essentialism, which is the belief that members of a given category share a deep hidden nature. This includes belief in the physical essences of natural things like animals and plants, where the essence is internal to the object, as well as belief in the psychological essences of human-made things such as tools and artwork, where the essence is the object’s history, including the intentions of the person who created it. Then there is individual essentialism, which is the belief that a given individual has an essence that distinguishes it from other members of its category, even from perfect duplicates.

Our essentialist psychology shapes our pleasure. Sometimes the relevant essence is category essence, such as in the domain of sex, where the assumed essences of categories such as male and female turn out to powerfully constrain what people like. Sometimes the relevant essence is individual essence, which helps capture how certain consumer products get their value — such as an original painting by Marc Chagall or John F. Kennedy’s tape measure (which sold for about $50,000). More generally, the proposal is that our likes and dislikes are powerfully influenced by our beliefs about the essences of things.

I hope my book sparks debate over these different theories of why we like what we like. In a recent discussion, Paul Rozin has worried about the narrowness of the modern sciences of the mind and points out that if you look through a psychology textbook you will find little or nothing about sports, art, music, drama, literature, play, and religion. These are wonderful and important domains of human life, and we won’t fully understand any of them until we understand pleasure.


How learned helplessness is acquired

In a review paper summarizing fifty years of research on the topic, two of the leading researchers in the field—Martin Seligman and Steven Maier—who conducted the premier experiments on the topic, describe, as follows, the mechanisms that were originally assumed to lead to learned helplessness, in the context of animal experiments:

  • First, DETECT. Animals DETECT the dimension of controllability and uncontrollability. (This is also referred to sometimes as the dimension of contingency and noncontingency)
  • Second, EXPECT. Animals that DETECT uncontrollability EXPECT shock or other events to once again be uncontrollable in new situations, which undermines attempts to escape in those situations.

Essentially, based on this theory, when individuals realize that they cannot control the situation that they’re in, they later expect to be unable to control similar situations too.

However, the researchers suggest that based on the fifty years of evidence that has been accumulated since the initial studies on the topic, and particularly in light of the neuroscientific evidence on the topic, the original theory got the mechanisms of learned helplessness backward. As the researchers state:

“Passivity in response to shock is not learned. It is the default, unlearned response to prolonged aversive events and it is mediated by the serotonergic activity of the dorsal raphe nucleus, which in turn inhibits escape. This passivity can be overcome by learning control, with the activity of the medial prefrontal cortex, which subserves the detection of control leading to the automatic inhibition of the dorsal raphe nucleus. So animals learn that they can control aversive events, but the passive failure to learn to escape is an unlearned reaction to prolonged aversive stimulation.”

Accordingly, they suggest the following mechanism for the acquisition of learned helplessness:

  • First: PASSIVITY/ANXIETY. “The intense activation of the dorsal raphe nucleus by shock sensitizes these neurons and this sensitization lasts for a few days and results in poor escape (passivity) and heightened anxiety… The detection of uncontrollability is not necessary nor is it sufficient for passivity. This is caused by prolonged exposure to aversive stimulation per se.”
  • Second: DETECT and ACT. “When shock is initially escapable, the presence of control is DETECTed… After detection of control, a separate and distinct population of prelimbic neurons are activated that here we call ACT. These neurons project to the dorsal raphe nucleus and inhibit the 5-HT cells that are activated by aversive stimulation, thus preventing dorsal raphe nucleus activation and thereby preventing sensitization of these cells, eliminating passivity and exaggerated fear. So it is the presence of control, not the absence of control, that is detected…”
  • Third: EXPECT. “After the prelimbic-dorsal raphe nucleus ACT circuit is activated, a set of changes that require several hours occurs in this pathway and involves the formation of new proteins related to plasticity. This is now a circuit that EXPECTS control… However, it should be clearly understood that this EXPECTATION may not be a cognitive process or entity as psychologists tend to view them. It is a circuit that provides an expectational function, in the sense that it changes or biases how organisms respond in the future as a consequence of the events that occur in the present.”

In summary, the researchers state that “as the original theory claimed, organisms are sensitive to the dimension of control, and this dimension is critical. However, the part of the dimension that is detected or expected seems now to be the presence of control, not the absence of control”. Crucially, however, they also state the following:

“At the psychological level, there are several other loose ends. As a general statement, neural processes in the prefrontal cortex become narrowed by stress (Arnsten, 2015). Thus, the fact that in an aversive situation the brain seems to detect control as the active ingredient rather than a lack of control, does not mean that the brain cannot detect lack of control in other types of circumstances, such as uncontrollable food or unsolvable cognitive problems, or even loud noise.

That is, the findings that we have reviewed do not imply that the brain does not have circuitry to detect noncontingency between events that include actions and outcomes. Rather, it may be that this processing can occur, but is not deployed in situations that are highly aversive such as the original helplessness experiments. So it is important to distinguish between what the brain does under a given set of conditions, and what the brain is capable of under different conditions. This possibility is in need of further research.”

The complexity of this phenomenon is supported by other research on the topic, such as research showing that learned helplessness can be acquired vicariously, by viewing someone else’s experiences, even without having had those experiences yourself.

Overall, the mechanisms behind learned helplessness are the subject of much research.

When focusing on learned helplessness as it’s acquired in the context of the initial experiments on the topic, and particularly on situations where animals were exposed to shock that they cannot control, the original theory was that animals who experience uncontrollable situations detect that uncontrollability, and expect it in future situations.

A newer theory, based on the neuroscientific research on the topic, suggests that passivity in response to shock is the default, unlearned behavior, and that animals can learn to overcome it by detecting the presence of control.

However, this does not necessarily explain how learned helplessness is acquired in all cases, as there can be variability in terms of how it’s acquired by different organisms in different situations. For example, a mouse exposed to shock could develop learned helplessness in a different manner than a student developing learned helplessness as a result of negative feedback in school.

Nevertheless, from a practical perspective, when it comes to understanding why people, including yourself, display learned helplessness, the key factor is generally the inability to control the outcomes of situations. Accordingly, individuals who experience situations whose outcomes they cannot control are expected to display more learned helplessness than individuals who experience situations whose outcomes they can control.

Objective vs. subjective helplessness

When considering the concept of learned helplessness, it can help to understand the difference between two types of helplessness:

  • Objective helplessness. A state where someone can do nothing to affect the outcome of a situation.
  • Subjective helplessness. A state of mind in which someone believes that they can do nothing to affect the outcome of a situation.

Studies on learned helplessness are primarily concerned with situations where individuals who experienced objective helplessness end up developing subjective helplessness, which carries over to other situations where they are not objectively helpless.


Results

Manipulation of decision bias affects sensory evidence accumulation

In three EEG recording sessions, human participants (N = 16) viewed a continuous stream of horizontal, vertical and diagonal line textures alternating at a rate of 25 textures/second. The participants’ task was to detect an orientation-defined square presented in the center of the screen and report it via a button press (Figure 2A). Trials consisted of a fixed-order sequence of textures embedded in the continuous stream (total sequence duration 1 s). A square appeared in the fifth texture of a trial in 75% of the presentations (target trials), while in 25% a homogeneous diagonal texture appeared in the fifth position (nontarget trials). Although the onset of a trial within the continuous stream of textures was not explicitly cued, the similar distribution of reaction times in target and nontarget trials suggests that participants used the temporal structure of the task even when no target appeared (Figure 2—figure supplement 1A). Consistent and significant EEG power modulations after trial onset (even for nontarget trials) further confirm that subjects registered trial onsets in the absence of an explicit cue, plausibly using the onset of a fixed-order texture sequence as an implicit cue (Figure 2—figure supplement 1B).

Strategic decision bias shift toward liberal biases evidence accumulation.

(A) Schematic of the visual stimulus and task design. Participants viewed a continuous stream of full-screen diagonally, horizontally and vertically oriented textures at a presentation rate of 40 ms (25 Hz). After random inter-trial intervals, a fixed-order sequence was presented embedded in the stream. The fifth texture in each sequence either consisted of a single diagonal orientation (target absent), or contained an orthogonal orientation-defined square (either 45° or 135° orientation). Participants decided whether they had just seen a target, reporting detected targets by button press. Liberal and conservative conditions were administered in alternating nine-min blocks by penalizing either misses or false alarms, respectively, using aversive tones and monetary deductions. Depicted square and fixation dot sizes are not to scale. (B) Average detection rates (hits and false alarms) during both conditions. Miss rate is equal to 1 – hit rate since both are computed on stimulus present trials, and correct-rejection rate is equal to 1 – false alarm rate since both are computed on stimulus absent trials, together yielding the four SDT stimulus-response categories. (C) SDT parameters for sensitivity and criterion. (D) Schematic and simplified equation of the drift diffusion model accounting for reaction time distributions for actively reported target-present and implicit target-absent decisions. Decision bias in this model can be implemented by either shifting the starting point of the evidence accumulation process (Z), or by adding an evidence-independent constant (‘drift bias’, db) to the drift rate. See text and Figure 1 for details. Notation: dy, change in decision variable y per unit time dt; v·dt, mean drift (multiplied by 1 for signal-plus-noise (target) trials, and by −1 for noise-only (nontarget) trials); db·dt, drift bias; cdW, Gaussian white noise (mean = 0, variance = c²·dt).
(E) Difference in Bayesian Information Criterion (BIC) goodness of fit estimates for the drift bias and the starting point models. A lower delta BIC value indicates a better fit, showing superiority of the drift bias model in accounting for the observed results. (F) Estimated model parameters for drift rate and drift bias in the drift bias model. Error bars, SEM across 16 participants. ***p<0.001; n.s., not significant. Panel D is modified and reproduced with permission from de Gee et al. (2017) (Figure 4A, published under a CC BY 4.0 license).

Figure 2—source data 1

This csv table contains the data for Figure 2 panels B, C, E and F.

In alternating nine-minute blocks of trials, we actively biased participants’ perceptual decisions by instructing them either to report as many targets as possible (‘Detect as many targets as possible!’, liberal condition), or to only report high-certainty targets (‘Press only if you are really certain!’, conservative condition). Participants were free to respond at any time during a block whenever they detected a target. A trial was considered a target-present response when a button press occurred before the fixed-order sequence ended (i.e. within 0.84 s after onset of the fifth texture containing the (non)target, see Figure 2A). We provided auditory feedback and applied monetary penalties following missed targets in the liberal condition and following false alarms in the conservative condition (Figure 2A; see Materials and methods for details). The median number of trials for each SDT category across participants was 1206 hits, 65 false alarms, 186 misses and 355 correct rejections in the liberal condition, and 980 hits, 12 false alarms, 419 misses and 492 correct rejections in the conservative condition.

Participants reliably adopted the intended decision bias shift across the two conditions, as shown by both the hit rate and the false alarm rate going down in tandem as a consequence of a more conservative bias (Figure 2B). The difference between hit rate and false alarm rate was not significantly modulated by the experimental bias manipulations (p=0.81, two-sided permutation test, 10,000 permutations, see Figure 2B). However, target detection performance computed using standard SDT d’ (perceptual sensitivity, reflecting the distance between the noise and signal distributions in Figure 1A) (Green and Swets, 1966) was slightly higher during conservative (liberal: d’=2.0 (s.d. 0.90) versus conservative: d’=2.31 (s.d. 0.82), p=0.0002, see Figure 2C, left bars). We quantified decision bias using the standard SDT criterion measure c, in which positive and negative values reflect conservative and liberal biases, respectively (see the blue and red vertical lines in Figure 1A). This uncovered a strong experimentally induced bias shift from the conservative to the liberal condition (liberal: c = – 0.13 (s.d. 0.4), versus conservative: c = 0.73 (s.d. 0.36), p=0.0001, see Figure 2C), as well as a conservative average bias across the two conditions (c = 0.3 (s.d. 0.31), p=0.0013).
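The SDT measures reported above follow directly from hit and false-alarm rates via the inverse normal CDF. A minimal sketch using Python's standard library; the rates below are illustrative, not the study's data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Compute SDT sensitivity (d') and criterion (c) from hit/false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse cumulative normal (z-transform)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative rates: a liberal observer has high hit AND false-alarm rates,
# a conservative observer sacrifices hits to avoid false alarms.
d_lib, c_lib = sdt_measures(0.90, 0.30)   # c comes out negative (liberal)
d_con, c_con = sdt_measures(0.70, 0.02)   # c comes out positive (conservative)
```

Note how two observers with similar sensitivity can nonetheless differ sharply in criterion, which is exactly the dissociation the bias manipulation exploits.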

Because the SDT framework is static over time, we further investigated how bias affected various components of the dynamic decision process by fitting different variants of the drift diffusion model (DDM) to the behavioral data (Figure 1B,C) (Ratcliff and McKoon, 2008). The DDM postulates that perceptual decisions are reached by accumulating noisy sensory evidence toward one of two decision boundaries representing the choice alternatives. Crossing one of these boundaries can either trigger an explicit behavioral report to indicate the decision (for target-present responses in our experiment), or remain implicit (i.e. without active response, for target-absent decisions in our experiment). The DDM captures the dynamic decision process by estimating parameters reflecting the rate of evidence accumulation (drift rate), the separation between the boundaries, as well as the time needed for stimulus encoding and response execution (non-decision time) (Ratcliff and McKoon, 2008). The DDM is able to estimate these parameters based on the shape of the RT distributions for actively reported (target-present) decisions along with the total number of trials in which no response occurred (i.e. implicit target-absent decisions) (Ratcliff et al., 2018).

We fitted two variants of the DDM to distinguish between two possible mechanisms that can bring about a change in choice bias: one in which the starting point of evidence accumulation moves closer to one of the decision boundaries (‘starting point model’, Figure 1B) (Mulder et al., 2012), and one in which the drift rate itself is biased toward one of the boundaries (de Gee et al., 2017) (‘drift bias model’, see Figure 1C, referred to as drift criterion by Ratcliff and McKoon (2008)). The drift bias parameter is determined by estimating the contribution of an evidence-independent constant added to the drift (Figure 2D). In the two respective models, we freed either the drift bias parameter (db, see Figure 2D) for the two conditions while keeping starting point (z) fixed across conditions (for the drift bias model), or vice versa (for the starting point model). Permitting only one parameter at a time to vary freely between conditions allowed us to directly compare the models without having to penalize either model for the number of free parameters. These alternative models make different predictions about the shape of the RT distributions in combination with the response ratios: a shift in starting point results in more target-present choices particularly for short RTs, whereas the effect of a shift in drift bias grows over time, resulting in more target-present choices also for longer RTs (de Gee et al., 2017; Ratcliff and McKoon, 2008; Urai et al., 2018). The RT distributions above and below the evidence accumulation graphs in Figure 1B and C illustrate these different effects. In both models, all of the non-bias-related parameters (drift rate v, boundary separation a and non-decision time u + w, see Figure 2D) were also allowed to vary by condition.
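The qualitative effect of a drift bias can be illustrated with a toy random-walk simulation. This is only a sketch with made-up parameter values, not the model-fitting procedure used in the study:

```python
import math
import random

def simulate_ddm(n_trials, v, a=1.0, z=0.5, db=0.0, c=1.0, dt=0.001, t_max=2.0):
    """Fraction of upper-boundary ('target present') choices in a toy DDM.

    The decision variable y starts at z*a between boundaries 0 (absent) and
    a (present); each time step adds drift (v + db)*dt plus Gaussian noise
    scaled by c*sqrt(dt). Trials still undecided at t_max are excluded."""
    upper = decided = 0
    for _ in range(n_trials):
        y, t = z * a, 0.0
        while 0.0 < y < a and t < t_max:
            y += (v + db) * dt + c * math.sqrt(dt) * random.gauss(0.0, 1.0)
            t += dt
        if y >= a:
            upper += 1
        if y >= a or y <= 0.0:
            decided += 1
    return upper / max(decided, 1)

random.seed(1)
# Noise-only trials (v = 0) with a neutral starting point: a positive drift
# bias keeps pushing the accumulator toward 'present' throughout the trial,
# so it inflates target-present choices at slow as well as fast RTs.
p_unbiased = simulate_ddm(500, v=0.0, db=0.0)
p_biased = simulate_ddm(500, v=0.0, db=0.8)
```

A starting-point shift could be simulated in the same sketch by raising `z` instead of `db`; its influence fades as accumulated noise swamps the initial offset, which is why the two mechanisms leave different RT signatures.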

We found that the starting point model provided a worse fit to the data than the drift bias model (starting point model: Bayesian Information Criterion (BIC) = 7938; drift bias model: BIC = 7926; Figure 2E, see Materials and methods for details). Specifically, for 15/16 participants the drift bias model provided a better fit than the starting point model, for 12 of whom delta BIC > 6, indicating strong evidence in favor of the drift bias model (Kass and Raftery, 1995). Despite the lower BIC for the drift bias model, however, we note that to the naked eye both models provide similarly reasonable fits to the single-participant RT distributions (Figure 2—figure supplement 2). Finally, we compared these two models to a model in which both drift bias and starting point were fixed across the conditions, while still allowing the non-bias-related parameters to vary per condition. This model provided the lowest goodness of fit (delta BIC > 6 relative to both other models for all participants).
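For reference, the BIC comparison reduces to a one-line formula; the log-likelihoods below are invented purely for illustration:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Two hypothetical models of equal complexity fitted to the same data,
# so delta BIC reduces to the difference in fit quality alone.
bic_a = bic(log_likelihood=-1200.0, n_params=5, n_obs=800)
bic_b = bic(log_likelihood=-1206.0, n_params=5, n_obs=800)
delta_bic = bic_b - bic_a  # > 6 counts as strong evidence (Kass and Raftery, 1995)
```

Because the paper frees only one bias parameter per model, both models have the same `n_params`, which is what makes the raw BIC difference a fair comparison.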

Given the superior performance of the drift bias model (in terms of BIC), we further characterized decision making under the bias manipulation using parameter estimates from this model (see below where we revisit the implausibility of the starting point model when inspecting the lack of pre-stimulus baseline effects in sensory or motor cortex). Drift rate, reflecting the participants’ ability to discriminate targets and nontargets, was somewhat higher in the conservative compared to the liberal condition (liberal: v = 2.39 (s.d. 1.07), versus conservative: v = 3.06 (s.d. 1.16), p=0.0001, permutation test, Figure 2F, left bars). Almost perfect correlations across participants in both conditions between DDM drift rate and SDT d’ provided strong evidence that the drift rate parameter captures perceptual sensitivity (liberal: r = 0.98, p=1e–10; conservative: r = 0.96, p=5e–9; see Figure 2—figure supplement 3A). Regarding the DDM bias parameters, the condition-fixed starting point parameter in the drift bias model was smaller than half the boundary separation (i.e. closer to the target-absent boundary; z = 0.24 (s.d. 0.06), p<0.0001, tested against 0.5), indicating an overall conservative starting point across conditions (Figure 2—figure supplement 3D), in line with the overall positive SDT criterion (see Figure 2C, right panel). Strikingly, however, whereas the drift bias parameter was on average not different from zero in the conservative condition (db = –0.04 (s.d. 1.17), p=0.90), drift bias was strongly positive in the liberal condition (db = 2.08 (s.d. 1.0), p=0.0001; liberal vs conservative: p=0.0005; Figure 2F, right bars). The overall conservative starting point combined with a condition-specific neutral drift bias explained the conservative decision bias (as quantified by SDT criterion) in the conservative condition (Figure 2C).
Likewise, in the liberal condition, the overall conservative starting point combined with a condition-specific positive drift bias (pushing the drift toward the target-present boundary) explained the neutral bias observed with SDT criterion (c around zero for liberal, see Figure 2C).

Convergent with these modeling results, drift bias was strongly anti-correlated across participants with both SDT criterion (r = –0.89 for both conditions, p=4e–6) and average reaction time (liberal: r = –0.57, p=0.02; conservative: r = –0.82, p=1e–4; see Figure 2—figure supplement 3B,C). The strong correlations between drift rate and d’ on the one hand, and drift bias and c on the other, provide converging evidence that the SDT and DDM frameworks capture similar underlying mechanisms, while the DDM additionally captures the dynamic nature of perceptual decision making by linking the decision bias manipulation to the evidence accumulation process itself. As a control, we also correlated starting point with criterion, and found that the correlations were somewhat weaker in both conditions (liberal: r = –0.75; conservative: r = –0.77), suggesting that the drift bias parameter better captured decision bias as instantiated by SDT.

Finally, the bias manipulation also affected two other parameters in the drift bias model that were not directly related to sensory evidence accumulation: boundary separation was slightly but reliably higher during the liberal compared to the conservative condition (p<0.0001), and non-decision time (comprising time needed for sensory encoding and motor response execution) was shorter during liberal (p<0.0001) (Figure 2—figure supplement 3D). In conclusion, the drift bias variant of the drift diffusion model best explained how participants adjusted to the decision bias manipulations. In the next sections, we used spectral analysis of the concurrent EEG recordings to identify a plausible neural mechanism that reflects biased sensory evidence accumulation.

Task-relevant textures induce stimulus-related responses in visual cortex

Sensory evidence accumulation in a visual target detection task presumably relies on stimulus-related signals processed in visual cortex. Such stimulus-related signals are typically reflected in cortical population activity exhibiting a rhythmic temporal structure (Buzsáki and Draguhn, 2004). Specifically, bottom-up processing of visual information has previously been linked to increased high-frequency (>40 Hz, i.e. gamma) electrophysiological activity over visual cortex (Bastos et al., 2015; Michalareas et al., 2016; Popov et al., 2017; van Kerkoerle et al., 2014). Figure 3 shows significant electrode-by-time-by-frequency clusters of stimulus-locked EEG power, normalized with respect to the condition-specific pre-trial baseline period (–0.4 to 0 s). We observed a total of four distinct stimulus-related modulations, which emerged after target onset and waned around the time of response: two in the high-frequency range (>36 Hz, Figure 3A (top) and Figure 3B) and two in the low-frequency range (<36 Hz, Figure 3A (bottom) and Figure 3C). First, we found a spatially focal modulation in a narrow frequency range around 25 Hz reflecting the steady-state visual evoked potential (SSVEP) arising from entrainment by the visual stimulation frequency of our experimental paradigm (Figure 3A, bottom panel), as well as a second modulation from 42 to 58 Hz comprising the SSVEP’s harmonic (Figure 3A, top panel). Both SSVEP frequency modulations have a similar topographic distribution (see left panels of Figure 3A).

EEG spectral power modulations related to stimulus processing and motor response.

Each panel row depicts a three-dimensional (electrodes-by-time-by-frequency) cluster of power modulation, time-locked both to trial onset (left two panels) and button press (right two panels). Power modulation outside of the significant clusters is masked out. Modulation was computed as the percent signal change from the condition-specific pre-stimulus period (–0.4 to 0 s) and averaged across conditions. Topographical scalp maps show the spatial extent of clusters by integrating modulation over time-frequency bins. Time-frequency representations (TFRs) show modulation integrated over electrodes indicated by black circles in the scalp maps. Circle sizes indicate electrode weight in terms of proportion of time-frequency bins contributing to the TFR. P-values above scalp maps indicate multiple comparison-corrected cluster significance using a permutation test across participants (two-sided, N = 14). Solid vertical lines indicate the time of trial onset (left) or button press (right), dotted vertical lines indicate time of (non)target onset. Integr. M., integrated power modulation. SSVEP, steady state visual evoked potential. (A) (Top) 42–58 Hz (SSVEP harmonic) cluster. (A) (Bottom). Posterior 23–27 Hz (SSVEP) cluster. (B) Posterior 59–100 Hz (gamma) cluster. The clusters in A (Top) and B were part of one large cluster (hence the same p-value), and were split based on the sharp modulation increase precisely in the 42–58 Hz range. (C) 12–35 Hz (beta) suppression cluster located more posteriorly aligned to trial onset, and more left-centrally when aligned to button press.

Third, we observed a 59–100 Hz (gamma) power modulation (Figure 3B), after carefully controlling for high-frequency EEG artifacts due to small fixational eye movements (microsaccades) by removing microsaccade-related activity from the data (Hassler et al., 2011; Hipp and Siegel, 2013; Yuval-Greenberg et al., 2008), and by suppressing non-neural EEG activity through scalp current density (SCD) transformation (Melloni et al., 2009; Perrin et al., 1989) (see Materials and methods for details). Importantly, the topography of the observed gamma modulation was confined to posterior electrodes, in line with a role of gamma in bottom-up processing in visual cortex (Ni et al., 2016). Finally, we observed suppression of low-frequency beta (11–22 Hz) activity in posterior cortex, which typically occurs in parallel with enhanced stimulus-induced gamma activity (Donner and Siegel, 2011; Kloosterman et al., 2015a; Meindertsma et al., 2017; Werkle-Bergner et al., 2014) (Figure 3C). Response-locked, this cluster was most pronounced over left motor cortex (electrode C4), plausibly due to the right-hand button press that participants used to indicate target detection (Donner et al., 2009). In the next sections, we characterize these signals separately for the two conditions, investigating stimulus-related signals within a pooling of 11 occipito-parietal electrodes based on the gamma enhancement in Figure 3B (Oz, POz, Pz, PO3, PO4, and P1 to P6), and motor-related signals in left-hemispheric beta (LHB) suppression in electrode C4 (Figure 3C) (O'Connell et al., 2012).

EEG power modulation time courses consistent with the drift bias model

Our behavioral results suggest that participants biased sensory evidence accumulation in the liberal condition, rather than changing their starting point. We next sought to provide converging evidence for this conclusion by examining pre-stimulus activity, post-stimulus activity, and motor-related EEG activity. Following previous studies, we hypothesized that a starting point bias would be reflected in a difference in pre-motor baseline activity between conditions before onset of the decision process (Afacan-Seref et al., 2018; de Lange et al., 2013), and/or in a difference in pre-stimulus activity such as seen in bottom-up stimulus-related SSVEP and gamma power signals (Figure 4A shows the relevant clusters as derived from Figure 3). Thus, we first investigated the timeline of raw power in the SSVEP, gamma and LHB range between conditions (see Figure 4B). None of these markers showed a meaningful difference in pre-stimulus baseline activity. Statistically comparing the raw pre-stimulus activity between liberal and conservative in a baseline interval between –0.4 and 0 s prior to trial onset yielded p=0.52, p=0.51 and p=0.91 (permutation tests) for the respective signals. This confirms a highly similar starting point of evidence accumulation in all these signals. Next, we predicted that a shift in drift bias would be reflected in a steeper slope of post-stimulus ramping activity (leading up to the decision). We reasoned that the best way of ascertaining such an effect would be to baseline the activity to the interval prior to stimulus onset (using the interval between –0.4 and 0 s), such that any post-stimulus effect we find cannot be explained by pre-stimulus differences (if any). The time course of post-stimulus and response-locked activity after baselining can be found in Figure 4C. All three signals diverged between the liberal and conservative condition after trial onset, consistent with adjustments in the process of evidence accumulation.
Specifically, we observed higher peak modulation levels for the liberal condition in all three stimulus-locked signals (p=0.08, p=0.002 and p=0.023, permutation tests for SSVEP, gamma and LHB, respectively), and found a steeper slope toward the button press for LHB (p=0.04). Finally, the event related potential in motor cortex also showed a steeper slope toward report for liberal (p=0.07, Figure 4, bottom row, baseline plot is not meaningful for time-domain signals due to mean removal during preprocessing). Taken together, these findings provide converging evidence that participants implemented a liberal decision bias by adjusting the rate of evidence accumulation toward the target-present choice boundary, but not its starting point. In the next sections, we sought to identify a neural mechanism that could underlie these biases in the rate of evidence accumulation.
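The baselining step described above, expressing each power time course as percent signal change from its own pre-stimulus window, can be sketched in a few lines; the toy time course and sample times here are hypothetical:

```python
def percent_signal_change(power, baseline_window, times):
    """Express a power time course as % change from its pre-stimulus baseline.

    The baseline is the mean power over samples whose time falls in the
    half-open interval [baseline_window[0], baseline_window[1])."""
    base = [p for p, t in zip(power, times)
            if baseline_window[0] <= t < baseline_window[1]]
    b = sum(base) / len(base)
    return [100.0 * (p - b) / b for p in power]

# Hypothetical samples: flat pre-stimulus power, ramping post-stimulus power.
times = [-0.4, -0.2, 0.0, 0.2, 0.4]
power = [10.0, 10.0, 10.0, 12.0, 15.0]
mod = percent_signal_change(power, (-0.4, 0.0), times)
# pre-stimulus samples map to ~0% by construction, so any post-stimulus
# divergence between conditions cannot reflect baseline differences
```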

Experimental task manipulations affect the time course of stimulus- and motor-related EEG signals, but not their starting point.

Raw power throughout the baseline period and time courses of power modulation time-locked to trial start and button press. (A) Relevant electrode clusters and frequency ranges (from Figure 3): posterior SSVEP, posterior gamma and left-hemispheric beta (LHB). (B) The time course of raw power in a wide interval around the stimulus (–0.8 to 0.8 s) for these clusters. (C) Stimulus-locked and response-locked percent signal change from baseline (baseline period: –0.4 to 0 s). Error bars, SEM. Black horizontal bar indicates significant difference between conditions, cluster-corrected for multiple comparisons (p<0.05, two-sided). SSVEP, steady-state visual evoked potential; LHB, left-hemispheric beta; ERP, event-related potential; SCD, scalp current density.

Liberal bias is reflected in pre-stimulus midfrontal theta enhancement and posterior alpha suppression

Given the lack of pre-stimulus (starting-point) differences in the specific frequency ranges involved in stimulus processing and motor responses (Figure 4B), we next focused on other pre-stimulus differences that might be the root cause of the post-stimulus differences we observed in Figure 4C. To identify such signals at high frequency resolution, we computed spectral power in a wide time window from –1 s until trial start. We then ran a cluster-based permutation test across all electrodes and frequencies in the low-frequency domain (1–35 Hz), looking for power modulations due to our experimental manipulations. Pre-stimulus spectral power indeed uncovered two distinct modulations in the liberal compared to the conservative condition: (1) theta modulation in midfrontal electrodes and (2) alpha modulation in posterior electrodes. Figure 5A depicts the difference between the liberal and conservative condition, confirming significant clusters (p<0.05, cluster-corrected for multiple comparisons) of enhanced theta (2–6 Hz) in frontal electrodes (Fz, Cz, FC1, and FC2), as well as suppressed alpha (8–12 Hz) in a group of posterior electrodes, including all 11 electrodes selected previously based on post-stimulus gamma modulation (Figure 3). The two modulations were uncorrelated across participants (r = 0.06, p=0.82), suggesting they reflect different neural processes related to our experimental task manipulations. These findings are consistent with literature pointing to a role of midfrontal theta as a source of cognitive control signals originating from prefrontal cortex (Cohen and Frank, 2009; van Driel et al., 2012) and of posterior alpha in reflecting spontaneous trial-to-trial fluctuations in decision bias (Iemi et al., 2017). The fact that these pre-stimulus effects occur as a function of our experimental manipulation suggests that they are a hallmark of strategic bias adjustment, rather than a mere correlate of spontaneous shifts in decision bias.
Importantly, this finding implies that humans are able to actively control pre-stimulus alpha power in visual cortex (possibly through top-down signals from frontal cortex), plausibly acting to bias sensory evidence accumulation toward the response alternative that maximizes reward.

Adopting a liberal decision bias is reflected in increased midfrontal theta and suppressed pre-stimulus alpha power.

(A) Significant clusters of power modulation between liberal and conservative in a pre-stimulus window between −1 and 0 s before trial onset. When performing a cluster-based permutation test over all frequencies (1–35 Hz) and electrodes, two significant clusters emerged: theta (2–6 Hz, top), and alpha (8–12 Hz, bottom). Left panels: raw power spectra of pre-stimulus neural activity for conservative and liberal separately in the significant clusters (for illustration purposes), Middle panels: Liberal – conservative raw power spectrum. Black horizontal bar indicates statistically significant frequency range (p<0.05, cluster-corrected for multiple comparisons, two-sided). Right panels: Corresponding liberal – conservative scalp topographic maps of the pre-stimulus raw power difference between conditions for EEG theta power (2–6 Hz) and alpha power (8–12 Hz). Plotting conventions as in Figure 3. Error bars, SEM across participants (N = 15). (B) Probability density distributions of single trial alpha power values for both conditions, averaged across participants.

Pre-stimulus alpha power is linked to cortical gamma responses

Next, we asked how suppression of pre-stimulus alpha activity might bias the process of sensory evidence accumulation. One possibility is that alpha suppression influences evidence accumulation by modulating the susceptibility of visual cortex to sensory stimulation, a phenomenon termed ‘neural excitability’ (Iemi et al., 2017; Jensen and Mazaheri, 2010). We explored this possibility using a theoretical response gain model formulated by Rajagovindan and Ding (2011). This model postulates that the relationship between the total synaptic input that a neuronal ensemble receives and the total output activity it produces is characterized by a sigmoidal function (Figure 6A) – a notion that is biologically plausible (Destexhe et al., 2001; Freeman, 1979). In this model, the total synaptic input into visual cortex consists of two components: (1) sensory input (i.e. due to sensory stimulation) and (2) ongoing fluctuations in endogenously generated (i.e. not sensory-related) neural activity. In our experiment, the sensory input into visual cortex can be assumed to be identical across trials, because the same sensory stimulus was presented in each trial (see Figure 2A). The endogenous input, in contrast, is thought to vary from trial to trial, reflecting fluctuations in top-down cognitive processes such as attention. These fluctuations are assumed to be reflected in the strength of alpha power suppression, such that weaker alpha is associated with increased attention (Figure 6B). Given the combined constant sensory and variable endogenous input in each trial (see horizontal axis in Figure 6A), the strength of the output responses of visual cortex is largely determined by the trial-to-trial variations in alpha power (see vertical axis in Figure 6A).
Furthermore, the sigmoidal shape of the input-output function results in an effective range in the center of the function’s input side, which yields the strongest stimulus-induced output responses because the sigmoid curve is steepest there. Mathematically, the effect of endogenous input on stimulus-induced output responses (see marked interval in Figure 6A) can be expressed as the first-order derivative or slope of the sigmoid in Figure 6A, which Rajagovindan and Ding (2011) refer to as the response gain. This derivative is plotted in Figure 6B (blue and red solid lines) across levels of pre-stimulus alpha power, predicting an inverted-U shaped relationship between alpha and response gain in visual cortex.
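The model's sigmoid and its derivative can be written down directly; the parameter values below are arbitrary illustrations, not estimates from the data:

```python
import numpy as np

def output(s, o_max, k, s0):
    """Sigmoidal input-output function: total output O(S) of visual
    cortex for total synaptic input S (cf. Figure 6A). o_max is the
    upper asymptote, k the steepness, s0 the inflection point."""
    return o_max / (1.0 + np.exp(-k * (s - s0)))

def gain(s, o_max, k, s0):
    """Response gain: first-order derivative of the sigmoid, an
    inverted-U over input (and hence over pre-stimulus alpha)."""
    o = output(s, o_max, k, s0)
    return k * o * (1.0 - o / o_max)

# The gain peaks at s0 with height k * o_max / 4, so raising o_max
# (as proposed for the liberal condition) both steepens the sigmoid's
# effective range and raises the peak of the inverted-U gain curve.
```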

Pre-stimulus alpha power is linked to cortical gamma responses.

(A) Theoretical response gain model describing the transformation of stimulus-induced and endogenous input activity (denoted by Sx and SN, respectively) to the total output activity (denoted by O(Sx + SN)) in visual cortex by a sigmoidal function. Different operational alpha ranges are associated with input-output functions with different slopes due to corresponding changes in the total output. (B) Alpha-linked output responses (solid lines) are formalized as the first derivative (slope) of the sigmoidal functions (dotted lines), resulting in inverted-U (Gaussian) shaped relationships between alpha and gamma, involving stronger response gain in the liberal than in the conservative condition. (C) Corresponding empirical data showing gamma modulation (same percent signal change units as in Figure 3) as a function of alpha bin. The location on the x-axis of each alpha bin was taken as the median alpha of the trials assigned to each bin and averaged across subjects. (D–F) Model prediction tests. (D) Raw pre-stimulus alpha power for both conditions, averaged across subjects. (E) Post-stimulus gamma power modulation for both conditions averaged across the two middle alpha bins (5 and 6) in panel C. (F) Liberal – conservative difference between the response gain curves shown in panel C, centered on alpha bin. Error bars, within-subject SEM across participants (N = 14).

Figure 6—source data 1

SPSS .sav file containing the data used in panels C, E, and F.

Regarding our experimental conditions, the model not only predicts that the suppression of pre-stimulus alpha observed in the liberal condition reflects a shift in the operational range of alpha (see Figure 5B), but also that it increases the maximum output of visual cortex (a shift from the red to the blue line in Figure 6A). Therefore, the difference between conditions is not modeled using a single input-output function, but necessitates an additional mechanism that changes the input-output relationship itself. The exact nature of this mechanism is not known (also see Discussion). Rajagovindan and Ding (2011) suggest that top-down mechanisms modulate ongoing pre-stimulus neural activity to increase the slope of the sigmoidal function, but despite the midfrontal theta activity we observed, evidence for this hypothesis is somewhat elusive. We have no means to establish this relationship directly, and can merely note that this change in the input-output function is necessary to capture condition-specific effects on the input-output relationship, both in the data of Rajagovindan and Ding (2011) and in our own data. Thus, as the operational range of alpha shifts leftwards from conservative to liberal, the upper asymptote in Figure 6A moves upwards such that the total maximum output activity increases. This in turn affects the inverted-U-shaped relationship between alpha and gain in visual cortex (blue line in Figure 6B), leading to a steeper, Gaussian (bell-shaped) response curve in the liberal condition.

To investigate sensory response gain across different alpha levels in our data, we used the post-stimulus gamma activity (see Figure 3B) as a proxy for alpha-linked output gain in visual cortex (Bastos et al., 2015; Michalareas et al., 2016; Ni et al., 2016; Popov et al., 2017; van Kerkoerle et al., 2014). We exploited the large number of trials per participant per condition (range 543–1391 trials) by sorting each participant’s trials into ten equal-sized bins ranging from weak to strong alpha, separately for the two conditions. We then calculated the average gamma power modulation within each alpha bin and finally plotted the participant-averaged gamma across alpha bins for each condition in Figure 6C (see Materials and methods for details). This indeed revealed an inverted-U shaped relationship between alpha and gamma in both conditions, with a steeper curve for the liberal condition.
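The binning procedure can be sketched like this, with simulated single-trial values standing in for the real alpha and gamma estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial values for one participant and condition:
# pre-stimulus alpha power and post-stimulus gamma modulation with a
# built-in inverted-U relationship plus noise (illustrative only).
n_trials = 1000
alpha = rng.lognormal(0.0, 0.5, n_trials)
gamma = np.exp(-np.log(alpha) ** 2) + rng.normal(0.0, 0.1, n_trials)

# Sort trials from weak to strong alpha and split into ten equal-sized bins.
order = np.argsort(alpha)
bins = np.array_split(order, 10)

# x-position of each bin: median alpha of its trials; y-value: mean gamma.
bin_alpha = np.array([np.median(alpha[idx]) for idx in bins])
bin_gamma = np.array([gamma[idx].mean() for idx in bins])
```

Binning by within-participant alpha rank, rather than by absolute power, keeps the analysis robust to between-participant differences in overall alpha level.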

To assess the model’s ability to explain the data, we statistically tested three predictions derived from the model. First, the model predicts overall lower average pre-stimulus alpha power for liberal than for conservative due to the shift in the operational range of alpha. This was confirmed in Figure 6D (p=0.01, permutation test; see also Figure 5). Second, the model predicts a stronger gamma response for liberal than for conservative around the peak of the gain curve (the center of the effective alpha range, see Figure 6B), which we indeed observed (p=0.024, permutation test on the average of the middle two alpha bins) (Figure 6E). Finally, the model predicts that the difference between the gain curves (when they are aligned over their effective ranges on the x-axis using alpha bin number, as shown in Figure 6—figure supplement 1A) also resembles a Gaussian curve (Figure 6—figure supplement 1B). Consistent with this prediction, we observed an interaction effect between condition (liberal, conservative) and bin number (1–10) using a standard Gaussian contrast in a two-way repeated measures ANOVA (F(1,13) = 4.6, p=0.051, partial η2 = 0.26). Figure 6F illustrates this finding by showing the difference between the two curves in Figure 6C as a function of alpha bin number (see Figure 6—figure supplement 1C for the curves of both conditions as a function of alpha bin number). Subsequent separate tests for each condition confirmed a significant inverted-U shaped relationship between alpha and gamma in the liberal condition with a large effect size (F(1,13) = 7.7, p=0.016, partial η2 = 0.37), but no significant effect in the conservative condition with only a small effect size (F(1,13) = 1.7, p=0.22, partial η2 = 0.12), using one-way repeated measures ANOVAs with alpha bin (Gaussian contrast) as the factor of interest.
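The Gaussian-contrast logic behind the third prediction can be sketched as follows; the contrast width and the paired t-test standing in for the repeated measures ANOVA contrast are simplifying assumptions:

```python
import numpy as np
from scipy import stats

def gaussian_contrast_scores(data, n_bins=10):
    """Per-subject Gaussian contrast over alpha bins: weight each bin's
    value by a mean-centered Gaussian over bin number, so a positive
    score indicates an inverted-U shape (peak in the middle bins).
    `data` is a subjects x bins array."""
    x = np.arange(1, n_bins + 1)
    w = np.exp(-0.5 * ((x - x.mean()) / 2.0) ** 2)  # bell over bins (width is an assumption)
    w -= w.mean()                                    # contrast weights sum to zero
    return data @ w

def contrast_interaction_test(liberal, conservative):
    """Does the liberal condition show a stronger inverted-U than the
    conservative one? Paired test on the difference of contrast scores."""
    d = gaussian_contrast_scores(liberal) - gaussian_contrast_scores(conservative)
    return stats.ttest_1samp(d, 0.0)
```

Because the weights sum to zero, overall power differences between conditions cancel out and only the shape over bins contributes to the score.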

Taken together, these findings suggest that the alpha suppression observed in the liberal compared to the conservative condition boosted stimulus-induced activity, which in turn might have indiscriminately biased sensory evidence accumulation toward the target-present decision boundary. In the next section, we investigate a direct link between drift bias and stimulus-induced activity as measured through gamma.

Visual cortical gamma activity predicts strength of evidence accumulation bias

The findings presented so far suggest that behaviorally, a liberal decision bias shifts evidence accumulation toward target-present responses (drift bias in the DDM), while neurally it suppresses pre-stimulus alpha and enhances post-stimulus gamma responses. In a final analysis, we asked whether alpha-binned gamma modulation is directly related to a stronger drift bias. To this end, we again applied the drift bias DDM to the behavioral data of each participant, while freeing the drift bias parameter not only for the two conditions, but also for the 10 alpha bins for which we calculated gamma modulation (see Figure 6C). We directly tested the correspondence between DDM drift bias and gamma modulation using repeated measures correlation (Bakdash and Marusich, 2017), which takes all repeated observations across participants into account while controlling for non-independence of observations collected within each participant (see Materials and methods for details). Gamma modulation was indeed correlated with drift bias in both conditions (liberal: r(125) = 0.49, p=5e-09; conservative: r(125) = 0.38, p=9e-06) (Figure 7). We tested the robustness of these correlations by excluding the data points that contributed most to the correlations (as determined with Cook’s distance) and obtained qualitatively similar results, indicating these correlations were not driven by outliers (Figure 7, see Materials and methods for details). To rule out that starting point could explain this correlation, we repeated this analysis while controlling for the starting point of evidence accumulation estimated per alpha bin within the starting point model. To this end, we regressed both bias parameters on gamma.
Crucially, we found that in both conditions starting point bias did not uniquely predict gamma when controlling for drift bias (liberal: F(1,124) = 5.8, p=0.017 for drift bias, F(1,124) = 0.3, p=0.61 for starting point; conservative: F(1,124) = 8.7, p=0.004 for drift bias, F(1,124) = 0.4, p=0.53 for starting point). This finding suggests that the drift bias model outperforms the starting point model in accounting for gamma power. As a final control, we also performed this analysis for the SSVEP (23–27 Hz) power modulation (see Figure 3B, bottom) and found a similar inverted-U shaped relationship between alpha and the SSVEP for both conditions (Figure 7—figure supplement 1A), but no correlation with drift bias (liberal: r(125) = 0.11, p=0.72; conservative: r(125) = 0.22, p=0.47) (Figure 7—figure supplement 1B) or with starting point (liberal: r(125) = 0.08, p=0.02; conservative: r(125) = 0.22, p=0.95). This suggests that the SSVEP is coupled to alpha in a similar way as the stimulus-induced gamma, but is less affected by the experimental conditions and not predictive of decision bias shifts. Taken together, these results suggest that alpha-binned gamma modulation underlies biased sensory evidence accumulation.
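The core idea of repeated measures correlation, correlating within-participant variation while discarding between-participant differences, can be sketched as follows. The full rmcorr method of Bakdash and Marusich (2017) uses an ANCOVA with adjusted degrees of freedom; this centering version only illustrates the principle:

```python
import numpy as np
from scipy import stats

def within_subject_corr(subject, x, y):
    """Simplified repeated-measures correlation: center x and y within
    each subject, then correlate the pooled residuals. Removing each
    subject's mean discards between-subject range differences, as in
    the residualization described for Figure 7."""
    subject = np.asarray(subject)
    xr = np.asarray(x, dtype=float).copy()
    yr = np.asarray(y, dtype=float).copy()
    for s in np.unique(subject):
        m = subject == s
        xr[m] -= xr[m].mean()
        yr[m] -= yr[m].mean()
    return stats.pearsonr(xr, yr)
```

A toy case shows why this matters: when subjects' mean levels are anti-correlated but their within-subject variation is positively coupled, a naive pooled correlation can even flip sign, while the within-subject correlation recovers the true coupling.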

Alpha-binned gamma modulation correlates with evidence accumulation bias.

Repeated measures correlation between gamma modulation and drift bias for the two conditions. Each circle represents a participant’s gamma modulation within one alpha bin. Drift bias and gamma modulation scalars were residualized by removing the average within each participant and condition, thereby removing the specific range in which each participant’s values operated. Crosses indicate data points that were most influential for the correlation, identified using Cook’s distance. Correlations remained qualitatively unchanged when these data points were excluded (liberal: r(120) = 0.46, p=8e-07; conservative: r(121) = 0.27, p=0.0009). Error bars, 95% confidence intervals after averaging across participants.

Figure 7—source data 1

MATLAB .mat file containing the data used.

Finally, we asked to what extent the enhanced tonic midfrontal theta may have mediated the relationship between alpha-binned gamma and drift bias. To answer this question, we entered drift bias in a two-way repeated measures ANOVA with factors theta and gamma power (all variables alpha-binned), but found no evidence for mediation of the gamma-drift bias relationship by midfrontal theta (liberal: F(1,13) = 1.3, p=0.25; conservative: F(1,13) = 0.003, p=0.95). At the same time, the gamma-drift bias relationship was qualitatively unchanged when controlling for theta (liberal: F(1,13) = 48.4, p<0.001; conservative: F(1,13) = 19.3, p<0.001). Thus, the enhanced midfrontal theta in the liberal condition plausibly reflects a top-down, attention-related signal indicating the need for cognitive control to avoid missing targets, but its amplitude did not seem directly linked to enhanced sensory evidence accumulation, as found for gamma. This latter finding suggests that the enhanced theta in the liberal condition served as an alarm signal indicating the need for a shift in response strategy, without specifying exactly how this shift was to be implemented (Cavanagh and Frank, 2014).


Motor Control

2.1 Movement Units and Their Limits

One of the basic challenges in the study of motor control is the dissection of fundamental movement units. In this terminology a unit is a relatively invariant pattern of muscular contractions that are typically elicited together. Reference to one of the most complex forms of human movement control, speech, is illustrative (Neville 1995). When we speak we articulate a broadly invariant set of phonemes (basic sound patterns) in sequence. This allows the articulation of speech to be relatively ‘thought free’ in normal discourse, and allows the listener to decode the message through a complex array of perceptual and higher-order cognitive processes (see Syntactic Aspects of Language, Neural Basis of; Motor Control; Sign Language: Psychological and Neural Aspects; Lexical Processes (Word Knowledge): Psychological and Neural Aspects; Speech Production, Neural Basis of). The example is useful in that in the smooth flow of speech individual phonemes are often ‘co-articulated’, with the fine details of one sound unit reflecting the production of preceding and subsequent sound patterns. So even here the ‘intrinsic’ properties of a speech unit can be sensitive to extrinsic influences as defined by other speech units. This is what gives speech its smooth and flowing nature (as opposed, for example, to most computer-generated ‘speech’).

Thus, in some sense movements and their motor control ‘units’ are abstractions that frequently blur at the edges, and which can be defined from a number of complementary perspectives (Golani 1992). In complex forms of movement, such as playing a sport or musical instrument, or dancing with a moving partner, many neural pathways are orchestrated together in dazzling patterns that are widely distributed, serially ordered, and parallel in their operations. It is for this reason that the very distinction between motor control systems and other properties of the nervous system is often difficult to untangle (see Vision for Action: Neural Mechanisms; Cognitive Control (Executive Functions): Role of Prefrontal Cortex).


Intelligible speech despite noisy surroundings

Prof Dr Dorothea Kolossa and Mahdie Karbasi from the research group Cognitive Signal Processing at Ruhr-Universität Bochum (RUB) have developed a method for predicting speech intelligibility in noisy surroundings. The results of their experiments are more precise than those obtained with the standard methods used to date. They might thus facilitate the development process of hearing aids. The research was carried out in the course of the EU-funded project "Improved Communication through Applied Hearing Research," or "I can hear" for short.

Specific algorithms in hearing aids filter out background noises to ensure that wearers are able to understand speech in every situation -- regardless of whether they are in a packed restaurant or near a busy road. The challenge for the researchers is to maintain high speech transmission quality while filtering out background noises. Before an optimised hearing aid model is released to the market, new algorithms are subject to time-consuming tests.

Researchers and industrial developers run hearing tests with human participants to analyse to what extent the respective new algorithms will ensure speech intelligibility. If they were able to assess speech intelligibility reliably in an automated process, they could cut down on time-consuming test practices.

New algorithm developed

To date, the standard approaches for predicting speech intelligibility have included the so-called STOI method (short-time objective intelligibility measure) and other reference-based methods. These methods require a clean original signal, i.e. an audio track that's been recorded without any background noises. Based on the differences between the original and the filtered sound, the value of speech intelligibility is estimated. Kolossa and Karbasi have found a way to predict intelligibility without needing a clean reference signal that is nevertheless more precise than the STOI method. Consequently, Kolossa and Karbasi's findings might help reduce test processes in the product development phase of hearing aids.

The RUB researchers have tested their method with 849 individuals with normal hearing. To this end, the participants were asked to assess audio files via an online platform. With the aid of their algorithm, Kolossa and Karbasi estimated which percentage of a sentence from the respective file would be understood by the participants. Subsequently, they compared their predicted value with the test results.

Research outlook

In the next step, Kolossa and Karbasi intend to run the same tests with hearing-impaired participants. They are working on algorithms that can assess and optimise speech intelligibility in accordance with the individual perception threshold or type of hearing impairment. In the best case scenario, the study will thus provide methods for engineering an intelligent hearing aid. Such hearing aids could automatically recognise the wearer's current surroundings and situation. If he or she steps from a quiet street into a restaurant, the hearing aid would register an increase in background noises. Accordingly, it would filter out the ambient noises -- if possible without impairing the quality of the speech signal.

About the project

The main objective of the project "Improved Communication through Applied Hearing Research" was to optimise hearing aids and cochlear implants to ensure that they fulfil their function for their wearer even in very noisy surroundings. RUB researchers worked in an international team together with researchers from the UK, Switzerland, Denmark, and Belgium. Prof Dr Rainer Martin from the RUB Faculty of Electrical Engineering and Information Technology headed the EU-funded project. Industrial partners were hearing aid manufacturer Sivantos and cochlear implant company Cochlear. "I can hear" ended in December 2016.


Molly Crockett: "The Neuroscience of Moral Decision Making"

Imagine we could develop a precise drug that amplifies people's aversion to harming others: on this drug you won't hurt a fly; everyone taking it becomes like a Buddhist monk. Who should take this drug? Only convicted criminals—people who have committed violent crimes? Should we put it in the water supply? These are normative questions. These are questions about what should be done. I feel grossly unprepared to answer these questions with the training that I have, but these are important conversations to have between disciplines. Psychologists and neuroscientists need to be talking to philosophers about this. These are conversations that we need to have because we don't want to get to the point where we have the technology but haven't had this conversation, because then terrible things could happen.

MOLLY CROCKETT is an associate professor in the Department of Experimental Psychology, University of Oxford, and a Wellcome Trust Postdoctoral Fellow at the Wellcome Trust Centre for Neuroimaging. Molly Crockett's Edge Bio Page

THE NEUROSCIENCE OF MORAL DECISION MAKING

I'm a neuroscientist at the University of Oxford in the UK. I'm interested in decision making, specifically decisions that involve tradeoffs: for example, tradeoffs between my own self-interest and the interests of other people, or between my present desires and my future goals.

One case study for this is moral decision making. When we can see that there's a selfish option and we can see that there's an altruistic or a cooperative option, we can reason our way through the decision, but there are also gut feelings about what's right and what's wrong. I've studied the neurobiology of moral decision making, specifically how different chemicals in our brains—neuromodulators—can shape the process of making moral decisions and push us one way or another when we're reasoning and deciding.

Neuromodulators are chemicals in the brain. There are a bunch of different neuromodulator systems that serve different functions. Events out in the world activate these systems and then they perfuse into different regions of the brain and influence the way that information is processed in those regions. All of you have experience with neuromodulators. Some of you are drinking cups of coffee right now. Many of you probably had wine with dinner last night. Maybe some of you have other experiences that are a little more interesting.

But you don't need to take drugs or alcohol to influence your neurochemistry. You can also influence your neurochemistry through natural events: Stress influences your neurochemistry, sex, exercise, changing your diet. There are all these things out in the world that feed into our brains through these chemical systems. I've become interested in studying if we change these chemicals in the lab, can we cause changes in people's behavior and their decision making?

One thing to keep in mind about the effects of these different chemicals on our behavior is that the effects here are subtle. The effect sizes are really small. This has two consequences for doing research in this area. The first is that because the effect sizes are so small, the published literature on this is likely to be underpowered. There are probably a lot of false positives out there. We heard earlier that there is a lot of thought about this, not just in psychology but in all of science, about how we can run better-powered experiments and how we can create a set of data that will tell us what's going on.

The other thing—and this is what I've been interested in—is because the effects of neuromodulators are so subtle, we need precise measures in the lab of the behaviors and decision processes that we're interested in. It's only with precise measures that we're going to be able to pick up these subtle effects of brain chemistry, which maybe at the individual level aren't going to make a dramatic difference in someone's personality, but at the aggregate level, in collective behaviors like cooperation and public goods problems, these might become important on a global scale.

How can we measure moral decision making in the lab in a precise way, and also in a way that we can agree is actually moral? This is an important point. One big challenge in this area is there's a lot of disagreement about what constitutes a moral behavior. What is moral? We heard earlier about cooperation—maybe some people think that's a moral decision but maybe other people don't. That's a real issue for getting people to cooperate.

First we have to pick a behavior that we can all agree is moral, and secondly we need to measure it in a way that tells us something about the mechanism. We want to have these rich sets of data that tell us about these different moving parts—these different pieces of the puzzle—and then we can see how they map onto different parts of the brain and different chemical systems.

What I'm going to do over the next 20 minutes is take you through my thought process over the past several years. I tried a bunch of different ways of measuring the effects of neurochemistry on what I at one point thought was moral decision making, but which turned out maybe not to be the best way to measure morality. And I'll show you how I tried to zoom in on more advanced and sophisticated ways of measuring the cognitions and emotions that we care about in this context.

When I started this work several years ago, I was interested in punishment and economic games that you can use to measure punishment—if someone treats you unfairly then you can spend a bit of money to take money away from them. I was interested specifically in the effects of a brain chemical called serotonin on punishment. The issues that I'll talk about here aren't specific to serotonin but apply to this bigger question of how can we change moral decision making.

When I started this work the prevailing view about punishment was that punishment was a moral behavior—a moralistic or altruistic punishment where you're suffering a cost to enforce a social norm for the greater good. It turned out that serotonin was an interesting chemical to be studying in this context because serotonin has this long tradition of being associated with prosocial behavior. If you boost serotonin function, this makes people more prosocial. If you deplete or impair serotonin function, this makes people antisocial. If you go by the logic that punishment is a moral thing to do, then if you enhance serotonin, that should increase punishment. What we actually see in the lab is the opposite effect. If you increase serotonin people punish less, and if you decrease serotonin people punish more.

That throws a bit of a spanner in the works of the idea that punishment is this exclusively prosocially minded act. And this makes sense if you just introspect on the kinds of motivations you go through when someone treats you unfairly and you punish them. I don't know about you, but when that happens to me I'm not thinking about enforcing a social norm or the greater good; I just want that guy to suffer, I just want him to feel bad because he made me feel bad.

The neurochemistry adds an interesting layer to this bigger question of whether punishment is prosocially motivated, because in some ways it's a more objective way to look at it. Serotonin doesn't have a research agenda; it's just a chemical. We had all this data and we started thinking differently about the motivations of so-called altruistic punishment. That inspired a purely behavioral study where we give people the opportunity to punish those who behave unfairly towards them, but we do it in two conditions. One is a standard case where someone behaves unfairly to someone else and then that person can punish them. Everyone has full information, and the guy who's unfair knows that he's being punished.

Then we added another condition, where we give people the opportunity to punish in secret— hidden punishment. You can punish someone without them knowing that they've been punished. They still suffer a loss financially, but because we obscure the size of the stake, the guy who's being punished doesn't know he's being punished. The punisher gets the satisfaction of knowing that the bad guy is getting less money, but there's no social norm being enforced.

What we find is that people still punish a lot in the hidden punishment condition. Even though people will punish a little bit more when they know the guy who's being punished will know that he's being punished—people do care about norm enforcement—a lot of punishment behavior can be explained by a desire for the norm violator to have a lower payoff in the end. This suggests that punishment is potentially a bad way to study morality because the motivations behind punishment are, in large part, spiteful.

Another set of methods that we've used to look at morality in the lab and how it's shaped by neurochemistry is trolley problems—the bread and butter of moral psychology research. These are hypothetical scenarios where people are asked whether it's morally acceptable to harm one person in order to save many others.

We do find effects of neuromodulators on these scenarios and they're very interesting in their own right. But I've found this tool unsatisfying for the question that I'm interested in, which is: How do people make moral decisions with real consequences in real time, rather than in some hypothetical situation? I'm equally unsatisfied with economic games as a tool for studying moral decision making because it's not clear that there's a salient moral norm in something like cooperation in a public goods game, or charitable giving in a dictator game. It's not clear that people feel guilty if they choose the selfish option in these cases.

After all this I've gone back to the drawing board and thought about what the essence of morality is. There's been some work on this in recent years. One wonderful paper by Kurt Gray, Liane Young, and Adam Waytz argues that the essence of morality is harm, specifically intentional interpersonal harm—an agent harming a patient. Of course morality is more than this; absolutely, morality is more than this. But it will be hard to find a moral code that doesn't include some prohibition against harming someone else unless you have a good reason.

What I wanted to do was create a measure in the lab that can precisely quantify how much people dislike causing interpersonal harms. What we came up with was getting people to make tradeoffs between personal profits—money—and pain in the form of electric shocks that are given to another person.

What we can do with this method is calculate, in monetary terms, how much people dislike harming others. And we can fit computational models to their decision process that give us a rich picture of how people make these decisions -- not just how much harm they're willing to deliver or not -- but what is the precise value they place on the harm of others relative to, for example, harm to themselves? What is the relative certainty or uncertainty with which they're making those decisions? How noisy are their choices? If we're dealing with monetary gains or losses, how does loss aversion factor into this?
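A minimal version of such a model trades off money against shocks with a single harm-aversion parameter and a softmax choice rule; the exact parameterization and the grid-search fit below are illustrative sketches, not the published model:

```python
import numpy as np

def p_choose_harmful(kappa, beta, d_money, d_shocks):
    """Probability of choosing the more harmful option, which pays
    d_money extra but delivers d_shocks extra shocks to the other
    person. kappa in [0, 1] is harm aversion (0 = cares only about
    money); beta is softmax inverse temperature (choice noisiness)."""
    dv = (1 - kappa) * d_money - kappa * d_shocks  # utility difference
    return 1.0 / (1.0 + np.exp(-beta * dv))

def fit_kappa(choices, d_money, d_shocks, beta=1.0):
    """Grid-search maximum likelihood estimate of kappa from observed
    choices (1 = chose the harmful option), holding beta fixed."""
    best_kappa, best_ll = None, -np.inf
    for k in np.linspace(0.01, 0.99, 99):
        p = p_choose_harmful(k, beta, d_money, d_shocks)
        ll = np.sum(choices * np.log(p) + (1 - choices) * np.log(1 - p))
        if ll > best_ll:
            best_kappa, best_ll = k, ll
    return best_kappa
```

The fitted kappa expresses, in monetary terms, how much a participant dislikes harming the other person, and the beta parameter captures how noisy their choices are, which is exactly the kind of decomposition described above.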

We can get a more detailed picture of the data and of the decision process from using methods like these, which are largely inspired by work on non-social decision making and computational neuroscience where a lot of progress has been made in recent years. For example, in foraging environments how do people decide whether to go left or right when there are fluctuating reward contingencies in the environment?

What we're doing is importing those methods to the study of moral decision making and a lot of interesting stuff has come out of it. As you might expect there is individual variation in decision making in this setting. Some people care about avoiding harm to others and other people are like, "Just show me the money, I don't care about the other person." I even had one subject who was almost certainly high on the psychopathy scale. When I explained to him what he had to do he said, "Wait, you're going to pay me to shock people? This is the best experiment ever!" Whereas other people are uncomfortable and are even distressed by this. This is capturing something real about moral decision making.

One thing that we're seeing in the data is that people who seem to be more averse to harming others are slower when they're making their decisions. This is an interesting contrast to Dave's work where the more prosocial people are faster. Of course there are issues that we need to work out about correlation versus causation in response times and decision making, but there are some questions here in thinking about the differences between a harm context and a helping context. It may be that the heuristics that play out in a helping context come from learning about what is good and latch onto neurobiological systems that approach rewards and get invigorated when there are rewards around, in contrast to neurobiological systems that avoid punishments and slow down or freeze when there are punishments around.

In the context of tradeoffs between profit for myself and pain for someone else, it makes sense that people who are maximizing the profit for themselves are going to be faster because if you're considering the harm to someone else, that's an extra computational step you have to take. If you're going to factor in someone else's suffering—the negative externality of your decisions—you have to do that computation and that's going to take a little time.

In this broader question of the time course of moral decision making, there might be a sweet spot where on the one hand you have an established heuristic of helping that's going to make you faster, but at the same time considering others is also a step that requires some extra processing. This makes sense.

When I was developing this work in London I was walking down the street one day checking my phone, as we all do, and this kid on a bike in a hoodie came by and tried to steal my phone. He luckily didn't get it, it just crashed to the floor -- he was an incompetent thief. In thinking about what his thought process was during that time, he wasn't thinking about me at all. He had his eye on the prize. He had his eye on the phone, he was thinking about his reward. He wasn't thinking about the suffering that I would feel if I lost my phone. That's a broader question to think about in terms of the input of mentalizing to moral decision making.

Another observation is that people who are nicer in this setting seem to be more uncertain in their decision making. If you look at the parameters that describe uncertainty, you can see that people who are nicer seem to be more noisy around their indifference point. They waver more in these difficult decisions.

So I've been thinking about uncertainty and its relationship to altruism and social decision making, more generally. One potentially fruitful line of thought is that social decisions—decisions that affect other people—always have this inherent element of uncertainty. Even if we're a good mentalizer, even if we're the best possible mentalizer, we're never going to fully know what it is like to be someone else and how another person is going to experience the effects of our actions on them.

One thing that it might make sense to do if we want to co-exist peacefully with others is we simulate how our behavior is going to affect others, but we err on the side of caution. We don't want to impose an unbearable cost on someone else so we think, "Well, I might dislike this outcome a certain amount but maybe my interaction partner is going to dislike it a little more so I'm just going to add a little extra safety—a margin of error—that's going to move me in the prosocial direction." We're seeing this in the context of pain but this could apply to any cost—risk or time cost.

Imagine that you have a friend who is trying to decide between two medical procedures. One procedure produces the most desirable outcome, but it also has a high complication or a high mortality rate. Another procedure doesn't achieve as good an outcome but it's much safer. Suppose your friend says to you, "I want you to choose which procedure I'm going to have. I want you to choose for me." First of all, most of us would be very uncomfortable making that decision for someone else. Second, my intuition is that I would definitely go for the safer option, because if something bad happened with the riskier procedure, I would feel terrible.

The idea that we can't directly access someone else's utility function is rather old; it goes back to the 1950s and the work of John Harsanyi on what he called interpersonal utility comparisons. How do you compare one person's utility to another person's utility? This problem is important, particularly in utilitarian ethics, because if you want to maximize the greatest good for the greatest number, you have to have some way of measuring the greatest good for each of those numbers.

The challenge of doing this was recognized by the father of utilitarianism, Jeremy Bentham, who said, "'Tis vain to talk of adding quantities which after the addition will continue to be as distinct as they were before; one man's happiness will never be another man's happiness: a gain to one man is no gain to another: you might as well pretend to add 20 apples to 20 pears."

This problem has still not been solved. Harsanyi has done a lot of great work on this but what he ended up with—his final solution—was still an approximation that assumes that people have perfect empathy, which we know is not the case. There's still room in this area for exploration.

The other thing about uncertainty is that, on one hand it could lead us towards prosocial behavior, but on the other hand there's evidence that uncertainty about outcomes, and about how other people react to those outcomes, can license selfish behavior. Uncertainty can also be exploited for personal gain, in the service of self-serving interests.

Imagine you're the CEO of a company. You're trying to decide whether to lay off some workers in order to increase shareholder value. If you want to do the cost benefit analysis, you have to calculate what's the negative utility for the workers of losing their jobs and how does that compare to the positive utility of the shareholders for getting these profits? Because you can't directly access how the workers are going to feel, and how the shareholders are going to feel, there's space for self-interest to creep in, particularly if there are personal incentives to push you one direction or the other.

There's some nice work that has been done on this by Roberto Weber and Jason Dana who have shown that if you put people in situations where outcomes are ambiguous, people will use this to their advantage to make the selfish decision but still preserve their self-image as being a moral person. This is going to be an important question to address. When does uncertainty lead to prosocial behavior because we don't want to impose an unbearable cost on someone else? And when does it lead to selfish behavior because we can convince ourselves that it's not going to be that bad?

These are things we want to be able to measure in the lab, and to map different brain processes—different neurochemical systems—onto these different parameters that all feed into decisions. We're going to see progress over the next several years because in non-social computational neuroscience there are smart people who are mapping how basic decisions work. All people like me have to do is import those methods to the study of more complex social decisions. There's going to be a lot of low-hanging fruit in this area over the next few years.

It's going to be a while before we figure out how all this works. I've been misquoted sometimes as saying that morality pills are just around the corner, and I assure you that this is not the case. It's going to be a very long time before we're able to intervene in moral behavior, and that day may never come. The reason this is such a complicated problem is that working out how the brain does this is the easy part. The hard part is what to do with that. This is a philosophical question. If we figure out how all the moving parts work, then the question is: should we intervene, and if so, how should we intervene?

Imagine we could develop a precise drug that amplifies people's aversion to harming others. On this drug you won't hurt a fly; everyone taking it becomes like a Buddhist monk. Who should take this drug? Only convicted criminals—people who have committed violent crimes? Should we put it in the water supply? These are normative questions. These are questions about what should be done. I feel grossly unprepared to answer these questions with the training that I have, but these are important conversations to have between disciplines. Psychologists and neuroscientists need to be talking to philosophers about this. These are conversations that we need to have because we don't want to get to the point where we have the technology but haven't had this conversation, because then terrible things could happen.

The last thing that I'll say is it's also interesting to think about the implications of this work, the fact that we can shift around people's morals by giving them drugs. What are the implications of this data for our understanding of what morality is?

There's increasing evidence now that if you give people testosterone or influence their serotonin or oxytocin, this is going to shift the way they make moral decisions. Not in a dramatic way, but in a subtle yet significant way. And because the levels and function of our neuromodulators are changing all the time in response to events in our environment, that means that external circumstances can play a role in what you think is right and what you think is wrong.

Many people may find this to be deeply uncomfortable because we like to think of our morals as being core to who we are and one of the most stable things about us. We like to think of them as being written in stone. If this is not the case, then what are the implications for our understanding of who we are and what we should think about in terms of enforcing norms in society? Maybe you might think the solution is we should just try to make our moral judgments from a neutral stance, like the placebo condition of life. That doesn't exist. Our brain chemistry is shifting all the time so it's this very unsteady ground that we can't find our footing on.

At the end of the day that's how I try to avoid being an arrogant scientist who's like, "I can measure morality in the lab." I have deep respect for the instability of these things and these are conversations that I find deeply fascinating.

THE REALITY CLUB

L.A. PAUL: I had a question about how you want to think about these philosophical issues. Sometimes they get described as autonomy. You said if we could discover some chemical that would improve people’s moral capacities, do we put it in the water? The question I have is a little bit related to imaginability. In other words, the guy who tried to steal your phone. The thought was: If he were somehow better able to imagine how I would respond, he would somehow make maybe a better moral judgment. There’s an interesting normative versus descriptive question there because on the one hand, it might be easier to justify putting the drug in the water if it made people better at grasping true moral facts.

What if it just made them better at imagining various scenarios so that they acted in a morally better way, but in fact it had no connection at all to reality, it just made their behavior better. It seems like it’s important to make that distinction even with the work that you’re doing. Namely, are you focusing on how people actually act or are you focusing on the psychological facts? Which one are we prioritizing and which one are we using to justify whatever kinds of policy implications?

CROCKETT: This goes back to the question of do we want to be psychologists or economists if we're confronted with a worldly, all-powerful being. I am falling squarely in the psychologist camp in that it's so important to understand the motivations behind why people do the things they do -- because if you change the context, then people might behave differently. If you're just observing behavior and you don't know why that behavior occurs, then you could make incorrect predictions.

Back to your question, one thought that pops up is it's potentially less controversial to enhance capabilities that people think about as giving them more competence in the world.

PAUL: There's interesting work on organ donors in particular. When people are recruiting possible organ donors and they're looking at the families who have to make the decision, it turns out that you get better results by encouraging the family of a potential donor (say, a daughter killed in a car accident) to imagine that the recipient of the organ will be 17 and also loves horses. It could just be some dude with a drug problem who's going to get the organ, but the measured results for the donating family are much better if that family engages in this fictitious imagining, even though it has no connection at all to the truth. It's not always simple. In other words, the moral questions sometimes come apart from the desired empirical result.

CROCKETT: One way that psychologists and neuroscientists can contribute to this discussion is to be as specific and precise as possible in understanding how to shape motivation versus how to shape choices. I don't have a good answer about the right thing to do in this case, but I agree that it is an important question.

DAVID PIZARRO: I have a good answer. This theme was something that was emerging at the end with Dave’s talk about promoting behavior versus understanding the mechanisms. There is—even if you are a psychologist and you have an interest in this—a way in which, in the mechanisms, you could say, "I’m going to take B.F. Skinner’s learning approach and say what I care about is essentially the frequency of the behavior. What are the things that I have to do to promote the behavior that I want to promote?"

You can get these nice, manipulated contingencies in the environment between reward and punishment. Does reward work better than punishment?

I want to propose that we have two very good intuitions, one, which should be discarded when we’re being social scientists, is what do we want our kids to be like? I want my kid to be good for the right reasons. In other words, I want her to develop a character that I can be proud of and that she can be proud of. I want her to donate to charity not because she’s afraid that if she doesn’t people will judge her poorly but because she genuinely cares about other people.

When I'm looking at society, and the more and more work that we do that might have implications for society, we should set aside those concerns. That is, we should be comfortable saying that there is one question about what the right reasons are and what the right motivations are in a moral sense. There's another question that we should ask from a public policy perspective: what will maximize the welfare of my society? I don't give a rat's ass why people are doing it!

It shouldn't make a difference if you're doing it because you're ashamed (like Jennifer might be talking about later): "I want to sign up for the energy program because I will get mocked by my peers," or if you're doing it because you realize this is a calling that God gave to you—to insert this little temperature reducer during California summers. That "by any means necessary" approach that seems so inhuman to us as individuals is a perfectly appropriate strategy to use when we're making decisions for the public.

CROCKETT: Yes, that makes sense and it's a satisficing approach rather than a maximizing approach. One reason why we care about the first intuition so much is because in the context in which we evolved, which was small group interactions, someone who does a good thing for the right reasons is going to be more reliable and more trustworthy over time than someone who does it for an externally incentivized reason.

PIZARRO: And it may not be true, right? It may turn out to be wrong.

DAVID RAND: That's right, but I think it’s still true that it’s not just about when you were in a small group—hunter-gatherer—but in general: if you believe something for the right reason, then you’ll do it even if no one is watching. That creates a more socially optimal outcome than if you only do it when someone is watching.

PIZARRO: It’s an empirical question though. I don't know if it’s been answered. For instance, the fear of punishment.

RAND: We have data, of a flavor. If you look at people who cooperate in repeated prisoner's dilemmas, they're no more or less likely to cooperate in one-shot games, and they're no more likely to give in a dictator game. When the rule is in place, everybody cooperates regardless of whether they're selfish or not. When no incentive is there, selfish people go back to being selfish.

SARAH-JAYNE BLAKEMORE: There's also data from newsagents in the UK, where sometimes you can take a newspaper and put money in a slot: if you put a couple of eyes above the money slot, people are more likely to pay their dues than if you don't put any eyes there.

PIZARRO: That’s certainly not acting for the right reason. That can’t be the right reason.

RAND: You were bringing up the issue of thinking about the consequences for yourself versus the other person. When we're thinking about how these decisions get made, there are two stages that are distinct but get lumped together a lot conceptually and measurement-wise. You have to understand what the options are, and then once you know what the options are, you have to choose which one you prefer. It seems to me that automatic versus deliberative processing has opposite roles in those two domains. Obviously, to understand the problem you have to think about it. If you're selfish, you don't need to spend time thinking about the decision because it's obvious what to do. We try to separate those things by explaining the decision beforehand, when you're not constrained. Then when it comes time to make the decision, you put people under time pressure.

CROCKETT: That can explain what's going on and that's a good point because these ideas about uncertainty and moral wiggle room, those are going to play the biggest role in the first part—in the construing of the problem. Is this a moral decision or is this not a moral decision? Potentially also playing the biggest role is this idea you were talking about earlier about how do people internalize what is the right thing to do? How do you establish that this is the right thing to do?

We should talk more about this because, methodologically, this is important to separate out.

HUGO MERCIER: Can I say something about this issue of mentalizing? You're right in drawing attention to the importance of mentalizing in making moral decisions or moral judgments. But the data seem to indicate that we're not very good at it: we have biases, and we tend not to do well when we think about what might have caused other people's behavior.

The reason is that in everyday life, as contrasted with many experimental settings, we can talk to people. If you do something that I think is bad, we know from data on how people explain themselves that you're going to spontaneously tell me why you did it and try to justify yourself. I don't have to do the work of trying to figure out why you did it, or what kind of excuse you might have had, because you're going to do it for me. Then we set up these experiments in which you don't have this feedback, and it's just weird. It's not irrelevant, because there are many situations in which that happens as well, but we have to keep in mind that it is unnatural. In most of these games and most of these experiments, if you could just let people talk, they would find a good solution. The thing with the shocks: if people could talk with each other, you could say, "Well, I'm happy to take the shock if you want to share the money." Then again, I'm not saying it's not interesting to do the experiments at all, but we have to keep in mind that it's kind of weird.

CROCKETT: That's true to a certain extent. A lot of moral decisions, particularly in the cooperation domain out in the real world, do usually involve some sort of communication. Increasingly, however, a lot of moral decisions are individual in the sense that they involve someone who is not there. If you're deciding whether to buy a product that is fair trade or not, or if you're a politician making a decision about a health policy, that decision is going to affect hundreds, thousands, millions of people who are not there. Some of the most wide-reaching moral decisions are made by an individual who does not see those who are going to bear the consequences of that decision. It's important to study both.

MERCIER: Maybe by realizing that the context in which these mechanisms of mentalizing evolved was one in which you had a huge amount of feedback can help us to better understand what happens when we don’t have this feedback.

CROCKETT: Maybe that's why we see selfish behavior: we're used to having an opportunity to justify it, and now there are many cases in which you don't have to justify it.

FIERY CUSHMAN: One of the things that’s unique and cool about your research is the focus on neuromodulators, whereas most research on how the brain processes morality has been on neural computation. Obviously, those things are inter-related. I guess I’ve always been, I don't know if confused is the right word, about what neuromodulators are for. It seems like neural computation can be incredibly precise. You can get a Seurat or a Vermeer out of neural computation, whereas neuromodulators give you Rothkos and Pollocks.

Why does the brain have such blunt tools? How does thinking about neuromodulators, in particular as a very blunt but also very wide-ranging tool, inform your thinking about their role in moral judgment as opposed, again, to neural computation?

CROCKETT: It's important to distinguish between the tools we have as researchers for manipulating neuromodulators, which are incredibly blunt, versus the way that these systems work in the brain, which are extremely precise. The serotonin system, for example, has at least 17 different kinds of receptors. Those receptors do different things and they're distributed differentially in the brain. Some types of receptors are only found subcortically and other receptors have their highest concentration in the medial prefrontal cortex. There's a high degree of precision in how these chemicals can influence brain processing in more local circuits.

To answer the first part of your question, these systems exist because cognition is not a one-size-fits-all kind of program. Sometimes you want to be more focused on local details at the exclusion of the bigger picture. Other times you want to be able to look at the bigger picture at the exclusion of small details. Whether you want to be processing in one way or the other is going to depend profoundly on the environmental context.

If you're in a very stressful situation, you want to be focusing your attention on how to get out of that situation. You don't want to be thinking about what you're going to have for breakfast tomorrow. Conversely if things are chilled out, that's the time when you can engage in long-term planning. There's evidence that things like stress, environmental events, events that have some important consequence for the survival of the organism are going to activate these systems which then shape cognition in such a way that's adaptive. That's the way that I think about neuromodulators.

Serotonin is interesting in this context because it's one of the least well understood in terms of how this works. In the stress example I was talking about, noradrenaline and cortisol are understood fairly well. Noradrenaline is stimulated by stress; it increases the signal-to-noise ratio in the prefrontal cortex and focuses your attention.

Serotonin does tons of different things, but it is one of the very few, if not the only, major neuromodulator that can only be synthesized if you continually have nutritional input. You make serotonin from tryptophan, which is an amino acid that you can only get from the diet. You can only get it from eating foods that have tryptophan, which is most foods, but especially high-protein foods. If you're in a famine, you're not going to be making as much serotonin.

This is interesting in an evolutionary context because when does it make sense to cooperate and care about the welfare of your fellow beings? When resources are abundant, then that's when you should be building relationships. When resources are scarce, maybe you want to be looking out for yourself, although there are some interesting wrinkles in there that Dave and I have talked about before where there could be an inverted U-shaped function where cooperation is critical in times of stress.

Perhaps one function of serotonin is to shape our social preferences in such a way that's adaptive to the current environmental context.

