
How can automatic processing be estimated in an editing task?

Is there a way of measuring or estimating the amount of controlled vs. automatic processing that takes place during a text-editing task?

I know that asking participants to verbalise their actions and treating non-verbalised processes as automatic is a flawed approach. I came across the process-dissociation procedure, but this seems to refer to automatic/unconscious recollection after a learning task has taken place, not to automatic processes during the task itself.

Any suggestions of references or ideas on how to go about this?


What do you mean exactly by 'processes'? If you're thinking about the actual movements made during editing (i.e. typing), this Science paper might help. Very simply put, they showed that skilled typists have two kinds of control processes, one conscious and one unconscious.

(Science 29 October 2010: Vol. 330 no. 6004 pp. 683-686 DOI: 10.1126/science.1190483)


How to generate a Time Tracking Report in Jira

These are the simple steps to creating a time tracking report in Jira for a specific project; this report is available as standard in Jira.

  1. Go to ‘Reports’, then ‘Forecast & management’, then ‘Time Tracking Report’.
  2. Select the fix version that you want a report on via the dropdown menu.
  3. Select how you want the issues to be sorted. You can sort by:
    1. ‘Least completed issues’ – issues with the highest estimated time remaining first.
    2. ‘Most completed issues’ – issues with the lowest estimated time remaining first.
  4. Choose which sub-tasks to include:
    1. ‘Only include sub-tasks with the selected version’
    2. ‘Also include sub-tasks without a version set’
    3. ‘Include all sub-tasks’ (all sub-tasks, regardless of version).

    Automatic versus Controlled Cognition

    A good part of both cognition and social cognition is spontaneous or automatic. Automatic cognition refers to thinking that occurs out of our awareness, quickly, and without taking much effort (Ferguson & Bargh, 2003; Ferguson, Hassin, & Bargh, 2008). The things that we do most frequently tend to become more automatic each time we do them, until they reach a level where they don’t really require us to think about them very much. Most of us can ride a bike and operate a television remote control in an automatic way. Even though it took some work to do these things when we were first learning them, it just doesn’t take much effort anymore. And because we spend a lot of time making judgments about others, many of these judgments, which are strongly influenced by our schemas, are made quickly and automatically (Willis & Todorov, 2006).

    Because automatic thinking occurs outside of our conscious awareness, we frequently have no idea that it is occurring and influencing our judgments or behaviors. You might remember a time when you returned home, unlocked the door, and 30 seconds later couldn’t remember where you had put your keys! You know that you must have used the keys to get in, and you know you must have put them somewhere, but you simply don’t remember a thing about it. Because many of our everyday judgments and behaviors are performed automatically, we may not always be aware that they are occurring or influencing us.

    It is of course a good thing that many things operate automatically because it would be extremely difficult to have to think about them all the time. If you couldn’t drive a car automatically, you wouldn’t be able to talk to the other people riding with you or listen to the radio at the same time—you’d have to be putting most of your attention into driving. On the other hand, relying on our snap judgments about Bianca—that she’s likely to be expressive, for instance—can be erroneous. Sometimes we need to—and should—go beyond automatic cognition and consider people more carefully. When we deliberately size up and think about something, for instance, another person, we call it controlled cognition. Although you might think that controlled cognition would be more common and that automatic thinking would be less likely, that is not always the case. The problem is that thinking takes effort and time, and we often don’t have too much of those things available.

    In the following Research Focus, we consider an example of automatic cognition in a study that uses a common social cognitive procedure known as priming, a technique in which information is temporarily brought into memory through exposure to situational events, which can then influence judgments entirely out of awareness.

    Research Focus

    Behavioral Effects of Priming

    In one demonstration of how automatic cognition can influence our behaviors without us being aware of them, John Bargh and his colleagues (Bargh, Chen, & Burrows, 1996) conducted two studies, each with the exact same procedure. In the experiments, they showed college students sets of five scrambled words. The students were to unscramble the five words in each set to make a sentence. Furthermore, for half of the research participants, the words were related to the stereotype of elderly people. These participants saw words such as “in Florida retired live people” and “bingo man the forgetful plays.”

    The other half of the research participants also made sentences but did so out of words that had nothing to do with the elderly stereotype. The purpose of this task was to prime (activate) the schema of elderly people in memory for some of the participants but not for others.

    The experimenters then assessed whether the priming of elderly stereotypes would have any effect on the students’ behavior—and indeed it did. When each research participant had gathered all his or her belongings, thinking that the experiment was over, the experimenter thanked him or her for participating and gave directions to the closest elevator. Then, without the participant knowing it, the experimenters recorded the amount of time that the participant spent walking from the doorway of the experimental room toward the elevator. As you can see in Figure 2.8, “Automatic Priming and Behavior,” the same results were found in both experiments—the participants who had made sentences using words related to the elderly stereotype took on the behaviors of the elderly—they walked significantly more slowly (in fact, about 12% more slowly across the two studies) as they left the experimental room.

    Figure 2.8 Automatic Priming and Behavior. In two separate experiments, Bargh, Chen, and Burrows (1996) found that students who had been exposed to words related to the elderly stereotype walked more slowly than those who had been exposed to more neutral words.

    To determine if these priming effects occurred out of the conscious awareness of the participants, Bargh and his colleagues asked a third group of students to complete the priming task and then to indicate whether they thought the words they had used to make the sentences had any relationship to each other or could possibly have influenced their behavior in any way. These students had no awareness of the possibility that the words might have been related to the elderly or could have influenced their behavior.

    The point of these experiments, and many others like them, is clear—it is quite possible that our judgments and behaviors are influenced by our social situations, and this influence may be entirely outside of our conscious awareness. To return again to Bianca, it is even possible that we notice her nationality and that our beliefs about Italians influence our responses to her, even though we have no idea that they are doing so and really believe that they have not.


    Emotion and cognition: The case of automatic vigilance

    In St. Louis we are celebrating the bicentennial of the Lewis and Clark expedition. Captain Lewis made an entry in his journal that nicely illustrates the interaction between emotion and cognition. He describes how he was traveling alone one day, well ahead of his corps, to determine the best route. Suddenly he was surprised by an aggressive grizzly bear charging at him from out of the bush, and he narrowly escaped by jumping into a river. After the bear withdrew, Lewis made his way back to his troops, a distance of about 12 miles. Along the way, he noticed a variety of other animals, most of which he perceived as threatening and several of which he shot preemptively. In his journal he describes feeling surrounded by danger: "It now seemed to me that all the beasts of the neighborhood had made a league to destroy me" (Bakeless, 2002, p. 187). The editor of the Lewis and Clark journals, in a footnote to this passage, notes that the animals Lewis encountered along the way were not typically considered aggressive or dangerous, and opines that Lewis was probably nervous after his frightening encounter with the grizzly.

    Automatic Vigilance Following Threatening Information
    The example from Lewis' journal illustrates the phenomenon of automatic vigilance, where emotional cues in the environment bias subsequent information processing. More precisely, the detection of threatening information can interrupt ongoing cognitive activity in ways that tune subsequent perception, attention, judgment, and even memory towards threat-related outcomes. One experimental analog of automatic vigilance is affective priming (Klauer, 2003), particularly priming with threatening stimuli. Here a threatening image or word is briefly presented (the prime) and quickly followed by another stimulus (the target) to which the subject responds (e.g., makes a lexical decision, categorizes it as a good or bad object, etc.). Automatic vigilance occurs when a negatively valenced target stimulus (e.g., an image of a COCKROACH) is categorized faster and/or more accurately when it is preceded by a threatening prime stimulus (e.g., the word DISEASE) than by a hedonically neutral prime stimulus (e.g., the word DISHPAN) (Hermans, De Houwer, & Eelen, 2001).
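The affective-priming effect just described comes down to a difference in mean reaction times to the same negative target under threatening versus neutral primes. A minimal sketch of that contrast (the reaction times below are invented purely for illustration, not data from the cited studies):

```python
from statistics import mean

# Hypothetical reaction times (ms) to a negative target (e.g., a COCKROACH image).
rt_after_threat_prime = [512, 498, 530, 505]    # target preceded by DISEASE
rt_after_neutral_prime = [548, 560, 541, 555]   # target preceded by DISHPAN

# A positive difference means faster categorization after the threatening
# prime, i.e., the automatic vigilance pattern.
priming_effect_ms = mean(rt_after_neutral_prime) - mean(rt_after_threat_prime)
```

Real studies would of course add trial-level counterbalancing and inferential statistics; the point here is only the direction of the contrast.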

    Researchers suggest that the presentation of an evaluative or threatening prime may automatically activate biased perceptions of emotionally congruent targets (Fazio, Jackson, Dunton, & Williams, 1995). The explanation for this effect is that, when confronted with a threatening stimulus, people typically devote increased attentional resources to that stimulus, raising the accessibility of evaluatively similar information in memory and biasing subsequent perceptions and judgments toward a threatening evaluation (Klauer, 2003; Wentura & Rothermund, 2003).

    Other Examples of Automatic Vigilance Effects
    Some researchers have identified the emotional Stroop task as an example of automatic vigilance (e.g., Pratto & John, 1991; Wentura, Rothermund, & Bak, 2000). In this task, subjects are asked to quickly name the colors of various words, some of which are threatening (e.g., DISEASE) and others neutral (e.g., DISHPAN). In general, people are slower to name the colors of threatening words than of neutral words. However, a crucial problem with many of these studies is that the two word lists - threat and control words - often differ with respect to critical linguistic parameters known to contribute to reaction time differences in word recognition. For example, Larsen, Mercer, and Balota (2004a) showed that, across 34 emotion Stroop studies, the threatening words used were less frequent, longer, or had larger orthographic neighborhoods than the control words. All of these purely linguistic features contribute to slower recognition of the threatening words, casting doubt on the claim that the emotional Stroop effect is due to automatic vigilance to the threat value of the words.

    In a recent paper, Algom, Chajut, and Lev (2004) reasoned that automatic vigilance should not be limited to color naming of words but should apply to any cognitive activity. In a series of carefully designed experiments, they demonstrated that both color naming and word reading were slower for threatening than for control words. Larsen, Mercer, and Balota (2004b) recently analyzed lexical decision time and word reading time for a list of over 1,000 words that had been previously normed for valence (Bradley & Lang, 1996). After controlling for important linguistic parameters (e.g., frequency, length, orthographic neighborhood), Larsen et al. (2004b) found that word negativity was still a significant predictor of longer reaction times, both for lexical decisions and for word reading. Cothran, Larsen, Zelenski, Prizmic, & Chein (2004) demonstrated automatic vigilance effects in the recognition of facial displays of emotion.

    There is converging evidence from a number of literatures on the existence of automatic vigilance effects. The general interpretation is that a dedicated preattentive system operates in an automatic fashion to screen the perceptual stream for threatening information (Ohman, 1993). When such information is detected, ongoing cognitive activity is interrupted (accounting for the generic slowing) and reprioritized to be biased for future threatening information (accounting for the priming effects). Such a system would have obvious evolutionary advantages, in that humans who lacked such a system would be less likely to become ancestors.

    Utility of Automatic Vigilance for Studying other Phenomena
    As an aspect of psychological functioning, the automatic vigilance effect is interesting in its own right. However, it also has utility for studying other psychological phenomena. In the remainder of this article I describe two areas where the concept of automatic vigilance may contribute to our understanding of other phenomena.

    Understanding stereotype activation. In a series of important experiments, Payne (2001; Payne, Lambert, & Jacoby, 2002) tried to clarify what factors contributed to the killing of Amadou Diallo, an unarmed Black immigrant from West Africa who was shot 19 times by several White New York police officers one night as he was retrieving his wallet from his pocket. In Payne's studies he used a priming procedure in which participants were primed for 200 ms with a photo of either a Black face or a White face, then immediately shown a drawing of a handgun or a hand tool for 100 ms. Participants then had 400 ms to decide whether the second object they saw was a hand tool or a handgun. Of course they made a lot of errors due to the speeded nature of the response. However, among predominantly White participants the errors were not random: participants were more likely to mistake the hand tool for a handgun following the Black face prime than following the White face prime. The dominant interpretation of this finding is that the prime activates stereotyped beliefs about what is associated with being Black or White (Judd, Blair, & Chapleau, 2004), making a "gun" response more likely following the Black prime than the White prime.
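The bias in Payne's paradigm can be quantified as the difference in tool-as-gun error rates conditional on the prime. A sketch over hypothetical trial records (the trial data are invented for illustration; only the analysis logic follows the description above):

```python
# Each trial: (prime, target, response). Data invented for illustration only.
trials = [
    ("black", "tool", "gun"), ("black", "tool", "tool"), ("black", "tool", "gun"),
    ("black", "gun", "gun"),
    ("white", "tool", "tool"), ("white", "tool", "gun"), ("white", "tool", "tool"),
    ("white", "gun", "gun"),
]

def tool_as_gun_rate(trials, prime):
    """P(respond 'gun' | target was a tool) following a given prime."""
    relevant = [t for t in trials if t[0] == prime and t[1] == "tool"]
    errors = sum(1 for _, _, resp in relevant if resp == "gun")
    return errors / len(relevant)

# A positive bias reproduces the reported pattern: more tool-as-gun
# errors after a Black face prime than after a White face prime.
bias = tool_as_gun_rate(trials, "black") - tool_as_gun_rate(trials, "white")
```

Payne's own analyses additionally used process-dissociation estimates of automatic and controlled components; the raw error-rate contrast above is the simplest summary of the effect.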

    Recently, we (Larsen, Chan, & Lambert, 2004) reasoned that automatic vigilance may have played a role both in Payne's results and in the shooting of Amadou Diallo. If Blacks (or other outgroup members) are threatening to majority participants, then priming with a Black face may activate automatic vigilance for future threat, making the gun response more likely. In an extension of Payne's gun/tool paradigm, we (Larsen, Chan, & Lambert, 2004) replaced the Black and White primes with photos of threatening animals (snakes, spiders) and non-threatening animals (bunnies, kittens). We found the same pattern of gun/tool bias: after being primed with a threatening animal, participants were more likely to mistake hand tools for handguns than after being primed with a non-threatening animal.

    In a second experiment we put the Black and White faces back in as primes, and moved the good and bad animals to the target position. Participants were given 400 ms to categorize the animals as either good or bad. We again found a bias consistent with an automatic vigilance effect: participants were more likely to mistake a good animal for a bad one following a Black prime than following a White prime. Obviously, the animals have nothing to do with stereotype associations to Blacks or Whites. However, they do have threat value, and so our subjects' biased processing following Black facial primes is consistent with an automatic vigilance effect. Besides clarifying the underlying mechanism of the gun/tool bias, the results have implications for interventions to counteract such biases: efforts to change stereotyped beliefs about outgroup members would look very different from efforts to change prejudiced emotional reactions toward them.

    Understanding why bad is stronger than good. From a number of quite different literatures there is converging evidence that stimuli of equal hedonic weight, but opposite in hedonic sign, evoke non-equivalent affective reactions (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001). For example, people are more distressed by the loss of $50 than they are made happy by finding $50. A few years ago I applied a psychophysics framework to this issue (Larsen, 2002). After all, emotion is a lot like perception, in which some aspect of the outer world (an emotional event, a sensory stimulus) is transformed into an inner representation (an affect, a sensation). Applying this framework to several data sets, I reviewed evidence that unpleasant events or stimuli, compared to equivalently pleasant events or stimuli, evoke larger and longer-duration emotional responses and have a broader impact on the cognitive system. Moreover, I estimated the impact of negative stimuli as being about three times that of positive stimuli.

    Why should bad be stronger than good? It seems likely that one function of the automatic vigilance system is to act as a signal gain mechanism for threatening information. The automatic vigilance system functions to amplify threatening information by directing cognitive resources, such as perception and attention, toward such information. There is no specialized counterpart that acts in such an automatic and preattentive fashion for positive stimuli. In fact, in studies of cognitive interference from affective meaning using the affective Simon task, we found that the interference effect size for negative stimuli was approximately three times as large as the interference effect size for positive stimuli (Larsen & Yarkoni, 2004). The automatic vigilance system may be an explanation for the ubiquitous finding that bad is stronger than good.

    Summary
    Cognition and emotion interact in various ways, and one of the more interesting and increasingly documented ways is the automatic vigilance effect. This phenomenon highlights differences between automatic and controlled psychological processes, in that the effect is purely automatic. Much like a reflex, it occurs very fast, happens without our awareness or effort, and runs to completion without conscious monitoring. And yet the effects may be far-reaching, as when automatic vigilance impacts on cognitive resources such as attention and memory. And the effects may be especially far-reaching when the elicitors of the vigilance, or the objects of its effect, are other people.

    Acknowledgments

    Preparation of this article, and some of the research reported, was supported in part by grant RO1-MH63732 from the National Institute of Mental Health.


    References

    Algom, D., Chajut, E., & Lev, S. (2004). A rational look at the emotional Stroop phenomenon: A generic slowdown, not a Stroop effect. Journal of Experimental Psychology: General, 133, 323-338.

    Bakeless, J. (2004). The journals of Lewis and Clark. New York: Signet.

    Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5, 323-370.

    Bradley, M. M., & Lang, P. J. (1996). Picture media and emotion: Effects of a sustained affective context. Psychophysiology, 33, 662-670.

    Cothran, D. L., Larsen, R. J., Zelenski, J., Prizmic, Z., & Chien, B. (2004). Do Emotion Words Interfere with Processing Emotion Faces? Stroop-Like Interference versus Automatic Vigilance for Negative Information. Manuscript under review.

    Fazio, R. H., Jackson, J. R., Dunton, B. C., & Williams, C. J. (1995). Variability in automatic activation as an unobtrusive measure of racial attitudes: A bona fide pipeline? Journal of Personality and Social Psychology, 69, 1013-1027.

    Hermans, D., De Houwer, J., & Eelen, P. (2001). A time course analysis of the affective priming effect. Cognition and Emotion, 15, 143-165.

    Judd, C. M., Blair, I. V., & Chapleau, K. M. (2004). Automatic stereotypes vs. automatic prejudice: Sorting out the possibilities in the Payne (2001) weapon paradigm. Journal of Experimental Social Psychology, 40, 75-81.

    Klauer, K. C. (2003). Affective priming: Findings and theories. In J. Musch and K.C. Klauer (Eds.), The psychology of evaluation: Affective processes in cognition and emotion (pp. 7-50). Mahwah, NJ: Erlbaum.

    Larsen, R. J. (2002). Differential contributions of positive and negative affect to subjective well-being. In J. A. Da Silva, E. H. Matsushima, & N. P. Ribeiro-Filho (Eds.), Annual meeting of the International Society for Psychophysics (Vol. 18, pp. 186-190). Rio de Janeiro, Brazil: Editora Legis Summa Ltda.

    Larsen, R. J., Chan, P. Y., & Lambert, A. (2004). Perceptual consequences of threat and prejudice: Misperceiving weapons and other dangerous objects. Manuscript under review.

    Larsen, R. J., Mercer, K., & Balota, D. (2004a). Lexical characteristics of words used in emotion Stroop studies. Manuscript under review.

    Larsen, R. J., Mercer, K., & Balota, D. (2004b). Lexical Characteristics and Word Recognition Parameters for Emotion Words: Effects of Word Negativity. Manuscript under review.

    Larsen, R. J., & Yarkoni, T. (2004). Negative stimuli cause more interference than positive stimuli in the affective Simon task. Manuscript under review.

    Ohman, A. (1993). Fear and anxiety as emotional phenomena: Clinical phenomenology, evolutionary perspectives, and information-processing mechanisms. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 511-536). New York: Guilford Press.

    Payne, B. K. (2001). Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 81, 181-192.

    Payne, B. K., Lambert, A. J., & Jacoby, L. L. (2002). Best laid plans: Effects of goals on accessibility bias and cognitive control in race-based misperceptions of weapons. Journal of Experimental Social Psychology, 38, 384-396.

    Pratto, F., & John, O. (1991). Automatic vigilance: The attention-grabbing power of negative social information. Journal of Personality and Social Psychology, 61, 380-391.

    Wentura, D., & Rothermund, K. (2003). The "meddling-in" of affective information: A general model of automatic evaluation. In J. Musch and K.C. Klauer (Eds.), The psychology of evaluation: Affective processes in cognition and emotion (pp. 51-86). Mahwah, NJ: Erlbaum.

    Wentura, D., Rothermund, K., & Bak, P. (2000). Automatic vigilance: The attention-grabbing power of approach- and avoidance-related information. Journal of Personality and Social Psychology, 78, 1024-1037.

    About the Author
    Randy J. Larsen earned his PhD in Personality Psychology from the University of Illinois-Champaign in 1984. He has served on the faculty at Purdue University (1984-1989), the University of Michigan (1989-1998), and Washington University in St. Louis (since 1998), where he is currently the William R. Stuckenberg Professor of Human Values and Chairman of the Psychology Department. His research interests focus on personality and emotion, with particular interests in emotional reactivity and mood regulation, process models of daily mood, and cognitive consequences of affective states. He has published over 80 scientific articles and book chapters and has co-authored (with David Buss) a text in Personality Psychology. Professor Larsen was awarded the 1991 APA Distinguished Scientific Award for Early Career Contribution to Personality Psychology, received a Research Scientist Development Award from the National Institute of Mental Health, and is a past president of the Midwestern Psychological Association.


    Comparing Self-Regulated Learning Models

    Next, the models will be compared along the following dimensions. First, the models' numbers of citations. Second, all the models are divided into different SRL phases and subprocesses, which are compared to extract conclusions. Third, SRL explores three main areas - (meta)cognition, motivation, and emotion - so the positioning of each within the six models is analyzed. Fourth, the SRL models present significant differences in three major aspects of conceptualization: top-down/bottom-up, automaticity, and context.

    Citations and Importance in the Field

    One word of advice before starting this section: the number of citations a model garners is an indicator that can be influenced by factors unrelated to the model's quality. Important innovations can come from models that have not received as many citations. Nevertheless, it is an interesting indicator from which to extract some conclusions.

    In Table 1, the number of citations per model is presented. The Efklides and SSRL models have a lower total number, as they were published recently. Nevertheless, they show promising numbers in citations per year, which indicates their relevance. The models of Boekaerts and of Winne and Hadwin form a second group according to their number of citations. It is important to point out that Boekaerts and Corno's (2005) study includes not only Boekaerts' model, but also information about Corno's and Kuhl's models and, especially, a reflection on SRL measurement. It is therefore only a partial representation of the citations of Boekaerts' model, but it is her most cited paper in which her model is presented. Winne and Hadwin's (1998) book is the most cited work regarding their model, but it is not the original presentation of their work, as discussed earlier. Finally, Pintrich's and Zimmerman's models, both presented in the 2000 handbook, have the highest numbers of citations, with Zimmerman's being the most cited.

    Table 1

    Number of citations of each SRL model's main publication.

    Model            | Publication               | Total citations | Citations/year∗
    Boekaerts        | Boekaerts and Corno, 2005 | 1011            | 84.25
    Efklides         | Efklides, 2011            | 251             | 41.83
    Hadwin et al.    | Hadwin et al., 2011       | 196             | 32.67
    Pintrich         | Pintrich, 2000            | 3416            | 200.94
    Winne and Hadwin | Winne and Hadwin, 1998    | 1037            | 54.58
    Zimmerman        | Zimmerman, 2000           | 4169            | 245.24
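The per-year column in Table 1 is simply total citations divided by the years elapsed since publication. The figures are consistent with counts taken through 2017 (an inference from the numbers themselves, not something stated in the text):

```python
# (total citations, publication year) for each model's main publication, from Table 1.
publications = {
    "Boekaerts and Corno, 2005": (1011, 2005),
    "Efklides, 2011": (251, 2011),
    "Hadwin et al., 2011": (196, 2011),
    "Pintrich, 2000": (3416, 2000),
    "Winne and Hadwin, 1998": (1037, 1998),
    "Zimmerman, 2000": (4169, 2000),
}

COUNT_YEAR = 2017  # assumed reference year; it reproduces every value in the table

citations_per_year = {
    pub: round(total / (COUNT_YEAR - year), 2)
    for pub, (total, year) in publications.items()
}
# e.g., 1011 / (2017 - 2005) = 84.25, matching the Boekaerts row
```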

    If we compare the citation numbers of the four older models, Pintrich's and Zimmerman's models have been more widely used than Boekaerts' and Winne and Hadwin's. There are two probable causes. One is that the former are more comprehensive and easier to understand and apply in classrooms (Dignath et al., 2008). With regard to this first cause, both Pintrich's and Zimmerman's models include a more complete vision of the different types of subprocesses. If we compare these four models' figures, it is salient that Zimmerman and Pintrich (a) present more specific subprocesses than Boekaerts and (b) include motivational and emotional aspects that are not directly presented by Winne and Hadwin. The second cause is that Boekaerts' model and Winne and Hadwin's are slightly less intuitive, and a deeper understanding of the underpinning theory is needed for their correct application. This is not to say that these two models are less relevant than the others; on the contrary, both cover in depth two critical aspects of SRL: emotion regulation and metacognition. Finally, Moos and Ringdal's (2012) review of the teacher's role in SRL in the classroom found that Zimmerman's model has been predominant in that line of research, as it offers "a robust explanatory lens" which might help the most when working with teachers, as proposed by those authors.

    Phases and Subprocesses

    All of the model authors agree that SRL is cyclical, composed of different phases and subprocesses. However, the models present different phases and subprocesses, and by identifying them we can extract some conclusions. In general terms, Puustinen and Pulkkinen's (2001) review concluded that the models they analyzed had three identifiable phases: (a) preparatory, which includes task analysis, planning, activation of goals, and goal setting; (b) performance, in which the actual task is done while monitoring and controlling the progress of performance; and (c) appraisal, in which the student reflects, regulates, and adapts for future performances. What is the conceptualization of SRL phases in the two added models? (see Table 2). First, Efklides (2011) does not clearly state an appraisal phase in her model, although she considers that the Person level is influenced after repeated performances of a task. Second, the SSRL model in its 2011 version, although strongly influenced by Winne and Hadwin's, presents four phases that are similar to Pintrich's but use different labels. Therefore, the SSRL model's classification in the table is the same one that Puustinen and Pulkkinen (2001) proposed for Pintrich's.

    Table 2

    Models                    | Preparatory phase                                                              | Performance phase                            | Appraisal phase
    Boekaerts                 | Identification, interpretation, primary and secondary appraisal, goal setting | Goal striving                                | Performance feedback
    Efklides                  | Task representation                                                            | Cognitive processing, performance            |
    Hadwin et al., 2011       | Planning                                                                       | Monitoring, control                          | Regulating
    Hadwin et al. (in press)∗ | Negotiating and awareness of the task                                          | Strategic task engagement                    | Adaptation
    Pintrich                  | Forethought, planning, activation                                              | Monitoring, control                          | Reaction and reflection
    Winne and Hadwin          | Task definition, goal setting and planning                                     | Applying tactics and strategies              | Adapting metacognition
    Zimmerman                 | Forethought (task analysis, self-motivation)                                   | Performance (self-control, self-observation) | Self-reflection (self-judgment, self-reaction)

    What can be concluded? Even though all of the models considered here, except Efklides', can be conceptualized around the three phases proposed by Puustinen and Pulkkinen, two conceptualizations of the SRL phases can be distinguished. First, some models emphasize a clearer distinction among the phases and the subprocesses that occur in each of them. Zimmerman's and Pintrich's models belong to this group, each having very distinct features for each phase. Those in the second group - the Winne and Hadwin, Boekaerts, Efklides, and SSRL (in its forthcoming version) models - convey more explicitly that SRL is an "open" process, with recursive phases, and not as delimited as in the first group. For example, Winne and Hadwin's figure does not make a clear distinction between the phases and the processes that belong to each: SRL is presented as a feedback loop that evolves over time. It is only in the text accompanying the figure that Winne and Hadwin (1998) clarified that they were proposing four phases.

    One implication of this distinctive difference could lie in how to intervene according to the different models. The first group of models might allow for more specific interventions, because measuring their effects might be more feasible. For example, if a teacher recognizes that one of her students has a motivation problem while performing a task, applying some of the subprocesses presented by Zimmerman at that particular phase (e.g., self-consequences) might have a positive outcome. On the other hand, the second group of models might suggest more holistic interventions, as they perceive SRL as a more continuous process composed of more interrelated subprocesses. This hypothesis, though, would need to be explored in the future.

    (Meta)cognition, Motivation, and Emotion

    Next, the three main areas of SRL activity and how each model conceptualizes them will be explored. The interpretation is guided by the models’ figures, as they reveal the most important SRL aspects for each author. A classification based on different levels for the three aforementioned areas is proposed (Table 3). It is important to clarify that the levels were conceptualized not as closed categories, but rather as positions on a continuum.

    Table 3

    Comparison of the models’ figures on cognition, motivation, and emotion.

    Levels of relevance | Cognition | Motivation | Emotion
    First (more emphasis) | Winne, Efklides, SSRL | Zimmerman, Boekaerts, Pintrich | Boekaerts
    Second | Pintrich, Zimmerman | SSRL, Efklides, Winne | Zimmerman/Pintrich, SSRL
    Third (less emphasis) | Boekaerts | – | Efklides, Winne

    (Meta)cognition

    Three levels are considered with regard to (meta)cognition. The first level includes models with a strong emphasis on (meta)cognition. The first model at this level is Winne and Hadwin’s, in which the predominant processes are metacognitive: “Metacognitive monitoring is the gateway to self-regulating one’s learning” (Winne and Perry, 2000, p. 540). Efklides’ model includes motivational and affective aspects, but the metacognitive ones are defined in more detail at the Task × Person level and are the ones with more substance. Finally, the SSRL model includes in its forthcoming version the COPES architecture from Winne and Hadwin. However, because the SSRL 2011 version did not emphasize (meta)cognition, it is located after the two more metacognitive models. At the second level are Pintrich’s and Zimmerman’s models. Pintrich (2000) incorporates the “regulation of cognition,” which has a central role along with aspects of metacognitive theory such as feelings of knowing (FOKs) and feelings of learning (FOLs). Zimmerman (2000) presents a number of leading cognitive/metacognitive strategies, but they are not emphasized over the motivational ones, as is the case for the models just discussed. At the third level, Boekaerts includes the use of (meta)cognitive strategies in her figures, but does not explicitly refer to specific strategies.

    Motivation

    A two-level classification is proposed. The Zimmerman, Boekaerts, and Pintrich models are at the first level. Zimmerman’s own definition of SRL explicitly states the importance of goals and presents SRL as a goal-driven activity. In his model, self-motivation beliefs are a crucial component of the forethought phase; the performance phase was originally described (Zimmerman, 2000) as performance/volitional control, which indicates how important volition is; and in the self-reflection phase, self-reactions affect the motivation to perform the task in the future. According to Boekaerts, students “interpret” the learning task and context, and then activate two different goal paths. Those pathways are the ones that lead the regulatory actions that the students do (or do not) activate (e.g., Boekaerts and Niemivirta, 2000). In addition, Boekaerts also included motivational beliefs in her models as a key aspect of SRL (see Figure 7). Finally, Pintrich (2000) also included a motivation/affect area in his model that considers aspects similar to those in Zimmerman’s, though Pintrich’s model places a greater emphasis on metacognition. It is also important to mention that Pintrich conducted the first research that explored the role of goal orientation in SRL (Pintrich and de Groot, 1990).

    The second level includes the SSRL, Efklides, and Winne and Hadwin models. SSRL included motivation in the 2011 version figure and emphasized its role in collaborative learning situations, but without differentiating motivational components in detail. Nevertheless, the authors have conducted a significant amount of research regarding motivation and its regulation at the group level (e.g., Järvelä et al., 2013). Finally, Winne and Hadwin (1998) and Efklides (2011) included motivation in their models, but it is not their main focus of analysis.

    Emotion

    Three levels are proposed. At the first level, Boekaerts (1991; Boekaerts and Niemivirta, 2000) emphasizes the influence of emotions on students’ goals and how this activates two possible pathways and different strategies. For Boekaerts, ego protection plays a crucial role in the well-being pathway, and for that reason it is essential for students to have strategies to regulate their emotions, so that they will instead activate the learning pathway. At the second level, Pintrich (2000) and Zimmerman (2000) shared similar interpretations of emotions. Both put the most emphasis on the reactions (i.e., attributions and affective reactions) that occur when students self-evaluate their work during the last SRL phase. In addition, both mentioned strategies to control and monitor emotions during performance: Pintrich discusses “awareness and monitoring” and “selection and adaptation of strategies to manage” emotions (Pintrich, 2000), and Zimmerman stated that imagery and self-consequences can be used by students to self-induce positive emotions (Zimmerman and Moylan, 2009). Nevertheless, in the preparatory phases, neither of them mentions emotions directly. Yet, Zimmerman argues that self-efficacy, which is included in his forethought phase, is a better predictor of performance at that phase than emotions or emotion regulation (Zimmerman, B. J., personal communication with the author, 28/02/2014). The SSRL model includes emotion in its 2011 version figure (Hadwin et al., 2011), but the subprocesses that underlie the regulation of emotion are not specified. Nonetheless, these authors clearly argue that collaborative learning situations present significant emotional challenges, and they have conducted empirical studies exploring this matter (e.g., Järvenoja and Järvelä, 2009; Koivuniemi et al., 2017).
    Finally, Efklides (2011) and Winne and Hadwin (e.g., Winne, 2011) mention the role of emotions in SRL [e.g., “[i]t may directly impact metacognitive experiences as in the case of the mood” (Efklides, 2011, p. 19), and she included it in her model at two levels]. However, they do not place a major emphasis on emotion-regulation strategies.

    Three Additional Areas for a Comparison

    As mentioned earlier, three additional areas in which the models present salient differences were identified.

    Top–Down/Bottom–Up (TD/BU)

    The first model that included this categorization of self-regulation was that of Boekaerts and Niemivirta (2000). Top–down is the mastery/growth pathway, in which the learning/task goals are more relevant for the student. On the other hand, bottom–up is the well-being pathway, in which students activate goals to protect their self-concept (i.e., self-esteem) from being damaged, also known as ego protection. Efklides (2011) also uses this categorization, but with different implications. For her, top–down regulation occurs when goals are set in accordance with the person’s characteristics (e.g., cognitive ability, self-concept, attitudes, emotions, etc.), and self-regulation is guided by those personal goals. Bottom–up occurs when the regulation is data-driven, i.e., when the specifics of performing the task (e.g., the monitoring of task progress) direct and regulate the student’s actions. In other words, the cognitive processes are the main focus when the student is trying to perform a task.

    The other models do not explore this categorization explicitly, although some implicit interpretations can be extracted. Thus, there could be a third vision of TD/BU, based on the interactive nature of Zimmerman’s and Winne and Hadwin’s models. Zimmerman (personal communication to the author, 27/02/2014) explained:

    Historically, top–down theories have been cognitive and have emphasized personal beliefs and mental processes as primary (e.g., Information Processing theories). By contrast, bottom–up theories have been behavioral and have emphasized actions and environments as primary (e.g., Behavior Modification theories). When Bandura (1977) developed social cognitive theory, he concluded that both positions were half correct: both were important. His theory integrates both viewpoints using a triadic depiction. I contend that his formulation is neither top–down [n]or bottom–up but rather interactionist, where cognitive processes bi-directionally cause and are caused by behavior and environment. My cyclical model of SRL elaborates these triadic components and describes their interaction in terms of repeated cycles of feedback. Thus, any variable in this model (e.g., a student’s self-efficacy beliefs) is subject to change during the next feedback cycle…. There are countless examples of people without goals who experience success in sport, music, art, or academia and subsequently develop strong goals in the process. Interactionist theories emphasize developing one’s goals as much as following them.

    Winne (personal communication to the author 27/02/2014) stated:

    I didn’t introduce this terminology because it is limiting. A vital characteristic of SRL is cycles of information flow rather than one-directional flow of information. Some cycles are internal to the person and others cross the boundary between person and environment.

    In sum, Zimmerman and Winne do not consider TD/BU to be applicable to their models, as the recursive cycles of feedback during performance generate self-regulation and changes in the specificity of the goals.

    As Pintrich’s (2000) model is goal-driven, it could be assumed that it conceptualizes top–down motivation as coming from personal characteristics, as proposed by Efklides (2011). Nevertheless, Pintrich also included goal orientation, which implicates performance and avoidance goals; the latter, especially, connect to Boekaerts’ well-being pathway. Therefore, it is difficult to discern with any precision what the interpretation of TD/BU would be for his model. The SSRL model (Hadwin et al., 2011) has not yet clarified this issue, though a stance similar to that of Winne and Hadwin could be presupposed.

    Automaticity

    In SRL, automaticity usually refers to underlying processes that have become an automatic response pattern (Bargh and Barndollar, 1996; Moors and De Houwer, 2006; Winne, 2011). It is frequently used to refer to (meta)cognitive processes: some authors maintain that, for SRL to occur, some processes must become automatic so that the student carries less cognitive load and can then activate strategies (e.g., Zimmerman and Kitsantas, 2005; Winne, 2011). However, it can also refer to motivational and emotional processes that occur without the student’s awareness (e.g., Boekaerts, 2011). Next, some quotations from the models on this topic will be presented to illustrate the different perspectives on automaticity. Winne (2011) stated:

    Most cognition is carried out without learners needing either to deliberate about doing it or to control fine-grained details of how it unfolds… Some researchers describe such cognition as “unconscious” but I prefer the label implicit. Because so much of cognitive activity is implicit, learners are infrequently aware of their cognition. There are two qualifications. First, cognition can change from implicit to explicit when errors and obstacles arise. But, second, unless learners trace cognitive products as tangible representations – “notes to self” or underlines that signal discriminations about key ideas, for example – the track [of] cognitive events across time can be unreliable, a fleeting memory (p. 18).

    This conception of the SRL functioning at the Task × Person level presupposes a cognitive architecture in which there are conscious analytic processes and explicit knowledge as well as non-conscious automatic processes and implicit knowledge that have a direct effect on behavior (p. 13).

    Boekaerts also assumed that automaticity can play a crucial role in the different pathways that students might activate: “Bargh’s (1990) position is that goal activation can be automatic or deliberate and Bargh and Barndollar (1996) demonstrated that some goals may be activated or triggered directly by environmental cues, outside the awareness of the individual” (Boekaerts and Niemivirta, 2000, p. 422). Pintrich (2000) specified: “At some level, this process of activation of prior knowledge can and does happen automatically and without conscious thought” (p. 457). Finally, Zimmerman and Moylan (2009) asserted:

    In terms of their impact on forethought, process goals are designed to incorporate strategic planning-combining two key task analysis processes. With studying and/or practice, students will eventually use the strategy automatically. Automization occurs when a strategy can be executed without close metacognitive monitoring. At the point of automization, students can benefit from outcome feedback because it helps them to adapt their performance based on their own personal capabilities, such as when a basketball free throw shooter adjusts their throwing strategy based on the results of their last shot. However, even experts will encounter subsequent difficulties after a strategy becomes automatic, and this will require them to shift their monitoring back from outcomes to processes (p. 307).

    Thus, automaticity is an important aspect in the majority of the models. Here, there are three aspects for reflection. First, there are automatic actions that affect SRL; for example, Pintrich (2000) mentioned access to prior knowledge and Boekaerts (2011) discussed goal activation. Second, we can assume that even self-regulation, when it is understood to be the enactment of a number of learning strategies to reach students’ goals, can happen implicitly, as proposed by Winne (2011). This means that students can be so advanced in their use of SRL strategies that they do not need an explicit, conscious, purposive action to act strategically. Nevertheless, this takes practice. Third, some automatic reactions, particularly some emotions, and even some complex emotion-regulation strategies, may not be positive for learning (Bargh and Williams, 2007). For example, Boekaerts (2011) mentions that the well-being pathway can be activated even when students are not aware of it. Therefore, helping students become aware of those negative automatic processes could enhance self-regulation that is oriented toward learning.

    Context

    The SSRL model emphasizes not only the role of context, but also the ability of different external sources (group members, teachers, etc.) to promote individual self-regulation by exerting social influence (CoRL), or of groups of students to regulate jointly while they are collaborating (SSRL) (Järvelä and Hadwin, 2013). Zimmerman (2000) did not include context in his Cyclical Phases model, beyond a minor reference to the specific strategy “environmental structuring.” However, in his Triadic and Multi-level models, the influence of context and vicarious learning is key to the development of self-regulatory skills (Zimmerman, 2013). Boekaerts and Niemivirta (2000) posit that students’ interpretation of the context activates different goal pathways and that previous experiences affect the different roles that students adopt in their classrooms (e.g., joker, geek). In the Winne (1996), Pintrich (2000), and Efklides (2011) models, context is: (1) important for adapting to the task demands, and (2) part of the feedback loops, as students receive information from the context and adapt their strategies accordingly. In sum, all of the models include context as a significant variable in SRL. Nevertheless, with the exception of Hadwin, Järvelä, and Miller’s work, not much research has been conducted by the other authors exploring how significantly other people or the task context affect SRL.


    DUAL PROCESS MODELS OF INFORMATION PROCESSING

    In the field of psychology, dual-process models are used to explain the dynamics and development of broad domains of functioning. These domains include attention, cognition, emotion, and social behavior (e.g., Barrett et al., 2004; Eisenberg et al., 1994; MacDonald, 2008; Norman and Shallice, 1986; Rothbart and Bates, 2006; Rothbart and Derryberry, 1981; Strack and Deutsch, 2004). The overarching theme of these models is that human information processing involves at least two complementary strategies. The first strategy involves the processing of information in an automatic, stimulus-driven, and reflexive way. The second involves more controlled, goal-directed, and contemplative approaches. These systems are engaged by different stimulus properties and demands, have unique neural underpinnings, support different forms of learning, and provide potentially competing response pathways (Corbetta and Shulman, 2002). Engagement of these systems occurs on a relative rather than absolute scale, such that few behaviors are completely dominated by one or the other mode of processing. Rather, differences in behavior are explained by the relative balance of these two modes of processing in any given context. Several disparate lines of research suggest that individual differences in health and adaptation reflect the way in which these dual modes functionally integrate in the service of adaptation (e.g., Carver et al., 2009; Derryberry and Rothbart, 1997).

    For BI children, the deployment of automatic and controlled modes of processing in motivationally and emotionally significant contexts appears particularly relevant. Such contexts contain signals of reward and punishment, stimuli for which organisms will expend effort to approach or avoid. In such contexts, motivationally significant cues engage automatic modes of processing and trigger reflexive and rapidly deployed responses. As such, automatic information-processing modes are central to evolutionary theories emphasizing the adaptive function of rapid approach- and avoidance-related strategies. When children with a history of BI enter novel contexts, they tend to remain on the periphery, carefully watching but not engaging with novel objects or people. In such contexts, a state of hypervigilance supports detailed processing of stimulus features but limits the more flexible and integrative processing of the broader context, which is necessary for fluid, reciprocal social interactions. From a neural perspective, automatic modes of processing engage a network of brain regions centered on subcortical, medial temporal structures, particularly the amygdala and anterior hippocampus, as well as components of the ventral prefrontal cortex (PFC) that are most heavily connected to these structures (Braver et al., 2007; Posner, 2012). These subcortical structures are brain regions that are relatively old from an evolutionary perspective and relatively conserved across mammals, reflecting the adaptive advantage of this automatic, rapid mode of responding.

    Whereas the automatic mode narrows attention to remain responsive to immediately present threats and rewards, the controlled mode is recruited when behavior is goal directed and dependent on the active maintenance of task-related goals, even if these goals are far removed from the immediate context. This control mode is described as reflective, endogenous, strategic, logical, and effortful. The control mode incorporates information beyond that which is immediately present, supporting more planful, reasoned, and goal-directed behavior in comparison with behaviors regulated by the automatic mode. For example, engagement of controlled processing in novel contexts may allow BI children to more flexibly attend to and process novel situations and to access and implement previously learned social scripts. Moreover, controlled processing maintains a prolonged influence on behavior relative to the quick and short-acting influence of the automatic mode of processing. Controlled processes place extensive cognitive demands on the organism, including working memory and self-monitoring, and are therefore more resource-demanding, less efficient, and more slowly engaged than automatic modes of processing. Consistent with such a demanding, complex nature, this processing mode shows a later, more prolonged developmental time course, relative to automatic, reflexive modes of processing that guide behaviors from birth.

    Controlled processing is further distinguished from automatic processing based on underlying neural systems. Controlled processes engage a network centered on the dorsolateral PFC (DLPFC). The DLPFC in turn draws on other regions that have a role in both controlled and automatic processing. These include the dorsal anterior cingulate gyrus, anterior insula with expanses onto the ventro-lateral PFC, and basal ganglia. Of note, this DLPFC-centered network encompasses regions, particularly so-called ‘granular’ components of PFC, which evolved relatively late, compared with the brain regions that support automatic modes of processing. Considerable debate remains on the precise adaptive function conferred by these evolutionary changes in brain anatomy. Nevertheless, many compelling theories emphasize the role of this network in flexible maintenance of goal-directed behaviors in contexts where stimulus contingencies change rapidly. Thus, for humans, the complex and rapidly changing nature of social interactions could represent one instance where flexible maintenance of goals in changing contexts confers a particularly important adaptive advantage.


    Method

    Participants

    Mechanical Turk was used to sample 666 participants who were older than 18 years. Consistent with cultural analyses of Mechanical Turk 16 , the majority of users came from the USA (n = 418) and India (n = 225), with the remainder coming from 20 other countries (n = 23). All participants were paid USD 0.30 for their time.

    Discussion

    This study examined whether own-age biases affect the initial interpretation of an image at a subconscious level. To test this, the classic young/old lady ambiguous figure was administered to a group of participants of varying ages using Mechanical Turk. Although the estimated age data are bimodal, there is a bias towards reporting a younger woman. It is possible that this bias reflects a default ‘younger’ response. As noted by Georgiades and Harris 15 , around 70% of participants are biased towards reporting a younger woman. This default interpretation by the brain may only be overcome when the social in-group favours an ‘older’ response.

    A median split was used to sort the participants into younger and older respondents. Analyses of the two groups revealed that younger participants estimated the woman’s age to be 6.3 years younger than the older participants did. This difference in estimated age increased to 12.1 years when only the very-youngest and very-oldest participants were selected. Both split analyses were supported by a simple correlational analysis, which showed that, as the age of the observer increased, so too did the estimated age of the woman. The consistency of the association between estimated age and participants’ age across the different types of split and the correlation analysis demonstrates that the effect is not an artefact of the way we analysed the data. The effect of the observer’s age on the estimated age of the woman is consistent with an own-age social group bias. Within their respective age-groups, participants have a bias towards processing faces of a similar age. A strong delineation between younger and older people in Western society in general, and within the USA in particular 17,18 , may have precipitated social in- and out-groups, which are known to affect face processing.
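The split-and-correlate logic described above can be sketched in a few lines of plain Python. This is only an illustration on synthetic data (the generating slope and noise values are invented, not taken from the study), not the authors' analysis code:

```python
import random
import statistics

def median_split(ages, estimates):
    """Split participants at the median observer age and compare the
    mean estimated age of the woman between the two halves."""
    cutoff = statistics.median(ages)
    young = [e for a, e in zip(ages, estimates) if a <= cutoff]
    old = [e for a, e in zip(ages, estimates) if a > cutoff]
    return statistics.mean(young), statistics.mean(old)

def pearson_r(xs, ys):
    """Plain Pearson correlation between observer age and estimate."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic data mimicking the reported pattern: older observers give
# higher age estimates. Slope 0.3 and noise SD 8 are arbitrary choices.
random.seed(1)
ages = [random.randint(18, 68) for _ in range(393)]
estimates = [0.3 * a + random.gauss(25, 8) for a in ages]

mean_young, mean_old = median_split(ages, estimates)
r = pearson_r(ages, estimates)
```

On data generated this way, the older half produces a higher mean estimate and the correlation is positive, mirroring the direction of the reported effect.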

    The own-age bias may have been stronger for younger compared to older participants, as reflected in lower standard deviations for the younger and very-young groups (SD = 13.55 and 14.16, respectively) compared to the older and very-old groups (SD = 18.73 and 21.63, respectively). The larger variation in estimated age for the older participants is in line with an exposure effect 7 , which may reduce the own-age bias for this group 6 .

    When participants engaged in the task, they were naïve in relation to the age-related aims of the study and did not expect the young/old ambiguous figure. The image was also displayed briefly for 500 ms. Both procedures ensured that any biases in the reported age of the woman reflected the operation of a preconscious perceptual process. Bearing this in mind, we believe that our data demonstrate that high-level social/group processes have a subconscious effect on low-level face detection mechanisms. Bar 21 describes a neural mechanism to explain the effect of top-down facilitation of object recognition. In this model, a partially analysed version of the image is sent from early visual centres to the prefrontal cortex. This image then interacts with higher-level expectations of the image and is then sent as an ‘initial guess’ to the temporal cortex where it integrates with bottom-up mechanisms. In the current study, we believe that a partially analysed version of the ambiguous figure is passed through to frontal regions where social predispositions bias the interpretation towards an in-group outcome, which is subsequently fed-back to the decision-making mechanism.

    Future research could rule out the possibility that the effect of the observer’s age on perceived age is specific to the bi-stable image used in this study. It is possible that participants simply estimate an age for the illusion that is closer to their own. This could be tested by simply picking a middle-aged face and asking participants to estimate the age of the face. Alternatively, the discrimination could be made orthogonal to the dimension of interest by asking participants to determine whether the face is looking to the side (old lady) or away (young lady) from the viewer.


    While Mechanical Turk has several distinct advantages for data collection 19 , there are also reports that users pay less attention to experimental materials 20 . To select attentive participants, we included two attention-check questions so that participants could be selected on an a priori basis (see procedure for details). Participants were also required to provide valid answers to the demographic questions as well as estimate the lady’s age to be older than 18 years.

    Initial analyses of compliance revealed marked differences between the USA and India. For people from India, 55% failed the attention-check test whereas only 6% failed from the USA. Given the poor attention-check results for participants from India and the possibility that many of them may not have understood the task instructions, or that the young/old lady illusion is culturally specific, the current sample was limited to participants from the USA. There were therefore 393 participants (m = 242, f = 151) from the USA in the final sample. The mean age of the sample was 32.87 years (SD = 10.07) with a range of 18 to 68 years. The distribution of age was positively skewed with a strong bias towards younger participants. This bias, which most likely reflects familiarity with computers, meant that only five participants were over 60 years of age. The method and experimental procedure of the present research was approved by, and carried out in accordance with, the guidelines of the Social and Behavioural Research Ethics Committee at Flinders University.
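The a priori exclusion rules described above amount to a simple record filter. The sketch below is hypothetical (the dictionary fields and their names are invented for illustration; only the criteria themselves come from the text):

```python
def eligible(p):
    """Apply the study's a priori inclusion criteria to one
    participant record (field names are illustrative)."""
    return (
        p["country"] == "USA"              # India excluded after the compliance check
        and p["passed_attention_checks"]   # both catch questions answered correctly
        and p["age"] >= 18                 # adult participants only
        and p["estimated_age"] > 18        # the woman's estimated age must exceed 18
    )

# Three toy records: only the first survives every criterion.
sample = [
    {"country": "USA", "passed_attention_checks": True, "age": 25, "estimated_age": 24},
    {"country": "India", "passed_attention_checks": True, "age": 30, "estimated_age": 40},
    {"country": "USA", "passed_attention_checks": False, "age": 40, "estimated_age": 55},
]
final_sample = [p for p in sample if eligible(p)]
```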

    Stimuli and Procedure

    Participants were recruited using Mechanical Turk. Informed consent was obtained from all participants prior to participation. After agreeing to participate, demographic data were collected, including the participant’s age (in years), sex, and country of residence. Participants were then readied for the presentation of the young/old lady bistable image - copied from the one used by Boring 14 (see Fig. 1). The ambiguous image was subsequently presented for 500 ms, after which the display was cleared. To verify that participants had seen the image in one of its forms, two questions were then asked: “Did you see a person or an animal?” (possible responses: person/animal/neither) and, if this was answered correctly, “What was the sex of the person?” (possible responses: male/female/don’t know). Participants who answered both questions correctly were then asked to estimate the age of the woman in years. The testing session was terminated for participants who gave incorrect responses to either of the attention-check questions. The entire testing session took less than five minutes.
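The gated question sequence can be expressed as a short control-flow sketch. The function and field names below are invented for illustration; only the 500 ms display duration and the two attention-check gates come from the procedure described above:

```python
DISPLAY_MS = 500  # the ambiguous figure is shown briefly, then cleared

def run_session(responses):
    """Gated question sequence for one participant. `responses` is an
    illustrative dict; the session terminates (returns None) as soon
    as an attention-check question is answered incorrectly."""
    if responses.get("person_or_animal") != "person":
        return None  # failed the first check: testing ends here
    if responses.get("sex") != "female":
        return None  # failed the second check: testing ends here
    return responses["estimated_age"]  # only attentive participants reach this

# A compliant participant yields an age estimate; non-compliant ones yield None.
estimate = run_session(
    {"person_or_animal": "person", "sex": "female", "estimated_age": 27}
)
```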


    Emotion and cognition: The case of automatic vigilance

    In St. Louis we are celebrating the bicentennial of the Lewis and Clark expedition. Captain Lewis made an entry in his journal that nicely illustrates the interaction between emotion and cognition. He describes how he was traveling alone one day, well ahead of his corps, to determine the best route. Suddenly he was surprised by an aggressive grizzly bear charging at him from out of the bush. Lewis narrowly escaped by jumping into a river. After the bear withdrew, Lewis made his way back to his troops, a distance of about 12 miles. Along the way, he noticed a variety of other animals, most of which he perceived as threatening and several of which he shot preemptively. In his journal he describes feeling surrounded by danger: "It now seemed to me that all the beasts of the neighborhood had made a league to destroy me." (Bakeless, 2002, p. 187). The editor of the Lewis and Clark journals, in a footnote to this passage, notes that the animals Lewis encountered along the way were not typically considered aggressive or dangerous, and opines that Lewis was probably nervous after his frightening encounter with the grizzly.

    Automatic Vigilance Following Threatening Information
    The example from Lewis' journal illustrates the phenomenon of automatic vigilance, where emotional cues in the environment bias subsequent information processing. More precisely, the detection of threatening information can interrupt ongoing cognitive activity in ways that tune subsequent perception, attention, judgment, and even memory towards threat-related outcomes. One experimental analog of automatic vigilance is affective priming (Klauer, 2003), particularly priming with threatening stimuli. Here a threatening image or word is briefly presented (the prime) and quickly followed by another stimulus (the target) to which the subject responds (e.g., makes a lexical decision, categorizes as a good or bad object, etc.). Automatic vigilance occurs when a negatively valenced target stimulus (e.g., an image of a COCKROACH) is categorized faster and/or more accurately when it is preceded by a threatening prime stimulus (e.g., the word DISEASE) than a hedonically neutral prime stimulus (e.g., the word DISHPAN) (Hermans, DeHouwer, & Eelen, 2001).
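Scoring such an affective-priming experiment reduces to comparing mean correct reaction times to negative targets by prime type. The trial values below are invented for illustration; only the scoring logic mirrors the paradigm just described:

```python
import statistics

# Illustrative trial records: (prime_type, target_valence, rt_ms, correct)
trials = [
    ("threat",  "negative", 512, True),
    ("neutral", "negative", 548, True),
    ("threat",  "negative", 495, True),
    ("neutral", "negative", 561, True),
    ("threat",  "negative", 530, False),  # error trials are excluded from RT means
]

def mean_rt(trials, prime):
    """Mean correct reaction time to negative targets after one prime type."""
    rts = [rt for p, v, rt, ok in trials
           if p == prime and v == "negative" and ok]
    return statistics.mean(rts)

# A positive difference = faster categorization of negative targets after
# threat primes than neutral primes, i.e., the automatic-vigilance pattern.
priming_effect = mean_rt(trials, "neutral") - mean_rt(trials, "threat")
```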

    Researchers suggest that the presentation of an evaluative or threatening prime may automatically activate biased perceptions of emotionally-congruent targets (Fazio, Jackson, Dunton, & Williams, 1995). The explanation for this effect is that, when confronted with a threatening stimulus, people typically devote increased attentional resources to that stimulus, raising the accessibility of evaluatively-similar information in memory and biasing subsequent perceptions and judgments toward a threatening evaluation (Klauer, 2003; Wentura & Rothermund, 2003).

    Other Examples of Automatic Vigilance Effects
    Some researchers have identified the emotional Stroop task as an example of automatic vigilance (e.g., Pratto & John, 1991; Wentura, Rothermund, & Bak, 2000). In this task, subjects are asked to quickly name the colors of various words, some of which are threatening (e.g., DISEASE) and others neutral (e.g., DISHPAN). In general, people are slower to name the colors of threatening words than of neutral words. However, a crucial problem with many of these studies is that the two word lists - threat and control words - often differ with respect to critical linguistic parameters known to contribute to reaction-time differences in word recognition. For example, Larsen, Mercer, and Balota (2004a) showed that, across 34 emotional Stroop studies, the threatening words used were less frequent, longer, or had larger orthographic neighborhoods than the control words. All of these purely linguistic features contribute to slower recognition of the threatening words, casting doubt on whether the emotional Stroop effect is really due to automatic vigilance to the threat value of the word.
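One remedy for the confound just described is to match the threat and control lists on the offending linguistic variables before running the task. The sketch below is a hypothetical greedy matcher (the word frequencies are invented, and orthographic neighborhood is omitted for brevity):

```python
def matched_pairs(threat, control, tol_freq=0.2):
    """Greedily pair each threat word with a control word matched on
    length and (log) frequency, the confounds Larsen et al. flagged.
    `threat` and `control` are lists of (word, log_frequency) tuples."""
    pairs, available = [], list(control)
    for word, freq in threat:
        for i, (cand, cand_freq) in enumerate(available):
            # Require identical length and near-identical frequency.
            if len(cand) == len(word) and abs(cand_freq - freq) <= tol_freq:
                pairs.append((word, cand))
                del available[i]  # each control word is used at most once
                break
    return pairs

# Toy lists with made-up log frequencies.
threat = [("disease", 1.8), ("murder", 2.0)]
control = [("dishpan", 1.7), ("carpet", 2.1), ("banana", 1.2)]
pairs = matched_pairs(threat, control)
```

A matched design of this kind lets any remaining color-naming slowdown be attributed to threat value rather than to word length or frequency.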

    In a recent paper, Algom, Chajut, and Lev (2004) reasoned that automatic vigilance should not be limited to color naming of words but should apply to any cognitive activity. In a series of carefully controlled experiments they demonstrated that both color naming and word reading were slower for threatening than for control words. Larsen, Mercer, and Balota (2004b) recently analyzed lexical decision time and word reading time for a list of over 1,000 words that had previously been normed for valence (Bradley & Lang, 1996). After controlling for important linguistic parameters (e.g., frequency, length, orthographic neighborhood), Larsen et al. (2004b) found that word negativity was still a significant predictor of longer reaction times, both for lexical decisions and for word reading. Cothran, Larsen, Zelenski, Prizmic, and Chein (2004) demonstrated automatic vigilance effects in the recognition of facial displays of emotion.

    There is converging evidence from a number of literatures on the existence of automatic vigilance effects. The general interpretation is that a dedicated preattentive system operates in an automatic fashion to screen the perceptual stream for threatening information (Ohman, 1993). When such information is detected, ongoing cognitive activity is interrupted (accounting for the generic slowing) and reprioritized to be biased for future threatening information (accounting for the priming effects). Such a system would have obvious evolutionary advantages, in that humans who lacked such a system would be less likely to become ancestors.

    Utility of Automatic Vigilance for Studying other Phenomena
    As an aspect of psychological functioning, the automatic vigilance effect is interesting in its own right. However, it also has utility for studying other psychological phenomena. In the remainder of this article I describe two areas where the concept of automatic vigilance may contribute to our understanding of other phenomena.

    Understanding stereotype activation. In a series of important experiments, Payne (2001; Payne, Lambert, & Jacoby, 2002) tried to clarify what factors contributed to the killing of Amadou Diallo, an unarmed Black immigrant from West Africa who was shot 19 times by several White New York police officers one night as he was retrieving his wallet from his pocket. In Payne's studies, participants were primed for 200 ms with a photo of either a Black face or a White face, then immediately shown a drawing of a handgun or a hand tool for 100 ms. Participants then had 400 ms to decide whether the second object was a hand tool or a handgun. Of course, they made many errors because of the speeded nature of the response. Among predominantly White participants, however, the errors were not random: participants were more likely to mistake the hand tool for a handgun following the Black face prime than following the White face prime. The dominant interpretation of this finding is that the prime activates stereotypic beliefs about what is associated with being Black or White (Judd, Blair, & Chapleau, 2004), and this stereotype makes a "gun" response more likely following the Black prime than the White prime.

    Recently, we (Larsen, Chan, & Lambert, 2004) reasoned that automatic vigilance may have played a role both in Payne's results and in the shooting of Amadou Diallo. If Blacks (or other outgroup members) are threatening to majority participants, then priming with a Black face may activate automatic vigilance for future threat, making the gun response more likely. In an extension of Payne's gun/tool paradigm, we (Larsen, Chan, & Lambert, 2004) replaced the Black and White primes with photos of threatening animals (snakes, spiders) and non-threatening animals (bunnies, kittens). We found the same pattern of gun/tool bias: after being primed with a threatening animal, participants were more likely to misidentify hand tools as handguns than after being primed with a non-threatening animal.

    In a second experiment we put the Black and White faces back in as primes and moved the good and bad animals to the target position. Participants were given 400 ms to categorize the animals as either good or bad. We again found a bias consistent with an automatic vigilance effect: participants were more likely to misclassify a good animal as a bad one following a Black prime than following a White prime. Obviously, the animals have nothing to do with stereotype associations to Blacks or Whites. However, they do have threat value, and so our subjects' biased processing following Black facial primes is consistent with an automatic vigilance effect. Besides clarifying the underlying mechanism for the gun/tool bias, these results have implications for interventions to counteract such biases: efforts to change stereotyped beliefs about outgroup members would look quite different from efforts to change prejudiced emotional reactions toward them.

    Understanding why bad is stronger than good. From a number of quite different literatures there is converging evidence that stimuli of equal hedonic weight, but opposite in hedonic sign, will evoke non-equivalent affective reactions (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001). For example, people are more distressed by the loss of $50 than they are made happy by finding $50. A few years ago I applied a psychophysics framework to this issue (Larsen, 2002). After all, emotion is a lot like perception, where some aspect of the outer world (an emotional event, a sensory stimulus) is transformed into an inner representation (an affect, a sensation). Applying this framework to several data sets, I reviewed evidence that unpleasant events or stimuli, compared to equivalently pleasant events or stimuli, evoke larger emotional responses, longer duration responses, and have a broader impact on the cognitive system. Moreover, I estimated the impact of negative stimuli as being about three times that of positive stimuli.

    Why should bad be stronger than good? It seems likely that one function of the automatic vigilance system is to act as a signal gain mechanism for threatening information. The automatic vigilance system functions to amplify threatening information by directing cognitive resources, such as perception and attention, toward such information. There is no specialized counterpart that acts in such an automatic and preattentive fashion for positive stimuli. In fact, in studies of cognitive interference from affective meaning using the affective Simon task, we found that the interference effect size for negative stimuli was approximately three times as large as the interference effect size for positive stimuli (Larsen & Yarkoni, 2004). The automatic vigilance system may be an explanation for the ubiquitous finding that bad is stronger than good.

    Summary
    Cognition and emotion interact in various ways, and one of the more interesting and increasingly documented ways is the automatic vigilance effect. This phenomenon highlights differences between automatic and controlled psychological processes, in that the effect is purely automatic. Much like a reflex, it occurs very fast, happens without our awareness or effort, and runs to completion without conscious monitoring. And yet the effects may be far-reaching, as when automatic vigilance affects cognitive resources such as attention and memory. And the effects may be especially far-reaching when the elicitors of the vigilance, or the objects of its effect, are other people.

    Acknowledgments

    Preparation of this article, and some of the research reported, was supported in part by grant RO1-MH63732 from the National Institute of Mental Health.


    References

    Algom, D., Chajut, E., & Lev, S. (2004). A rational look at the emotional Stroop phenomenon: A generic slowdown, not a Stroop effect. Journal of Experimental Psychology: General, 133, 323-338.

    Bakeless, J. (2004). The journals of Lewis and Clark. New York: Signet.

    Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5, 323-370.

    Bradley, M. M., & Lang, P. J. (1996). Picture media and emotion: Effects of a sustained affective context. Psychophysiology, 33, 662-670.

    Cothran, D. L., Larsen, R. J., Zelenski, J., Prizmic, Z., & Chien, B. (2004). Do emotion words interfere with processing emotion faces? Stroop-like interference versus automatic vigilance for negative information. Manuscript under review.

    Fazio, R. H., Jackson, J. R., Dunton, B. C., & Williams, C. J. (1995). Variability in automatic activation as an unobtrusive measure of racial attitudes: A bona fide pipeline? Journal of Personality and Social Psychology, 69, 1013-1027.

    Hermans, D., De Houwer, J., & Eelen, P. (2001). A time course analysis of the affective priming task. Cognition and Emotion, 15, 143-165.

    Judd, C. M., Blair, I. V., & Chapleau, K. M. (2004). Automatic stereotypes vs. automatic prejudice: Sorting out the possibilities in the Payne (2001) weapon paradigm. Journal of Experimental Social Psychology, 40, 75-81.

    Klauer, K. C. (2003). Affective priming: Findings and theories. In J. Musch and K.C. Klauer (Eds.), The psychology of evaluation: Affective processes in cognition and emotion (pp. 7-50). Mahwah, NJ: Erlbaum.

    Larsen, R. J. (2002). Differential contributions of positive and negative affect to subjective well-being. In J. A. Da Silva, E. H. Matsushima, and N. P. Ribeiro-Filho (Eds.), Annual meeting of the International Society for Psychophysics (vol. 18, pp. 186-190). Rio de Janeiro, Brazil: Editora Legis Summa Ltda.

    Larsen, R. J., Chan, P. Y., & Lambert, A. (2004). Perceptual consequences of threat and prejudice: Misperceiving weapons and other dangerous objects. Manuscript under review.

    Larsen, R. J., Mercer, K., & Balota, D. (2004a). Lexical characteristics of words used in emotion Stroop studies. Manuscript under review.

    Larsen, R. J., Mercer, K., & Balota, D. (2004b). Lexical characteristics and word recognition parameters for emotion words: Effects of word negativity. Manuscript under review.

    Larsen, R. J. & Yarkoni, T. (2004). Negative stimuli cause more interference than positive stimuli in the affective Simon task. Manuscript under review.

    Ohman, A. (1993). Fear and anxiety as emotional phenomena: Clinical phenomenology, evolutionary perspectives, and information-processing mechanisms. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 511-536). New York: Guilford Press.

    Payne, B. K. (2001). Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 81, 181-192.

    Payne, B. K., Lambert, A. J., & Jacoby, L. L. (2002). Best laid plans: Effects of goals on accessibility bias and cognitive control in race-based misperceptions of weapons. Journal of Experimental Social Psychology, 38, 384-396.

    Pratto, F., & John, O. (1991). Automatic vigilance: The attention-grabbing power of negative social information. Journal of Personality and Social Psychology, 61, 380-391.

    Wentura, D. & Rothermund, K. (2003). The "meddling-in" of Affective Information: A general model of automatic evaluation. In J. Musch and K.C. Klauer (Eds.), The psychology of evaluation: Affective processes in cognition and emotion (pp. 51-86). Mahwah, NJ: Erlbaum.

    Wentura, D., Rothermund, K. & Bak, P. (2000). Automatic vigilance: The attention-grabbing power of approach- and avoidance-related information. Journal of Personality and Social Psychology, 78, 1024-1037.

    About the Author
    Randy J. Larsen earned his PhD in Personality Psychology from the University of Illinois-Champaign in 1984. He has served on the faculty at Purdue University (1984-1989), the University of Michigan (1989-1998), and Washington University in St. Louis (since 1998), where he is currently the William R. Stuckenberg Professor of Human Values and Chairman of the Psychology Department. His research interests focus on personality and emotion, with particular interests in emotional reactivity and mood regulation, process models of daily mood, and cognitive consequences of affective states. He has published over 80 scientific articles and book chapters and has co-authored (with David Buss) a text in Personality Psychology. Professor Larsen was awarded the 1991 APA Distinguished Scientific Award for Early Career Contribution to Personality Psychology, received a Research Scientist Development Award from the National Institute of Mental Health, and is a past president of the Midwestern Psychological Association.


    Automatic versus Controlled Cognition

    A good part of both cognition and social cognition is spontaneous or automatic. Automatic cognition refers to thinking that occurs out of our awareness, quickly, and without taking much effort (Ferguson & Bargh, 2003; Ferguson, Hassin, & Bargh, 2008). The things that we do most frequently tend to become more automatic each time we do them, until they reach a level where they don’t really require us to think about them very much. Most of us can ride a bike and operate a television remote control in an automatic way. Even though it took some work to do these things when we were first learning them, it just doesn’t take much effort anymore. And because we spend a lot of time making judgments about others, many of these judgments, which are strongly influenced by our schemas, are made quickly and automatically (Willis & Todorov, 2006).

    Because automatic thinking occurs outside of our conscious awareness, we frequently have no idea that it is occurring and influencing our judgments or behaviors. You might remember a time when you returned home, unlocked the door, and 30 seconds later couldn’t remember where you had put your keys! You know that you must have used the keys to get in, and you know you must have put them somewhere, but you simply don’t remember a thing about it. Because many of our everyday judgments and behaviors are performed automatically, we may not always be aware that they are occurring or influencing us.

    It is of course a good thing that many things operate automatically, because it would be extremely difficult to have to think about them all the time. If you couldn’t drive a car automatically, you wouldn’t be able to talk to the other people riding with you or listen to the radio at the same time—you’d have to put most of your attention into driving. On the other hand, relying on our snap judgments about Bianca—that she’s likely to be expressive, for instance—can be erroneous. Sometimes we need to—and should—go beyond automatic cognition and consider people more carefully. When we deliberately size up and think about something, for instance another person, we call it controlled cognition. Although you might expect controlled cognition to be more common and automatic thinking less so, that is not always the case. The problem is that thinking takes time and effort, and we often have little of either to spare.

    In the following Research Focus, we consider an example of automatic cognition in a study that uses a common social cognitive procedure known as priming, a technique in which information is temporarily brought into memory through exposure to situational events, which can then influence judgments entirely out of awareness.

    Research Focus

    Behavioral Effects of Priming

    In one demonstration of how automatic cognition can influence our behaviors without us being aware of them, John Bargh and his colleagues (Bargh, Chen, & Burrows, 1996) conducted two studies, each with the exact same procedure. In the experiments, they showed college students sets of five scrambled words. The students were to unscramble the five words in each set to make a sentence. Furthermore, for half of the research participants, the words were related to the stereotype of elderly people. These participants saw words such as “in Florida retired live people” and “bingo man the forgetful plays.”

    The other half of the research participants also made sentences but did so out of words that had nothing to do with the elderly stereotype. The purpose of this task was to prime (activate) the schema of elderly people in memory for some of the participants but not for others.

    The experimenters then assessed whether the priming of elderly stereotypes would have any effect on the students’ behavior—and indeed it did. When each research participant had gathered all his or her belongings, thinking that the experiment was over, the experimenter thanked him or her for participating and gave directions to the closest elevator. Then, without the participant knowing it, the experimenters recorded the amount of time that the participant spent walking from the doorway of the experimental room toward the elevator. As you can see in Figure 2.8, “Automatic Priming and Behavior,” the same results were found in both experiments—the participants who had made sentences using words related to the elderly stereotype took on the behaviors of the elderly—they walked significantly more slowly (in fact, about 12% more slowly across the two studies) as they left the experimental room.

    Figure 2.8 Automatic Priming and Behavior. In two separate experiments, Bargh, Chen, and Burrows (1996) found that students who had been exposed to words related to the elderly stereotype walked more slowly than those who had been exposed to more neutral words.

    To determine if these priming effects occurred out of the conscious awareness of the participants, Bargh and his colleagues asked a third group of students to complete the priming task and then to indicate whether they thought the words they had used to make the sentences had any relationship to each other or could possibly have influenced their behavior in any way. These students had no awareness of the possibility that the words might have been related to the elderly or could have influenced their behavior.

    The point of these experiments, and many others like them, is clear—it is quite possible that our judgments and behaviors are influenced by our social situations, and this influence may be entirely outside of our conscious awareness. To return again to Bianca, it is even possible that we notice her nationality and that our beliefs about Italians influence our responses to her, even though we have no idea that they are doing so and really believe that they have not.


    Anchoring and Adjustment Lead Us to Accept Ideas That We Should Revise

    In some cases, we may be aware of the danger of acting on our expectations and attempt to adjust for them. Perhaps you have been in a situation where you are beginning a course with a new professor and you know that a good friend of yours does not like him. You may be thinking that you want to go beyond your negative expectation and prevent this knowledge from biasing your judgment. However, the accessibility of the initial information frequently prevents this adjustment from occurring—leading us to anchor on the initial construct and not adjust sufficiently. This is called the problem of anchoring and adjustment.

    Tversky and Kahneman (1974) asked some of the student participants in one of their studies to solve this multiplication problem quickly and without using a calculator:

    1 × 2 × 3 × 4 × 5 × 6 × 7 × 8

    They asked other participants to solve this problem:

    8 × 7 × 6 × 5 × 4 × 3 × 2 × 1

    They found that students who saw the first problem gave an estimated answer of about 512, whereas students who saw the second problem estimated about 2,250. Tversky and Kahneman argued that the students couldn’t solve the whole problem in their heads, so they did the first few multiplications and then used the outcome of this preliminary calculation as their starting point, or anchor. The participants then adjusted from this anchor to reach an answer that sounded plausible. In both cases, the estimates were far too low relative to the true value of the product (which is 40,320), but the first set of guesses was even lower because it started from a lower anchor.
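
    The anchoring account rests on simple arithmetic: the full product is identical in either order, but the partial product after the first few multiplications (the likely anchor for a hurried estimate) differs sharply. A minimal Python sketch:

```python
from math import prod

ascending = [1, 2, 3, 4, 5, 6, 7, 8]
descending = ascending[::-1]

# The full product is the same regardless of order.
print(prod(ascending))      # 40320
print(prod(descending))     # 40320

# But the partial product after the first four multiplications,
# a plausible anchor for a hurried estimate, differs sharply.
print(prod(ascending[:4]))  # 1 * 2 * 3 * 4 = 24
print(prod(descending[:4])) # 8 * 7 * 6 * 5 = 1680
```

    Note how the two anchors (24 vs. 1,680) mirror the gap between the two groups' median estimates (about 512 vs. about 2,250), while both remain far below the true product.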

    Of course, savvy marketers have long used the anchoring phenomenon to help them. You might not be surprised to hear that people are more likely to buy more products when they are listed as four for $1.00 than when they are listed as 25 cents each, even though the price per item is identical.

    The Importance of Cognitive Biases in Everyday Life

    Perhaps you are thinking that the use of heuristics and the tendency to be influenced by salience and accessibility don’t seem that important—who really cares if we buy an iPod when the Zune is better, or if we think there are more words that begin with the letter R than there actually are? These aren’t big problems in the overall scheme of things. But it turns out that what seem perhaps to be pretty small errors and biases on the surface can have profound consequences for people.

    For one, if the errors occur for a lot of people, they can really add up. Why would so many people continue to buy lottery tickets or to gamble their money in casinos when the likelihood of them ever winning is so low? One possibility, of course, is the representative heuristic—people ignore the low base rates of winning and focus their attention on the salient likelihood of winning a huge prize. And the belief in astrology, which all scientific evidence suggests is not accurate, is probably driven in part by the salience of the occasions when the predictions do occur—when a horoscope is correct (which it will of course be sometimes), the correct prediction is highly salient and may allow people to maintain the (overall false) belief.

    People may also take more care to prepare for unlikely events than for more likely ones because the unlikely ones are more salient or accessible. For instance, people may think that they are more likely to die from a terrorist attack or as the result of a homicide than they are from diabetes, stroke, or tuberculosis. But the odds are much greater of dying from the health problems than from the terrorism or homicide. Because people don’t accurately calibrate their behaviors to match the true potential risks, the individual and societal costs are quite large (Slovic, 2000).

    Salience and accessibility also color how we perceive our social worlds, which may have a big influence on our behavior. For instance, people who watch a lot of violent television shows also tend to view the world as more dangerous in comparison to those who watch less violent TV (Doob & Macdonald, 1979). This follows from the idea that our judgments are based on the accessibility of relevant constructs. We also overestimate our contribution to joint projects (Ross & Sicoly, 1979), perhaps in part because our own contributions are so obvious and salient, whereas the contributions of others are much less so. And the use of cognitive heuristics can even affect our views about global warming. Joireman, Barnes, Truelove, and Duell (2010) found that people were more likely to believe in the existence of global warming when they were asked about it on hotter rather than colder days and when they had first been primed with words relating to heat. Thus the principles of salience and accessibility, because they are such an important part of our social judgments, can create a series of biases that can make a difference.

    Research has found that even people who should know better—and who need to know better—are subject to cognitive biases. Economists, stock traders, managers, lawyers, and even doctors have been found to make the same kinds of mistakes in their professional activities that people make in their everyday lives (Byrne & McEleney, 2000; Gilovich, Griffin, & Kahneman, 2002; Hilton, 2001). And the use of cognitive heuristics is increased when people are under time pressure (Kruglanski & Freund, 1983) or when they feel threatened (Kassam, Koslov, & Mendes, 2009), exactly the situations that may occur when professionals are required to make their decisions.

    Although biases are common, they are not impossible to control, and psychologists and other scientists are working to help people make better decisions. One possibility is to provide people with better feedback. Weather forecasters, for instance, are quite accurate in their decisions, in part because they are able to learn from the clear feedback that they get about the accuracy of their predictions. Other research has found that accessibility biases can be reduced by leading people to consider multiple alternatives rather than focusing only on the most obvious ones, and particularly by leading people to think about exactly the opposite possible outcomes than the ones they are expecting (Hirt, Kardes, & Markman, 2004). And people can also be trained to make better decisions. For instance, Lehman, Lempert, and Nisbett (1988) found that graduate students in medicine, law, and chemistry, but particularly those in psychology, all showed significant improvement in their ability to reason correctly over the course of their graduate training.

    Social Psychology in the Public Interest

    The Validity of Eyewitness Testimony

    As we have seen in the story of Rickie Johnson that opens this chapter, one social situation in which the accuracy of our person-perception skills is vitally important is the area of eyewitness testimony (Charman & Wells, 2007; Toglia, Read, Ross, & Lindsay, 2007; Wells, Memon, & Penrod, 2006). Every year, thousands of individuals such as Rickie Johnson are charged with and often convicted of crimes based largely on eyewitness evidence. In fact, more than 100 people who were convicted prior to the existence of forensic DNA have now been exonerated by DNA tests, and more than 75% of these people were victims of mistaken eyewitness identification (Wells, Memon, & Penrod, 2006; Fisher, 2011).

    The judgments of eyewitnesses are often incorrect, and there is only a small correlation between how accurate and how confident an eyewitness is. Witnesses are frequently overconfident, and one who claims to be absolutely certain about his or her identification is not much more likely to be accurate than one who appears much less sure, making it almost impossible to determine whether a particular witness is accurate or not (Wells & Olson, 2003).

    To accurately remember a person or an event at a later time, we must be able to accurately see and store the information in the first place, keep it in memory over time, and then accurately retrieve it later. But the social situation can influence any of these processes, causing errors and biases.

    In terms of initial encoding of the memory, crimes normally occur quickly, often in situations that are accompanied by a lot of stress, distraction, and arousal. Typically, the eyewitness gets only a brief glimpse of the person committing the crime, and this may be under poor lighting conditions and from far away. And the eyewitness may not always focus on the most important aspects of the scene. Weapons are highly salient, and if a weapon is present during the crime, the eyewitness may focus on the weapon, which would draw his or her attention away from the individual committing the crime (Steblay, 1997). In one relevant study, Loftus, Loftus, and Messo (1987) showed people slides of a customer walking up to a bank teller and pulling out either a pistol or a checkbook. By tracking eye movements, the researchers determined that people were more likely to look at the gun than at the checkbook and that this reduced their ability to accurately identify the criminal in a lineup that was given later.

    People may be particularly inaccurate when they are asked to identify members of a race other than their own (Brigham, Bennett, Meissner, & Mitchell, 2007). In one field study, for example, Meissner and Brigham (2001) sent White, Black, and Hispanic students into convenience stores in El Paso, Texas. Each of the students made a purchase, and the researchers came in later to ask the clerks to identify photos of the shoppers. Results showed that the White, Black, and Mexican American clerks demonstrated the own-race bias: They were all more accurate at identifying customers belonging to their own racial or ethnic group than they were at identifying people from other groups. There seems to be some truth to the adage that “They all look alike”—at least if an individual is looking at someone who is not of his or her race.

    One source of error in eyewitness testimony is the relative difficulty of accurately identifying people who are not of one’s own race.


    Even if information gets encoded properly, memories may become distorted over time. For one thing, people might discuss what they saw with other people, or they might read information relating to it from other bystanders or in the media. Such postevent information can distort the original memories such that the witnesses are no longer sure what the real information is and what was provided later. The problem is that the new, inaccurate information is highly cognitively accessible, whereas the older information is much less so. Even describing a face makes it more difficult to recognize the face later (Dodson, Johnson, & Schooler, 1997).

    In an experiment by Loftus and Palmer (1974), participants viewed a film of a traffic accident and then, according to random assignment to experimental conditions, answered one of three questions:

    1. “About how fast were the cars going when they hit each other?”
    2. “About how fast were the cars going when they smashed each other?”
    3. “About how fast were the cars going when they contacted each other?”

    As you can see in the following figure, although all the participants saw the same accident, their estimates of the speed of the cars varied by condition. People who had seen the “smashed” question estimated the highest average speed, and those who had seen the “contacted” question estimated the lowest.

    Figure 2.6 Reconstructive Memory

    Participants viewed a film of a traffic accident and then answered a question about the accident. According to random assignment, the blank was filled by either “hit,” “smashed,” or “contacted” each other. The wording of the question influenced the participants’ memory of the accident. Data are from Loftus and Palmer (1974).

    The situation is particularly problematic when the eyewitnesses are children, because research has found that children are more likely to make incorrect identifications than are adults (Pozzulo & Lindsay, 1998) and are also subject to the own-race identification bias (Pezdek, Blandon-Gitlin, & Moore, 2003). In many cases, when sex abuse charges have been filed against babysitters, teachers, religious officials, and family members, the children are the only source of evidence. The likelihood that children are not accurately remembering the events that have occurred to them creates substantial problems for the legal system.

    Another setting in which eyewitnesses may be inaccurate is when they try to identify suspects from mug shots or lineups. A lineup generally includes the suspect and five to seven other innocent people (the fillers), and the eyewitness must pick out the true perpetrator. The problem is that eyewitnesses typically feel pressured to pick a suspect out of the lineup, which increases the likelihood that they will mistakenly pick someone (rather than no one) as the suspect.

    Research has attempted to better understand how people remember and potentially misremember the scenes of and people involved in crimes and to attempt to improve how the legal system makes use of eyewitness testimony. In many states, efforts are being made to better inform judges, juries, and lawyers about how inaccurate eyewitness testimony can be. Guidelines have also been proposed to help ensure that child witnesses are questioned in a nonbiasing way (Poole & Lamb, 1998). Steps can also be taken to ensure that lineups yield more accurate eyewitness identifications. Lineups are more fair when the fillers resemble the suspect, when the interviewer makes it clear that the suspect might or might not be present (Steblay, Dysart, Fulero, & Lindsay, 2001), and when the eyewitness has not been shown the same pictures in a mug-shot book prior to the lineup decision. And several recent studies have found that witnesses who make accurate identifications from a lineup reach their decision faster than do witnesses who make mistaken identifications, suggesting that authorities must take into consideration not only the response but how fast it is given (Dunning & Perretta, 2002).

    In addition to distorting our memories for events that have actually occurred, misinformation may lead us to falsely remember information that never occurred. Loftus and her colleagues asked parents to provide them with descriptions of events that did (e.g., moving to a new house) and did not (e.g., being lost in a shopping mall) happen to their children. Then (without telling the children which events were real or made-up) the researchers asked the children to imagine both types of events. The children were instructed to “think real hard” about whether the events had occurred (Ceci, Huffman, Smith, & Loftus, 1994). More than half of the children generated stories regarding at least one of the made-up events, and they remained insistent that the events did in fact occur even when told by the researcher that they could not possibly have occurred (Loftus & Pickrell, 1995). Even college students are susceptible to manipulations that make events that did not actually occur seem as if they did (Mazzoni, Loftus, & Kirsch, 2001).

    The ease with which memories can be created or implanted is particularly problematic when the events to be recalled have important consequences. Therapists often argue that patients may repress memories of traumatic events they experienced as children, such as childhood sexual abuse, and then recover the events years later as the therapist leads them to recall the information—for instance, by using dream interpretation and hypnosis (Brown, Scheflin, & Hammond, 1998).

    But other researchers argue that painful memories such as sexual abuse are usually very well remembered, that few memories are actually repressed, and that even if they are, it is virtually impossible for patients to accurately retrieve them years later (McNally, Bryant, & Ehlers, 2003; Pope, Poliakoff, Parker, Boynes, & Hudson, 2007). These researchers have argued that the procedures used by the therapists to “retrieve” the memories are more likely to actually implant false memories, leading the patients to erroneously recall events that did not actually occur. Because hundreds of people have been accused, and even imprisoned, on the basis of claims about “recovered memory” of child sexual abuse, the accuracy of these memories has important societal implications. Many psychologists now believe that most of these claims of recovered memories are due to implanted, rather than real, memories (Loftus & Ketcham, 1994).

    Taken together, then, the problems of eyewitness testimony represent another example of how social cognition—the processes that we use to size up and remember other people—may be influenced, sometimes in a way that creates inaccurate perceptions, by the operation of salience, cognitive accessibility, and other information-processing biases.

    Key Takeaways

    • We use our schemas and attitudes to help us judge and respond to others. In many cases, this is appropriate, but our expectations can also lead to biases in our judgments of ourselves and others.
    • A good part of our social cognition is spontaneous or automatic, operating without much thought or effort. On the other hand, when we have the time and the motivation to think about things carefully, we may engage in thoughtful, controlled cognition.
    • Which expectations we use to judge others is based on both the situational salience of the things we are judging and the cognitive accessibility of our own schemas and attitudes.
    • Variations in the accessibility of schemas lead to biases such as the availability heuristic, the representativeness heuristic, the false consensus bias, and biases caused by counterfactual thinking.
    • The potential biases that are the result of everyday social cognition can have important consequences, both for us in our everyday lives and for people who make important decisions affecting many others. Although biases are common, they are not impossible to control, and psychologists and other scientists are working to help people make better decisions.
    • The operation of cognitive biases, including the potential for new information to distort information already in memory, can help explain the tendency for eyewitnesses to be overconfident and frequently inaccurate in their recollections of what occurred at crime scenes.

    Exercises and Critical Thinking

    1. Give an example of a time when you may have committed one of the cognitive errors listed in Table 2.1 “How Expectations Influence Our Social Cognition”. What factors (e.g., availability? salience?) caused the error, and what was the outcome of your use of the shortcut or heuristic?
    2. Go to the website http://thehothand.blogspot.com, which analyzes the extent to which people accurately perceive “streakiness” in sports. Consider how our sports perceptions are influenced by our expectations and the use of cognitive heuristics.
    Anchoring and adjustment also affect consumer behavior: shoppers buy more, for instance, when an item is priced at “4 for $1” rather than .25 each (leading people to anchor on the four and perhaps adjust only a bit away) and when a sign says “buy a dozen” rather than “buy one.”

    And it is no accident that a car salesperson always starts negotiating with a high price and then works down. The salesperson is trying to get the consumer anchored on the high price with the hope that it will have a big influence on the final sale value.


    Understanding Implicit Bias

    As a profession, teaching is full of well-intentioned individuals deeply committed to seeing all children succeed. Touching innumerable lives in direct and indirect ways, educators uniquely recognize that our future rests on the shoulders of young people and that investing in their education, health, and overall well-being benefits society as a whole, both now and into the future.

    This unwavering desire to ensure the best for children is precisely why educators should become aware of the concept of implicit bias: the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner. Operating outside of our conscious awareness, implicit biases are pervasive, and they can challenge even the most well-intentioned and egalitarian-minded individuals, resulting in actions and outcomes that do not necessarily align with explicit intentions.

    In this article, I seek to shed light on the dynamics of implicit bias with an eye toward educators. After introducing the concept and the science undergirding it, I focus on its implications for educators and suggest ways they can mitigate its effects.

    The Unconscious Mind

    Psychologists estimate that our brains are capable of processing approximately 11 million bits of information every second. 1 Given the tremendous amount of information that inundates this startlingly complex organ in any given moment, many researchers have sought to understand the nuances of our remarkable cognitive functioning. In his 2011 tome on cognition, Thinking, Fast and Slow, Daniel Kahneman articulates a widely accepted framework for understanding human cognitive functioning by delineating our mental processing into two parts: System 1 and System 2. 2

    System 1 handles cognition that occurs outside of conscious awareness. This system operates automatically and extremely fast. For example, let's say you stop your car at a red light. When the light turns green, you know to proceed through the intersection. Thanks to the speed and efficiency of System 1, experienced drivers automatically understand that green means go, and so this mental association requires no conscious or effortful thought.

    In contrast, System 2 is conscious processing. It's what we use for mental tasks that require concentration, such as completing a tax form. Rather than being automatic and fast, this undertaking requires effortful, deliberate concentration.

    Together, these two systems help us make sense of the world. What is fascinating, though, is how much our cognition relies on System 1. Of the millions of possible pieces of information we can process each second, most neuroscientists agree that the vast majority of our cognitive processing occurs outside of our conscious awareness. 3 Besides its vastness, System 1 cognitive processing is also notable because it helps us understand that many of the mental associations that affect how we perceive and act are operating implicitly (i.e., unconsciously). As such, System 1 is responsible for the associations known as implicit biases.

    Because the implicit associations we hold arise outside of conscious awareness, implicit biases do not necessarily align with our explicit beliefs and stated intentions. This means that even individuals who profess egalitarian intentions and try to treat all individuals fairly can still unknowingly act in ways that reflect their implicit—rather than their explicit—biases. Thus, even well-intentioned individuals can act in ways that produce inequitable outcomes for different groups.

    Moreover, because implicit biases are unconscious and involuntarily activated as part of System 1, we are not even aware that they exist, yet they can have a tremendous impact on decision making. A large body of social science evidence has shown that implicit biases can be activated by any number of various identities we perceive in others, such as race, ethnicity, gender, or age. Since these robust associations are a critical component of our System 1 processing, everyone has implicit biases, regardless of race, ethnicity, gender, or age. No one is immune. Consequently, the range of implicit bias implications for individuals in a wide range of professions—not just education—is vast. For example, researchers have documented implicit biases in healthcare professionals, 4 law enforcement officers, 5 and even individuals whose careers require avowed commitments to impartiality, such as judges. 6 Indeed, educators are also susceptible to the influence of these unconscious biases.

    Implicit Bias in Education

    Research on implicit bias has identified several conditions in which individuals are most likely to rely on their unconscious System 1 associations. These include situations that involve ambiguous or incomplete information; the presence of time constraints; and circumstances in which our cognitive control may be compromised, such as through fatigue or having a lot on our minds. 7 Given that teachers encounter many, if not all, of these conditions through the course of a school day, it is unsurprising that implicit biases may be contributing to teachers' actions and decisions.

    Let's consider a few examples in the context of school discipline.

    First, classifying behavior as good or bad and then assigning a consequence is not a simple matter. All too often, behavior is in the eye of the beholder. Many of the infractions for which students are disciplined have a subjective component, meaning that the situation is a bit ambiguous. Thus, how an educator interprets a situation can affect whether the behavior merits discipline, and if so, to what extent.

    Infractions such as "disruptive behavior," "disrespect," and "excessive noise," for example, are ambiguous and dependent on context, yet they are frequently provided as reasons for student discipline. 8 That is not to say that some form of discipline is unwarranted in these situations, or that all disciplinary circumstances are subjective, as certainly many have objective components. However, these subjective infractions constitute a very large portion of disciplinary incidents.

    There are no standardized ways of assessing many infractions, such as disobedient or disruptive behavior, though schools do attempt to delineate some parameters through codes of conduct and by outlining associated consequences. Yet subjectivity can still come into play. Teachers' experiences and automatic unconscious associations can shape their interpretation of situations that merit discipline, and can even contribute to discipline disparities based on a student's race.

    One study of discipline disparities 9 found that students of color were more likely to be sent to the office and face other disciplinary measures for offenses such as disrespect or excessive noise, which are subjective, while white students were more likely to be sent to the office for objective infractions, such as smoking or vandalism. (For more about discipline disparities, see "From Reaction to Prevention" by Russell J. Skiba and Daniel J. Losen.) Thus, in disciplinary situations that are a bit ambiguous (What qualifies as disrespect? How loud is too loud?), educators should be aware that their implicit associations may be contributing to their decisions without their conscious awareness or consent.

    Second, implicit attitudes toward specific racial groups can unconsciously affect disciplinary decisions. For example, extensive research has documented pervasive implicit associations that link African Americans, particularly males, to stereotypes such as aggression, criminality, or danger, even when explicit beliefs contradict these views. 10

    In education, these implicit associations can taint perceptions of the discipline severity required to ensure that the misbehaving student understands what he or she did wrong. In short, these unconscious associations can mean the difference between one student receiving a warning for a confrontation and another student being sent to school security personnel. In the words of researcher Carla R. Monroe, "Many teachers may not explicitly connect their disciplinary reactions to negative perceptions of Black males, yet systematic trends in disproportionality suggest that teachers may be implicitly guided by stereotypical perceptions that African American boys require greater control than their peers and are unlikely to respond to nonpunitive measures." 11

    A recent study from Stanford University sheds further light on this dynamic by highlighting how racial disparities in discipline can occur even when black and white students behave similarly. 12 In the experiment, researchers showed a racially diverse group of female K–12 teachers the school records of a fictitious middle school student who had misbehaved twice; both infractions were minor and unrelated. Requesting that the teachers imagine working at this school, researchers asked a range of questions related to how teachers perceived and would respond to the student's infractions. While the student discipline scenarios were identical, researchers manipulated the fictitious student's name: some teachers reviewed the record of a student given a stereotypically black name (e.g., Deshawn or Darnell), while others reviewed the record of a student with a stereotypically white name (e.g., Jake or Greg).

    Results indicated that from the first infraction to the second, teachers were more likely to escalate the disciplinary response to the second infraction when the student was perceived to be black as opposed to white. Moreover, a second part of the study, with a larger, more diverse sample that included both male and female teachers, found that infractions by a black student were more likely to be viewed as connected, meaning that the black student's misbehavior was seen as more indicative of a pattern, than when the same two infractions were committed by a white student. 13

    Another way in which implicit bias can operate in education is through confirmation bias: the unconscious tendency to seek information that confirms our preexisting beliefs, even when evidence exists to the contrary. The following example comes from the context of employee performance evaluations, but relevant parallels exist for K–12 teachers evaluating their students' work.

    A 2014 study explored how confirmation bias can unconsciously taint the evaluation of work that employees produce. Researchers created a fictitious legal memo that contained 22 different, deliberately planted errors. These errors included minor spelling and grammatical errors, as well as factual, analytical, and technical writing errors. The exact same memo was distributed to law firm partners under the guise of a "writing analysis study," 14 and they were asked to edit and evaluate the memo.

    Half of the memos listed the author as African American while the remaining portion listed the author as Caucasian. Findings indicated that memo evaluations hinged on the perceived race of the author. When the author was listed as African American, the evaluators found more of the embedded errors and rated the memo as lower quality than did those who believed the author was Caucasian. Researchers concluded that these findings suggest unconscious confirmation bias: despite the intention to be unbiased, "we see more errors when we expect to see errors, and we see fewer errors when we do not expect to see errors." 15

    While this study focused on the evaluation of a legal memo, it is not a stretch of the imagination to consider the activation of this implicit dynamic in grading student essays or evaluating other forms of subjective student performance. Confirmation bias represents yet another way in which implicit biases can challenge the best of explicit intentions.

    Finally, implicit biases can also shape teacher expectations of student achievement. For example, a 2010 study examined teachers' implicit and explicit ethnic biases, finding that their implicit—not explicit—biases were responsible for different expectations of achievement for students from different ethnic backgrounds. 16

    While these examples are a select few among many, together they provide a glimpse into how implicit biases can have detrimental effects for students, regardless of teachers' explicit goals. This raises the question: How can we better align our implicit biases with the explicit values we uphold?

    Mitigating the Influence of Implicit Bias

    Recognizing that implicit biases can yield inequitable outcomes even among well-intentioned individuals, a significant portion of implicit bias research has explored how individuals can change their implicit associations—in effect "reprogramming" their mental associations so that unconscious biases better align with explicit convictions. Thanks to the malleable nature of our brains, researchers have identified a few approaches that, often with time and repetition, can help inhibit preexisting implicit biases in favor of more egalitarian alternatives.

    With implicit biases operating outside of our conscious awareness and inaccessible through introspection, at first glance it might seem difficult to identify any that we may hold. Fortunately, researchers have identified several approaches for assessing these unconscious associations, one of which is the Implicit Association Test (IAT). Debuting in 1998, this free online test measures the relative strength of associations between pairs of concepts. Designed to tap into unconscious System 1 associations, the IAT is a response latency (i.e., reaction time) measure that assesses implicit associations through this key idea: when two concepts are highly associated, test takers will be faster at pairing those concepts (and make fewer mistakes doing so) than they will when two concepts are not as highly associated.*

    To illustrate, consider this example. Most people find the task of pairing flower types (e.g., orchid, daffodil, tulip) with positive words (e.g., pleasure, happy, cheer) easier than they do pairing flower types with negative words (e.g., rotten, ugly, filth). Because flowers typically have a positive connotation, people can quickly link flowers to positive terms and make few mistakes in doing so. In contrast, insect names (e.g., ants, cockroaches, mosquitoes) are likely to be easier for most people to pair with those negative terms than with positive ones. 17

    While this example is admittedly simplistic, these ideas laid the foundation for versions of the IAT that assess more complex social issues, such as race, gender, age, and sexual orientation, among others. Millions of people have taken the IAT, and extensive research has largely upheld the IAT as a valid and reliable measure of implicit associations. 18 There are IATs that assess both attitudes (i.e., positive or negative emotions toward various groups) and stereotypes (i.e., how quickly someone can connect a group to relevant stereotypes about that group at an implicit level).
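    The response-latency idea behind the IAT can be illustrated with a toy calculation. This is only a rough sketch, not the IAT's actual scoring algorithm (which is considerably more involved), and the millisecond values below are invented for illustration:

```python
import statistics

def mean_latency_difference(congruent_ms, incongruent_ms):
    """Return the mean reaction-time gap (ms) between incongruent and
    congruent pairing trials. A larger positive gap suggests a stronger
    association with the congruent pairing."""
    return statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)

# Hypothetical per-trial latencies, in milliseconds.
flowers_with_positive = [620, 580, 640, 600]  # congruent block: flowers + positive words
flowers_with_negative = [810, 790, 850, 770]  # incongruent block: flowers + negative words

gap = mean_latency_difference(flowers_with_positive, flowers_with_negative)
print(f"Mean latency gap: {gap:.0f} ms")  # a positive gap: flowers pair more easily with positive words
```

    In this toy example, the slower responses in the incongruent block produce a positive gap, mirroring the key idea in the text: highly associated concepts are paired faster and with fewer mistakes.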

    Educators can begin to address their implicit biases by taking the Implicit Association Test. Doing so will enable them to become consciously aware of some of the unconscious associations they may harbor. Research suggests that this conscious awareness of one's own implicit biases is a critical first step for counteracting their influence. 19 This awareness is especially crucial for educators to help ensure that their explicit intentions to help students learn and reach their full potential are not unintentionally thwarted by implicit biases.

    By identifying any discrepancies that may exist between conscious ideals and automatic implicit associations, individuals can take steps to bring those two into better alignment. One approach for changing implicit associations identified by researchers is intergroup contact: meaningfully engaging with individuals whose identities (e.g., race, ethnicity, religion) differ from your own. Certain conditions exist for optimal effects, such as equal status within the situation, a cooperative setting, and working toward common goals. 20 By getting to know people who differ from you on a real, personal level, you can begin to build new associations about the groups those individuals represent and break down existing implicit associations. 21

    Another approach that research has determined may help change implicit associations is exposure to counter-stereotypical exemplars: individuals who contradict widely held stereotypes. Some studies have shown that exposure to these exemplars may help individuals begin to automatically override their preexisting biases. 22 Examples of counter-stereotypical exemplars may include male nurses, female scientists, African American judges, and others who defy stereotypes.

    This approach for challenging biases is valuable not just for educators but also for the students they teach, as some scholars suggest that photographs and décor that expose individuals to counter-stereotypical exemplars can activate new mental associations. 23 While implicit associations may not change immediately, using counter-stereotypical images for classroom posters and other visuals may serve this purpose.

    Beyond changing cognitive associations, another strategy for mitigating implicit biases that relates directly to school discipline is data collection. Because implicit biases function outside of conscious awareness, identifying their influence can be challenging. Gathering meaningful data can bring to light trends and patterns in disparate treatment of individuals and throughout an institution that may otherwise go unnoticed.

    In the context of school discipline, relevant data may include the student's grade, the perceived infraction, the time of day it occurred, the name(s) of referring staff, and other relevant details and objective information related to the resulting disciplinary consequence. Information like this can facilitate a large-scale review of discipline measures and patterns and whether any connections to implicit biases may emerge. 24 Moreover, tracking discipline data over time and keeping implicit bias in mind can help create a school- or districtwide culture of accountability.
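    To make the data-collection idea concrete, a minimal sketch of tallying referral records follows. The record fields (grade, infraction, subjectivity, referring staff) track the suggestions above, but the schema and values are entirely hypothetical:

```python
from collections import Counter

# Hypothetical referral records; field names are illustrative, not a standard schema.
referrals = [
    {"grade": 7, "infraction": "disrespect",      "subjective": True,  "staff": "A"},
    {"grade": 8, "infraction": "excessive noise", "subjective": True,  "staff": "A"},
    {"grade": 7, "infraction": "vandalism",       "subjective": False, "staff": "B"},
    {"grade": 6, "infraction": "disrespect",      "subjective": True,  "staff": "B"},
]

# Tally referrals per staff member, split by whether the infraction is subjective,
# to surface patterns (e.g., one referrer generating mostly subjective referrals).
by_staff = Counter((r["staff"], r["subjective"]) for r in referrals)
for (staff, subjective), count in sorted(by_staff.items()):
    kind = "subjective" if subjective else "objective"
    print(f"Staff {staff}: {count} {kind} referral(s)")
```

    Even a simple tally like this can flag where subjective infractions cluster, which is the kind of pattern a school- or districtwide review might then examine with implicit bias in mind.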

    Finally, in the classroom, educators taking enough time to carefully process a situation before making a decision can minimize implicit bias. Doing so, of course, is easier said than done, given that educators are constantly pressed for time, face myriad challenges, and need crucial support from administrators to effectively manage student behavior.

    As noted earlier, System 1 unconscious associations operate extremely quickly. As a result, in circumstances where individuals face time constraints or have a lot on their minds, their brains tend to rely on those fast and automatic implicit associations. Research suggests that reducing cognitive load and allowing more time to process information can lead to less biased decision making. 25 In terms of school discipline, this can mean allowing educators time to reflect on the disciplinary situation at hand rather than make a hasty decision. 26

    While implicit biases can affect any moment of decision making, these unconscious associations should not be regarded as character flaws or other indicators of whether someone is a "good person" or not. Having the ability to use our System 1 cognition to make effortless, lightning-fast associations, such as knowing that a green traffic light means go, is crucial to our cognition.

    Rather, when we identify and reflect on the implicit biases we hold, we recognize that our life experiences may unconsciously shape our perceptions of others in ways that we may or may not consciously desire, and if the latter, we can take action to mitigate the influence of those associations.

    In light of the compelling body of implicit bias scholarship, teachers, administrators, and even policymakers are increasingly considering the role of unconscious bias in disciplinary situations. For example, the federal school discipline guidance jointly released by the U.S. departments of Education and Justice in January 2014 not only mentions implicit bias as a factor that may affect the administration of school discipline, it also encourages school personnel to receive implicit bias training. (For more information on that guidance, see "School Discipline and Federal Guidance.") Speaking not only to the importance of identifying implicit bias but also to mitigating its effects, the federal guidance asserts that this training can "enhance staff awareness of their implicit or unconscious biases and the harms associated with using or failing to counter racial and ethnic stereotypes." 27 Of course, teachers who voluntarily choose to pursue this training and explore this issue on their own can also generate interest among their colleagues, leading to more conversations and awareness.

    Accumulated research evidence indicates that implicit bias powerfully explains the persistence of many societal inequities, not just in education but also in other domains, such as criminal justice, healthcare, and employment. 28 While the notion of being biased is one that few individuals are eager to embrace, extensive social science and neuroscience research has connected individuals' System 1 unconscious associations to disparate outcomes, even among individuals who staunchly profess egalitarian intentions.

    In education, the real-life implications of implicit biases can create invisible barriers to opportunity and achievement for some students—a stark contrast to the values and intentions of educators and administrators who dedicate their professional lives to their students' success. Thus, it is critical for educators to identify any discrepancies that may exist between their conscious ideals and unconscious associations so that they can mitigate the effects of those implicit biases, thereby improving student outcomes and allowing students to reach their full potential.

    Cheryl Staats is a senior researcher at the Kirwan Institute for the Study of Race and Ethnicity, housed at Ohio State University.

    *Implicit Association Tests are publicly available through Project Implicit.

    Endnotes

    1. Tor Nørretranders, The User Illusion: Cutting Consciousness Down to Size (New York: Penguin, 1999).

    2. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).

    3. See, for example, George A. Miller, "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information," Psychological Review 63, no. 2 (1956): 81–97.

    4. See, for example, Janice A. Sabin, Brian A. Nosek, Anthony G. Greenwald, and Frederick P. Rivara, "Physicians' Implicit and Explicit Attitudes about Race by MD Race, Ethnicity, and Gender," Journal of Health Care for the Poor and Underserved 20 (2009): 896–913.

    5. See, for example, Joshua Correll, Bernadette Park, Charles M. Judd, Bernd Wittenbrink, Melody S. Sadler, and Tracie Keesee, "Across the Thin Blue Line: Police Officers and Racial Bias in the Decision to Shoot," Journal of Personality and Social Psychology 92 (2007): 1006–1023.

    6. See, for example, Jeffrey J. Rachlinski, Sheri Lynn Johnson, Andrew J. Wistrich, and Chris Guthrie, "Does Unconscious Racial Bias Affect Trial Judges?," Notre Dame Law Review 84 (2009): 1195–1246.

    7. Marianne Bertrand, Dolly Chugh, and Sendhil Mullainathan, "Implicit Discrimination," American Economic Review 95, no. 2 (2005): 94–98.

    8. See, for example, Cheryl Staats and Danya Contractor, Race and Discipline in Ohio Schools: What the Data Say (Columbus, OH: Kirwan Institute for the Study of Race and Ethnicity, 2014).

    9. Russell J. Skiba, Robert S. Michael, Abra Carroll Nardo, and Reece L. Peterson, "The Color of Discipline: Sources of Racial and Gender Disproportionality in School Punishment," Urban Review 34 (2002): 317–342.

    10. Jennifer L. Eberhardt, Phillip Atiba Goff, Valerie J. Purdie, and Paul G. Davies, "Seeing Black: Race, Crime, and Visual Processing," Journal of Personality and Social Psychology 87 (2004): 876–893.

    11. Carla R. Monroe, "Why Are 'Bad Boys' Always Black? Causes of Disproportionality in School Discipline and Recommendations for Change," The Clearing House: A Journal of Educational Strategies, Issues and Ideas 79 (2005): 46.

    12. Jason A. Okonofua and Jennifer L. Eberhardt, "Two Strikes: Race and the Disciplining of Young Students," Psychological Science 26 (2015): 617–624.

    13. Okonofua and Eberhardt, "Two Strikes."

    14. Arin N. Reeves, Written in Black & White: Exploring Confirmation Bias in Racialized Perceptions of Writing Skills (Chicago: Nextions, 2014).

    15. Reeves, Written in Black & White, 6.

    16. Linda van den Bergh, Eddie Denessen, Lisette Hornstra, Marinus Voeten, and Rob W. Holland, "The Implicit Prejudiced Attitudes of Teachers: Relations to Teacher Expectations and the Ethnic Achievement Gap," American Educational Research Journal 47 (2010): 497–527.

    17. This example is from Anthony G. Greenwald, Debbie E. McGhee, and Jordan L. K. Schwartz, "Measuring Individual Differences in Implicit Cognition: The Implicit Association Test," Journal of Personality and Social Psychology 74 (1998): 1464–1480.

    18. Brian A. Nosek, Anthony G. Greenwald, and Mahzarin R. Banaji, "The Implicit Association Test at Age 7: A Methodological and Conceptual Review," in Social Psychology and the Unconscious: The Automaticity of Higher Mental Processes, ed. John A. Bargh (New York: Psychology Press, 2007), 265–292.

    19. Patricia G. Devine, Patrick S. Forscher, Anthony J. Austin, and William T. L. Cox, "Long-Term Reduction in Implicit Bias: A Prejudice Habit-Breaking Intervention," Journal of Experimental Social Psychology 48 (2012): 1267–1278; and John F. Dovidio, Kerry Kawakami, Craig Johnson, Brenda Johnson, and Adaiah Howard, "On the Nature of Prejudice: Automatic and Controlled Processes," Journal of Experimental Social Psychology 33 (1997): 510–540.

    20. Gordon W. Allport, The Nature of Prejudice (Cambridge, MA: Addison-Wesley, 1954). Allport also recognizes a fourth condition for optimal intergroup contact, which is authority sanctioning the contact.

    21. Thomas F. Pettigrew and Linda R. Tropp, "A Meta-Analytic Test of Intergroup Contact Theory," Journal of Personality and Social Psychology 90 (2006): 751–783.

    22. Nilanjana Dasgupta and Anthony G. Greenwald, "On the Malleability of Automatic Attitudes: Combating Automatic Prejudice with Images of Admired and Disliked Individuals," Journal of Personality and Social Psychology 81 (2001): 800–814; and Nilanjana Dasgupta and Shaki Asgari, "Seeing Is Believing: Exposure to Counterstereotypic Women Leaders and Its Effect on the Malleability of Automatic Gender Stereotyping," Journal of Experimental Social Psychology 40 (2004): 642–658.

    23. Jerry Kang, Mark Bennett, Devon Carbado, et al., "Implicit Bias in the Courtroom," UCLA Law Review 59 (2012): 1124–1186.

    24. Kent McIntosh, Erik J. Girvan, Robert H. Horner, and Keith Smolkowski, "Education Not Incarceration: A Conceptual Model for Reducing Racial and Ethnic Disproportionality in School Discipline," Journal of Applied Research on Children: Informing Policy for Children at Risk 5, no. 2 (2014): art. 4.

    25. Diana J. Burgess, "Are Providers More Likely to Contribute to Healthcare Disparities under High Levels of Cognitive Load? How Features of the Healthcare Setting May Lead to Biases in Medical Decision Making," Medical Decision Making 30 (2010): 246–257.

    26. Prudence Carter, Russell Skiba, Mariella Arredondo, and Mica Pollock, You Can't Fix What You Don't Look At: Acknowledging Race in Addressing Racial Discipline Disparities, Disciplinary Disparities Briefing Paper Series (Bloomington, IN: Equity Project at Indiana University, 2014).

    27. U.S. Department of Education, Guiding Principles: A Resource Guide for Improving School Climate and Discipline (Washington, DC: Department of Education, 2014), 17.

    28. For more on implicit bias and its effects in various professions, see the Kirwan Institute's annual State of the Science: Implicit Bias Review publication.



      Comparing Self-Regulated Learning Models

      Next, the models will be compared along the following categories. First, the models’ number of citations. Second, the models are divided into their SRL phases and subprocesses, which are compared to extract conclusions. Third, SRL explores three main areas, (meta)cognition, motivation, and emotion; therefore, the positioning of each area in the six models is analyzed. Fourth, the SRL models present significant differences in three major aspects of conceptualization: top–down/bottom–up processing, automaticity, and context.

      Citations and Importance in the Field

      One word of advice before starting this section: the number of citations a model has garnered is an indicator that can be influenced by factors unrelated to the quality of the model. Important innovations can come from models that have not received as many citations. Nevertheless, it is an interesting indicator from which to extract some conclusions.

      In Table 1, the number of citations per model is presented. The Efklides and SSRL models have a lower total number, as they were published recently. Nevertheless, they show promising numbers in citations per year, which indicates their relevance. The models of Boekaerts and of Winne and Hadwin form a second group according to their number of citations. It is important to point out that Boekaerts and Corno’s (2005) study includes not only Boekaerts’ model, but also information about Corno’s and Kuhl’s models and, especially, a reflection on SRL measurement. Therefore, it is only a partial representation of the citations of Boekaerts’ model, but it is the most cited paper in which her model is presented. Winne and Hadwin’s (1998) book is the most cited work regarding their model, but it is not the original presentation of their work, as earlier discussed. Finally, Pintrich’s and Zimmerman’s models, both presented in the 2000 handbook, have the highest number of citations, with Zimmerman’s as the most cited.

      Table 1

      Number of citations of each SRL model’s main publication.

      | Model            | Publication               | Total citations | Citations/year∗ |
      | ---------------- | ------------------------- | --------------- | --------------- |
      | Boekaerts        | Boekaerts and Corno, 2005 | 1011            | 84.25           |
      | Efklides         | Efklides, 2011            | 251             | 41.83           |
      | Hadwin et al.    | Hadwin et al., 2011       | 196             | 32.67           |
      | Pintrich         | Pintrich, 2000            | 3416            | 200.94          |
      | Winne and Hadwin | Winne and Hadwin, 1998    | 1037            | 54.58           |
      | Zimmerman        | Zimmerman, 2000           | 4169            | 245.24          |
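The citations-per-year column is simply total citations divided by the years elapsed since publication. The figures in Table 1 are consistent with counts taken through 2017; that cut-off year is an inference from the numbers, not stated in the text. A minimal sketch of the arithmetic, with the table's values hard-coded:

```python
# Reproduce the citations-per-year column of Table 1.
# Assumption (inferred, not stated): citation counts run through 2017.
REF_YEAR = 2017

models = {
    "Boekaerts (2005)":        (2005, 1011),
    "Efklides (2011)":         (2011, 251),
    "Hadwin et al. (2011)":    (2011, 196),
    "Pintrich (2000)":         (2000, 3416),
    "Winne and Hadwin (1998)": (1998, 1037),
    "Zimmerman (2000)":        (2000, 4169),
}

for name, (year, total) in models.items():
    per_year = total / (REF_YEAR - year)
    print(f"{name:26s} {total:5d} total  {per_year:7.2f}/year")
```

Dividing each total by (2017 − publication year) reproduces every per-year value in the table, which is why 2017 is the assumed reference year.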

      If we compare the numbers for the four older models, Pintrich’s and Zimmerman’s models have been more widely used than Boekaerts’ and Winne and Hadwin’s. There are two probable causes. The first is that Pintrich’s and Zimmerman’s models are more comprehensive and easier to understand and apply in classrooms (Dignath et al., 2008): both include a more complete vision of the different types of subprocesses. If we compare the four models’ figures, it is salient that Zimmerman and Pintrich (a) present more specific subprocesses than Boekaerts and (b) include motivational and emotional aspects that are not directly presented by Winne and Hadwin. The second cause is that Boekaerts’ model and Winne and Hadwin’s are slightly less intuitive, and a deeper understanding of the underpinning theory is needed for a correct application. This is not to say that these two models are less relevant than the others; on the contrary, both cover in depth two critical aspects of SRL: emotion regulation and metacognition. Finally, Moos and Ringdal’s (2012) review of the teacher’s role in SRL in the classroom found that Zimmerman’s model has been predominant in that line of research, as it offers “a robust explanatory lens,” which, as these authors propose, might help the most when working with teachers.

      Phases and Subprocesses

      All of the model authors agree that SRL is cyclical, composed of different phases and subprocesses. However, the models present different phases and subprocesses, and by identifying them we can extract some conclusions. In general terms, Puustinen and Pulkkinen’s (2001) review concluded that the models they analyzed had three identifiable phases: (a) preparatory, which includes task analysis, planning, activation of goals, and goal setting; (b) performance, in which the actual task is done while monitoring and controlling the progress of performance; and (c) appraisal, in which the student reflects, regulates, and adapts for future performances. What is the conceptualization of SRL phases in the two added models? (see Table 2). First, Efklides (2011) does not clearly state an appraisal phase in her model, although she considers that the Person level is influenced after repeated performances of a task. Second, the SSRL model in its 2011 version, although strongly influenced by Winne and Hadwin’s, presents four phases that are similar to Pintrich’s but with different labels. Therefore, the SSRL model’s classification in the table is the same one that Puustinen and Pulkkinen (2001) proposed for Pintrich’s.

      Table 2

      | Models | Preparatory phase | Performance phase | Appraisal phase |
      | --- | --- | --- | --- |
      | Boekaerts | Identification, interpretation, primary and secondary appraisal, goal setting | Goal striving | Performance feedback |
      | Efklides | Task representation | Cognitive processing, performance | |
      | Hadwin et al., 2011 | Planning | Monitoring, control | Regulating |
      | Hadwin et al. (in press)∗ | Negotiating and awareness of the task | Strategic task engagement | Adaptation |
      | Pintrich | Forethought, planning, activation | Monitoring, control | Reaction and reflection |
      | Winne and Hadwin | Task definition, goal setting and planning | Applying tactics and strategies | Adapting metacognition |
      | Zimmerman | Forethought (task analysis, self-motivation) | Performance (self-control, self-observation) | Self-reflection (self-judgment, self-reaction) |

      What can be concluded? Even though all of the models considered here, except Efklides’, can be conceptualized around the three phases proposed by Puustinen and Pulkkinen, two conceptualizations of the SRL phases can be distinguished. First, some models emphasize a clearer distinction among the phases and the subprocesses that occur in each of them. Zimmerman’s and Pintrich’s models belong to this group, each having very distinct features for each phase. Those in the second group (the Winne and Hadwin, Boekaerts, Efklides, and SSRL models, the latter in its forthcoming version) convey more explicitly that SRL is an “open” process with recursive phases, not as delimited as in the first group. For example, Winne and Hadwin’s figure does not make a clear distinction between the phases and the processes that belong to each: SRL is presented as a feedback loop that evolves over time. It is only through the text accompanying the figure that Winne and Hadwin (1998) clarified that they were proposing four phases.

      One implication of this difference could lie in how to intervene according to the different models. The first group of models might allow for more specific interventions, because measuring their effects might be more feasible. For example, if a teacher recognizes that one of her students has a motivation problem while performing a task, applying some of the subprocesses Zimmerman proposes at that particular phase (e.g., self-consequences) might have a positive outcome. On the other hand, the second group of models might suggest more holistic interventions, as they perceive SRL as a more continuous process composed of more closely interrelated subprocesses. This hypothesis, though, would need to be explored in the future.

      (Meta)cognition, Motivation, and Emotion

      Next, the three main areas of SRL activity and how each model conceptualizes them will be explored. The interpretation is guided by the models’ figures, as they reveal the most important SRL aspects for each author. A classification based on different levels of emphasis for the three aforementioned areas is proposed (Table 3). It is important to clarify that the levels were conceptualized not as discrete categories, but rather as positions on a continuum.

      Table 3

      Comparison of the models’ figures on cognition, motivation, and emotion.

      | Levels of relevance | Cognition | Motivation | Emotion |
      | --- | --- | --- | --- |
      | First (more emphasis) | Winne, Efklides, SSRL | Zimmerman, Boekaerts, Pintrich | Boekaerts |
      | Second | Pintrich, Zimmerman | SSRL, Efklides, Winne | Zimmerman/Pintrich, SSRL |
      | Third (less emphasis) | Boekaerts | | Efklides, Winne |

      (Meta)cognition

      Three levels are considered with regard to (meta)cognition. The first level includes models with a strong emphasis on (meta)cognition. The first model at this level is Winne and Hadwin’s, in which the predominant processes are metacognitive: “Metacognitive monitoring is the gateway to self-regulating one’s learning” (Winne and Perry, 2000, p. 540). Efklides’ model includes motivational and affective aspects, but the metacognitive ones are defined in more detail at the Task × Person level and are the ones with more substance. Finally, the SSRL model includes in its forthcoming version the COPES architecture from Winne and Hadwin; however, because the SSRL 2011 version did not emphasize (meta)cognition, it was decided to place it after the two more metacognitive models. At the second level are Pintrich’s and Zimmerman’s models. Pintrich (2000) incorporates the “regulation of cognition,” which has a central role along with aspects of metacognitive theory such as FOKs and FOLs. Zimmerman (2000) presents a number of leading cognitive/metacognitive strategies, but they are not emphasized over the motivational ones, as is the case in the models just discussed. At the third level, Boekaerts includes the use of (meta)cognitive strategies in her figures, but does not explicitly refer to specific strategies.

      Motivation

      A two-level classification is proposed. The Zimmerman, Boekaerts, and Pintrich models are at the first level. Zimmerman’s own definition of SRL explicitly states the importance of goals and presents SRL as a goal-driven activity. In his model, in the forethought phase, self-motivation beliefs are a crucial component; the performance phase was originally described (Zimmerman, 2000) as performance/volitional control, which indicates how important volition is; and at the self-reflection phase, self-reactions affect the motivation to perform the task in the future. According to Boekaerts, the students “interpret” the learning task and context, and then activate two different goal paths. Those pathways are the ones that lead the regulatory actions that the students do (or do not) activate (e.g., Boekaerts and Niemivirta, 2000). In addition, Boekaerts also included motivational beliefs in her models as a key aspect of SRL (see Figure 7). Finally, Pintrich (2000) also included a motivation/affect area in his model that considers aspects similar to those in Zimmerman’s, but Pintrich placed a greater emphasis on metacognition. It is also important to mention that Pintrich conducted the first research exploring the role of goal orientation in SRL (Pintrich and de Groot, 1990).

      The second level includes the SSRL, Efklides, and Winne and Hadwin models. SSRL included motivation in the 2011 version figure and emphasized its role in collaborative learning situations, but without differentiating motivational components in detail. Nevertheless, the authors have conducted a significant amount of research regarding motivation and its regulation at the group level (e.g., Järvelä et al., 2013). Finally, Winne and Hadwin (1998) and Efklides (2011) included motivation in their models, but it is not their main focus of analysis.

      Emotion

      Three levels are proposed. At the first, Boekaerts (1991; Boekaerts and Niemivirta, 2000) emphasizes the influence of emotions on students’ goals and how this activates two possible pathways and different strategies. For Boekaerts, ego protection plays a crucial role in the well-being pathway, and for that reason it is essential for students to have strategies to regulate their emotions, so that they will instead activate the learning pathway. At the second level, Pintrich (2000) and Zimmerman (2000) shared similar interpretations of emotions. They both put the most emphasis on the reactions (i.e., attributions and affective reactions) that occur when students self-evaluate their work during the last SRL phase. In addition, both mentioned strategies to control and monitor emotions during performance: Pintrich discusses “awareness and monitoring” and “selection and adaptation of strategies to manage” (Pintrich, 2000), and Zimmerman stated that imagery and self-consequences can be used by students to self-induce positive emotions (Zimmerman and Moylan, 2009). Nevertheless, in the preparatory phases, neither of them mentions emotions directly. Yet, Zimmerman argues that self-efficacy, which is included in his forethought phase, is a better predictor of performance at that phase than emotions or emotion regulation (Zimmerman, B. J., personal communication with the author, 28/02/2014). The SSRL model includes emotion in its 2011 version figure (Hadwin et al., 2011), but the subprocesses that underlie the regulation of emotion are not specified. Nonetheless, these authors clearly argue that collaborative learning situations present significant emotional challenges, and they have conducted empirical studies exploring this matter (e.g., Järvenoja and Järvelä, 2009; Koivuniemi et al., 2017).
      Finally, Efklides (2011) and Winne and Hadwin (e.g., Winne, 2011) mention the role of emotions in SRL [e.g., “it may directly impact metacognitive experiences as in the case of mood” (Efklides, 2011, p. 19), and she included it in her model at two levels]. However, they do not place a major emphasis on emotion-regulation strategies.

      Three Additional Areas for a Comparison

      As mentioned earlier, three additional areas in which the models present salient differences were identified.

      Top–Down/Bottom–Up (TD/BU)

      The first model to include this categorization of self-regulation was Boekaerts and Niemivirta’s (2000). Top–down is the mastery/growth pathway, in which the learning/task goals are more relevant for the student. Bottom–up, on the other hand, is the well-being pathway, in which students activate goals to protect their self-concept (i.e., self-esteem) from being damaged, also known as ego protection. Efklides (2011) also uses this categorization, but with different implications. For her, top–down regulation occurs when goals are set in accordance with the person’s characteristics (e.g., cognitive ability, self-concept, attitudes, emotions, etc.), and self-regulation is guided by those personal goals. Bottom–up regulation occurs when it is data-driven, i.e., when the specifics of performing the task (e.g., the monitoring of task progress) direct and regulate the student’s actions. In other words, the cognitive processes are the main focus while the student is trying to perform the task.

      The other models do not explore this categorization explicitly, although some implicit interpretations can be extracted. Thus, there could be a third view of TD/BU, based on the interactive nature of Zimmerman’s model and of Winne and Hadwin’s. Zimmerman (personal communication to the author, 27/02/2014) explained:

      Historically, top–down theories have been cognitive and have emphasized personal beliefs and mental processes as primary (e.g., Information Processing theories). By contrast, bottom–up theories have been behavioral and have emphasized actions and environments as primary (e.g., Behavior Modification theories). When Bandura (1977) developed social cognitive theory, he concluded that both positions were half correct: both were important. His theory integrates both viewpoints using a triadic depiction. I contend that his formulation is neither top–down [n]or bottom–up but rather interactionist where cognitive processes bi-directionally cause and are caused by behavior and environment. My cyclical model of SRL elaborates these triadic components and describes their interaction in repeated terms of cycles of feedback. Thus, any variable in this model (e.g., a student’s self-efficacy beliefs) is subject to change during the next feedback cycle…. There are countless examples of people without goals who experience success in sport, music, art, or academia and subsequently develop strong goals in the process. Interactionist theories emphasize developing one’s goals as much as following them.

      Winne (personal communication to the author 27/02/2014) stated:

      I didn’t introduce this terminology because it is limiting. A vital characteristic of SRL is cycles of information flow rather than one-directional flow of information. Some cycles are internal to the person and others cross the boundary between person and environment.

      In sum, Zimmerman and Winne do not consider TD/BU to be applicable to their models, as the recursive cycles of feedback during performance generate self-regulation and changes in the specificity of the goals.

      As Pintrich’s (2000) model is goal-driven, it could be assumed that it conceptualizes top–down motivation as coming from personal characteristics, as proposed by Efklides (2011). Nevertheless, Pintrich also included goal orientation, which implicates performance and avoidance goals; these have a connection to Boekaerts’ well-being pathway, especially the avoidance goals. Therefore, it is difficult to discern with any precision what the interpretation of TD/BU would be for his model. The SSRL model (Hadwin et al., 2011) has not yet clarified this issue, though a stance similar to that of Winne and Hadwin could be presupposed.

      Automaticity

      In SRL, automaticity usually refers to underlying processes that have become an automatic response pattern (Bargh and Barndollar, 1996; Moors and De Houwer, 2006; Winne, 2011). It is frequently used to refer to (meta)cognitive processes: some authors maintain that, for SRL to occur, some processes must become automatic so that the student carries less cognitive load and can then activate strategies (e.g., Zimmerman and Kitsantas, 2005; Winne, 2011). However, it can also refer to motivational and emotional processes that occur without the student’s awareness (e.g., Boekaerts, 2011). Next, some quotations from the models on this topic are presented to illustrate the different perspectives on automaticity. Winne (2011) stated:

      Most cognition is carried out without learners needing either to deliberate about doing it or to control fine-grained details of how it unfolds… Some researchers describe such cognition as “unconscious” but I prefer the label implicit. Because so much of cognitive activity is implicit, learners are infrequently aware of their cognition. There are two qualifications. First, cognition can change from implicit to explicit when errors and obstacles arise. But, second, unless learners trace cognitive products as tangible representations – “notes to self” or underlines that signal discriminations about key ideas, for example – the track [of] cognitive events across time can be unreliable, a fleeting memory (p. 18).

      This conception of the SRL functioning at the Task × Person level presupposes a cognitive architecture in which there are conscious analytic processes and explicit knowledge as well as non-conscious automatic processes and implicit knowledge that have a direct effect on behavior (p. 13).

      Boekaerts also assumed that automaticity can play a crucial role in the different pathways that students might activate: “Bargh’s (1990) position is that goal activation can be automatic or deliberate, and Bargh and Barndollar (1996) demonstrated that some goals may be activated or triggered directly by environmental cues, outside the awareness of the individual” (Boekaerts and Niemivirta, 2000, p. 422). Pintrich (2000) specified: “At some level, this process of activation of prior knowledge can and does happen automatically and without conscious thought” (p. 457). Finally, Zimmerman and Moylan (2009) asserted:

      In terms of their impact on forethought, process goals are designed to incorporate strategic planning, combining two key task analysis processes. With studying and/or practice, students will eventually use the strategy automatically. Automization occurs when a strategy can be executed without close metacognitive monitoring. At the point of automization, students can benefit from outcome feedback because it helps them to adapt their performance based on their own personal capabilities, such as when a basketball free throw shooter adjusts their throwing strategy based on the results of their last shot. However, even experts will encounter subsequent difficulties after a strategy becomes automatic, and this will require them to shift their monitoring back from outcomes to processes (p. 307).

      Thus, automaticity is an important aspect in the majority of the models. Here, there are three points for reflection. First, there are automatic actions that affect SRL; for example, Pintrich (2000) mentioned access to prior knowledge, and Boekaerts (2011) discussed goal activation. Second, we can assume that even self-regulation, when it is understood to be the enactment of a number of learning strategies to reach students’ goals, can happen implicitly, as proposed by Winne (2011). This means that students can be so advanced in their use of SRL strategies that they do not need an explicit, conscious, purposive action to act strategically. Nevertheless, this takes practice. Third, some automatic reactions, particularly some emotions and even some complex emotion-regulation strategies, may not be positive for learning (Bargh and Williams, 2007). For example, Boekaerts (2011) mentions that the well-being pathway can be activated even when students are not aware of it. Therefore, helping students become aware of those negative automatic processes could enhance self-regulation oriented toward learning.

      Context

      The SSRL model emphasizes not only the role of context, but also the ability of different external sources (group members, teachers, etc.) to promote individual self-regulation by exerting social influence (CoRL), and of groups of students to regulate jointly while they are collaborating (SSRL) (Järvelä and Hadwin, 2013). Zimmerman (2000) did not include context in his Cyclical Phases model, apart from a minor reference in the specific strategy “environmental structuring.” However, in his Triadic and Multi-level models, the influence of context and vicarious learning is key to the development of self-regulatory skills (Zimmerman, 2013). Boekaerts and Niemivirta (2000) posit that students’ interpretation of the context activates different goal pathways and that previous experiences affect the different roles that students adopt in their classrooms (e.g., joker, geek). In the Winne (1996), Pintrich (2000), and Efklides (2011) models, context is: (1) important for adapting to the task demands, and (2) part of the feedback loops, as students receive information from the context and adapt their strategies accordingly. In sum, all of the models include context as a significant variable in SRL. Nevertheless, with the exception of Hadwin, Järvelä, and Miller’s work, not much research has been conducted by the others on how significant others or the task context affect SRL.


      DUAL PROCESS MODELS OF INFORMATION PROCESSING

      In the field of psychology, dual-process models are used to explain the dynamics and development of broad domains of functioning. These domains include attention, cognition, emotion, and social behavior (e.g., Barrett et al., 2004; Eisenberg et al., 1994; MacDonald, 2008; Norman and Shallice, 1986; Rothbart and Bates, 2006; Rothbart and Derryberry, 1981; Strack and Deutsch, 2004). The overarching theme of these models is that human information processing involves at least two complementary strategies. The first involves processing information in an automatic, stimulus-driven, and reflexive way; the second involves more controlled, goal-directed, and contemplative approaches. These systems are engaged by different stimulus properties and demands, have unique neural underpinnings, support different forms of learning, and provide potentially competing response pathways (Corbetta and Shulman, 2002). Engagement of these systems occurs on a relative rather than absolute scale, such that few behaviors are completely dominated by one or the other mode of processing. Rather, differences in behavior are explained by the relative balance of these two modes of processing in any given context. Several disparate lines of research suggest that individual differences in health and adaptation reflect the way in which these dual modes functionally integrate in the service of adaptation (e.g., Carver et al., 2009; Derryberry and Rothbart, 1997).

      For BI children, the deployment of automatic and controlled modes of processing in motivationally and emotionally significant contexts appears particularly relevant. Such contexts contain signals of reward and punishment, stimuli that organisms will expend effort to approach or avoid. In such contexts, motivationally significant cues engage automatic modes of processing and trigger reflexive and rapidly deployed responses. As such, automatic information-processing modes are central to evolutionary theories emphasizing the adaptive function of rapid approach- and avoidance-related strategies. When children with a history of BI enter novel contexts, they tend to remain on the periphery, carefully watching but not engaging with novel objects or people. In such contexts, a state of hypervigilance supports detailed processing of stimulus features but limits the more flexible and integrative processing of the broader context, which is necessary for fluid, reciprocal social interactions. From a neural perspective, automatic modes of processing engage a network of brain regions centered on subcortical, medial temporal structures, particularly the amygdala and anterior hippocampus, as well as the components of the ventral prefrontal cortex (PFC) that are most heavily connected to these structures (Braver et al., 2007; Posner, 2012). These subcortical structures are relatively old from an evolutionary perspective and relatively conserved across mammals, reflecting the adaptive advantage of this automatic, rapid mode of responding.

      Whereas the automatic mode narrows attention to remain responsive to immediately present threats and rewards, the controlled mode is recruited when behavior is goal-directed and depends on the active maintenance of task-related goals, even if those goals are far removed from the immediate context. This control mode is described as reflective, endogenous, strategic, logical, and effortful. It incorporates information beyond that which is immediately present, supporting more planful, reasoned, and goal-directed behavior than behaviors regulated by the automatic mode. For example, engagement of controlled processing in novel contexts may allow BI children to attend to and process novel situations more flexibly and to access and implement previously learned social scripts. Moreover, controlled processing maintains a prolonged influence on behavior relative to the quick, short-acting influence of the automatic mode. Controlled processes place extensive cognitive demands on the organism, including working memory and self-monitoring, and are therefore more resource-demanding, less efficient, and more slowly engaged than automatic modes of processing. Consistent with this demanding, complex nature, the controlled mode shows a later, more prolonged developmental time course relative to the automatic, reflexive modes of processing that guide behavior from birth.

      Controlled processing is further distinguished from automatic processing based on underlying neural systems. Controlled processes engage a network centered on the dorsolateral PFC (DLPFC). The DLPFC in turn draws on other regions that have a role in both controlled and automatic processing. These include the dorsal anterior cingulate gyrus, anterior insula with expanses onto the ventro-lateral PFC, and basal ganglia. Of note, this DLPFC-centered network encompasses regions, particularly so-called ‘granular’ components of PFC, which evolved relatively late, compared with the brain regions that support automatic modes of processing. Considerable debate remains on the precise adaptive function conferred by these evolutionary changes in brain anatomy. Nevertheless, many compelling theories emphasize the role of this network in flexible maintenance of goal-directed behaviors in contexts where stimulus contingencies change rapidly. Thus, for humans, the complex and rapidly changing nature of social interactions could represent one instance where flexible maintenance of goals in changing contexts confers a particularly important adaptive advantage.


      Method

      Participants

      Mechanical Turk was used to sample 666 participants who were older than 18 years. Consistent with cultural analyses of Mechanical Turk 16 , the majority of users came from the USA (n = 418) and India (n = 225) with the remainder coming from 20 other countries (n = 23). All participants were paid USD

      Discussion

      This study examined whether own-age biases affect the initial interpretation of an image at a subconscious level. To test this, the classic young/old lady ambiguous figure was administered to a group of participants of varying ages using Mechanical Turk. Although the estimated-age data are bimodal, there is a bias towards reporting a younger woman, possibly reflecting a default ‘younger’ response. As noted by Georgiades and Harris 15 , participants are biased towards reporting a younger woman by 70%. This response bias may be the brain’s default interpretation, which is overcome only when the social in-group favours an ‘older’ response.
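A response bias of this kind can be checked against chance with an exact binomial test. The sketch below is illustrative only: the sample size of 100 and the count of 70 'young' reports are hypothetical stand-ins, not figures from this study or from Georgiades and Harris.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    p_obs = probs[k]
    return sum(q for q in probs if q <= p_obs + 1e-12)

# Hypothetical illustration: 70 'young woman' reports out of 100
# responses, tested against a 50/50 null of no response bias.
print(f"p = {binom_two_sided_p(70, 100):.6f}")
```

With a split this far from 50/50, the p-value falls well below conventional significance thresholds, which is why a 70% rate would be read as a genuine default rather than chance.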

      A median split was used to sort the participants into groups of younger and older respondents. Analyses of the different groups revealed that younger participants estimated the woman’s age to be 6.3 years younger than the older participants. This difference in estimated age increased to 12.1 when the very-youngest and -oldest participants were selected. Both split analyses were supported by a simple correlational analysis, which showed that, as the age of the observer increased, so too did the estimated age of the woman. The consistency of the association between estimated age and participants’ age across the different types of split and the correlation analysis demonstrates that the effect is not an artefact of the way we analysed the data. The effect of the observer’s age on the estimated age of the woman is consistent with an own-age social group bias. Within the respective age-groups, participants have a bias towards processing faces of a similar age. A strong delineation between younger and older people in Western society in general and within the USA in particular 17,18 may have precipitated social in- and out-groups, which is known to affect face processing.

      The own-age bias may have been stronger for younger- compared to older-participants as reflected in lower standard deviations for the younger and very-young groups (SD = 13.55 & 14.16, respectively) compared to the older and very-old groups (SD = 18.73 & 21.63, respectively). A larger variation in estimated age for the older participants is in line with an exposure effect 7 which may reduce the own age bias for this group 6 .

      When participants engaged in the task, they were naïve in relation to the age-related aims of the study and did not expect the young/old ambiguous figure. The image was also displayed briefly for 500 ms. Both procedures ensured that any biases in the reported age of the woman reflected the operation of a preconscious perceptual process. Bearing this in mind, we believe that our data demonstrate that high-level social/group processes have a subconscious effect on low-level face detection mechanisms. Bar 21 describes a neural mechanism to explain the effect of top-down facilitation of object recognition. In this model, a partially analysed version of the image is sent from early visual centres to the prefrontal cortex. This image then interacts with higher-level expectations of the image and is then sent as an ‘initial guess’ to the temporal cortex where it integrates with bottom-up mechanisms. In the current study, we believe that a partially analysed version of the ambiguous figure is passed through to frontal regions where social predispositions bias the interpretation towards an in-group outcome, which is subsequently fed-back to the decision-making mechanism.

      Future research could rule out the possibility that the effect of the observer’s age on perceived age is specific to the bi-stable image used in this study. It is possible that participants simply estimate an age for the illusion that is closer to their own. This could be tested by simply picking a middle-aged face and asking participants to estimate the age of the face. Alternatively, the discrimination could be made orthogonal to the dimension of interest by asking participants to determine whether the face is looking to the side (old lady) or away (young lady) from the viewer.

      .30 for their time.

      While Mechanical Turk has several distinct advantages for data collection 19 , there are also reports that users pay less attention to experimental materials 20 . To select attentive participants, we included two attention-check questions so that participants could be selected on an a priori basis (see procedure for details). Participants were also required to provide valid answers to the demographic questions as well as estimate the lady’s age to be older than 18 years.

      Initial analyses of compliance revealed marked differences between the USA and India. For people from India, 55% failed the attention-check test whereas only 6% failed from the USA. Given the poor attention-check results for participants from India and the possibility that many of them may not have understood the task instructions, or that the young/old lady illusion is culturally specific, the current sample was limited to participants from the USA. There were therefore 393 participants (m = 242, f = 151) from the USA in the final sample. The mean age of the sample was 32.87 years (SD = 10.07) with a range of 18 to 68 years. The distribution of age was positively skewed with a strong bias towards younger participants. This bias, which most likely reflects familiarity with computers, meant that only five participants were over 60 years of age. The method and experimental procedure of the present research was approved by, and carried out in accordance with, the guidelines of the Social and Behavioural Research Ethics Committee at Flinders University.

      Stimuli and Procedure

      Participants were recruited using Mechanical Turk. Informed consent was obtained from all participants prior to participation. After agreeing to participate, demographic data were collected, including the participant’s age (in years), sex, and country of residence. Participants were then readied for the presentation of the young/old lady bistable image - copied from the one used by Boring 14 (see Fig. 1). The ambiguous image was subsequently presented for 500 ms, after which the display was cleared. To verify that participants had seen the image in one of its forms, two questions were then asked: “Did you see a person or an animal?” (possible responses: person/animal/neither) and, if this was answered correctly, “What was the sex of the person?” (possible responses: male/female/don’t know). Participants who answered both questions correctly were then asked to estimate the age of the woman in years. The testing session was terminated for participants who gave incorrect responses to either of the attention-check questions. The entire testing session took less than five minutes.



      Memory is such an everyday thing that we almost take it for granted. We all remember what we had for breakfast this morning or what we did last weekend. It's only when memory starts to fail that we appreciate just how amazing it is and how much we allow our past experiences to define us.

      But memory is not always a good thing. As the American poet and clergyman John Lancaster Spalding once said, "As memory may be a paradise from which we cannot be driven, it may also be a hell from which we cannot escape." Many of us experience chapters of our lives that we would prefer to never have happened. It is estimated that nearly 90 percent of us will experience some sort of traumatic event during our lifetimes. Many of us will suffer acutely following these events and then recover, maybe even become better people because of those experiences. But some events are so extreme that many — up to half of those who survive sexual violence, for example — will go on to develop post-traumatic stress disorder, or PTSD.

      PTSD is a debilitating mental health condition characterized by symptoms such as intense fear and anxiety and flashbacks of the traumatic event. These symptoms have a huge impact on a person's quality of life and are often triggered by particular situations or cues in that person's environment. The responses to those cues may have been adaptive when they were first learned — fear and diving for cover in a war zone, for example — but in PTSD, they continue to control behavior when it's no longer appropriate. If a combat veteran returns home and is diving for cover when he or she hears a car backfiring or can't leave their own home because of intense anxiety, then the responses to those cues, those memories, have become what we would refer to as maladaptive. In this way, we can think of PTSD as being a disorder of maladaptive memory.

      Now, I should stop myself here, because I'm talking about memory as if it's a single thing. It isn't. There are many different types of memory, and these depend upon different circuits and regions within the brain. The major distinction is between two types of memory. There are those memories that we're consciously aware of, where we know we know and that we can pass on in words. This would include memories for facts and events. Because we can declare these memories, we refer to them as declarative memories.

      The other type of memory is non-declarative. These are memories where we often don't have conscious access to the content of those memories and that we can't pass on in words. The classic example of a non-declarative memory is the motor skill for riding a bike. Now, this being Cambridge, the odds are that you can ride a bike. You know what you're doing on two wheels. But if I asked you to write me a list of instructions that would teach me how to ride a bike, as my four-year-old son did when we bought him a bike for his last birthday, you would really struggle to do that. How should you sit on the bike so you're balanced? How fast do you need to pedal so you're stable? If a gust of wind comes at you, which muscles should you tense and by how much so that you don't get blown off? I'll be staggered if you can give the answers to those questions. But if you can ride a bike, you do have the answers, you're just not consciously aware of them.

      Getting back to PTSD, another type of non-declarative memory is emotional memory. Now, this has a specific meaning in psychology and refers to our ability to learn about cues in our environment and their emotional and motivational significance. What do I mean by that? Well, think of a cue like the smell of baking bread, or a more abstract cue like a 20-pound note. Because these cues have been pegged with good things in the past, we like them and we approach them. Other cues, like the buzzing of a wasp, elicit very negative emotions and quite dramatic avoidance behavior in some people.

      Now, I hate wasps. I can tell you that fact. But what I can't give you are the non-declarative emotional memories for how I react when there's a wasp nearby. I can't give you the racing heart, the sweaty palms, that sense of rising panic. I can describe them to you, but I can't give them to you. Now, importantly, from the perspective of PTSD, stress has very different effects on declarative and non-declarative memories and the brain circuits and regions supporting them. Emotional memory is supported by a small almond-shaped structure called the amygdala and its connections. Declarative memory, especially the what, where and when of event memory, is supported by a seahorse-shaped region of the brain called the hippocampus.

      The extreme levels of stress experienced during trauma have very different effects on these two structures. As you increase a person's level of stress from not stressful to slightly stressful, the hippocampus, acting to support the event memory, increases its activity and works better to support the storage of that declarative memory. But as you increase to moderately stressful, intensely stressful and then extremely stressful, as would be found in trauma, the hippocampus effectively shuts down. This means that under the high levels of stress hormones experienced during trauma, we are not storing the specific details of what, where and when.

      Now, while stress is doing that to the hippocampus, look at what it does to the amygdala, that structure important for the emotional, non-declarative memory. Its activity gets stronger and stronger. So what this leaves us with in PTSD is an overly strong emotional — in this case fear — memory that is not tied to a specific time or place, because the hippocampus is not storing what, where and when. In this way, these cues can control behavior when it's no longer appropriate, and that's how they become maladaptive. So if we know that PTSD is due to maladaptive memories, can we use that knowledge to improve treatment outcomes for patients with PTSD?

      A radical new approach being developed to treat post-traumatic stress disorder aims to destroy those maladaptive emotional memories that underlie the disorder. This approach has only been considered a possibility because of the profound changes in our understanding of memory in recent years. Traditionally, it was thought that making a memory was like writing in a notebook in pen: once the ink had dried, you couldn't change the information. It was thought that all those structural changes that happen in the brain to support the storage of memory were finished within about six hours, and after that, they were permanent. This is known as the consolidation view.

      However, more recent research suggests that making a memory is actually more like writing in a word processor. We initially make the memory and then we save it or store it. But under the right conditions, we can edit that memory. This reconsolidation view suggests that those structural changes that happen in the brain to support memory can be undone, even for old memories.

      Now, this editing process isn't happening all the time. It only happens under very specific conditions of memory retrieval. So let's consider memory retrieval as being recalling the memory or, like, opening the file. Quite often, we are simply retrieving the memory. We're opening the file as read-only. But under the right conditions, we can open that file in edit mode, and then we can change the information. In theory, we could delete the content of that file, and when we press save, that is how the file — the memory — persists. Not only does this reconsolidation view allow us to account for some of the quirks of memory, like how we all sometimes misremember the past, it also gives us a way to destroy those maladaptive fear memories that underlie PTSD. All we would need would be two things: a way of making the memory unstable — opening that file in edit mode — and a way to delete the information.

      We've made the most progress with working out how to delete the information. It was found fairly early on that a drug widely prescribed to control blood pressure in humans — a beta-blocker called Propranolol — could be used to prevent the reconsolidation of fear memories in rats. If Propranolol was given while the memory was in edit mode, rats behaved as if they were no longer afraid of a frightening trigger cue. It was as if they had never learned to be afraid of that cue. And this was with a drug that was safe for use in humans. Now, not long after that, it was shown that Propranolol could destroy fear memories in humans as well, but critically, it only works if the memory is in edit mode.

      Now, that study was with healthy human volunteers, but it's important because it shows that the rat findings can be extended to humans and ultimately, to human patients. And with humans, you can test whether destroying the non-declarative emotional memory does anything to the declarative event memory. And this is really interesting. Even though people who were given Propranolol while the memory was in edit mode were no longer afraid of that frightening trigger cue, they could still describe the relationship between the cue and the frightening outcome. It was as if they knew they should be afraid, and yet they weren't. This suggests that Propranolol can selectively target the non-declarative emotional memory but leave the declarative event memory intact. But critically, Propranolol can only have any effect on the memory if it's in edit mode.

      So how do we make a memory unstable? How do we get it into edit mode? Well, my own lab has done quite a lot of work on this. We know that it depends on introducing some but not too much new information to be incorporated into the memory. We know about the different chemicals the brain uses to signal that a memory should be updated and the file edited. Now, our work is mostly in rats, but other labs have found the same factors allow memories to be edited in humans, even maladaptive memories like those underlying PTSD. In fact, a number of labs in several different countries have begun small-scale clinical trials of these memory-destroying treatments for PTSD and have found really promising results.

      Now, these studies need replication on a larger scale, but they show the promise of these memory-destroying treatments for PTSD. Maybe trauma memories do not need to be the hell from which we cannot escape.

      Now, although this memory-destroying approach holds great promise, that's not to say that it's straightforward or without controversy. Is it ethical to destroy memories? What about things like eyewitness testimony? What if you can't give someone Propranolol because it would interfere with other medicines that they're taking?

      Well, with respect to ethics and eyewitness testimony, I would say the important point to remember is the finding from that human study. Because Propranolol is only acting on the non-declarative emotional memory, it seems unlikely that it would affect eyewitness testimony, which is based on declarative memory. Essentially, what these memory-destroying treatments are aiming to do is to reduce the emotional memory, not get rid of the trauma memory altogether. This should make the responses of those with PTSD more like those who have been through trauma and not developed PTSD than people who have never experienced trauma in the first place. I think that most people would find that more ethically acceptable than a treatment that aimed to create some sort of spotless mind.

      What about Propranolol? You can't give Propranolol to everyone, and not everyone wants to take drugs to treat mental health conditions. Well, here Tetris could be useful. Yes, Tetris. Working with clinical collaborators, we've been looking at whether behavioral interventions can also interfere with the reconsolidation of memories. Now, how would that work?

      Well, we know that it's basically impossible to do two tasks at the same time if they both depend on the same brain region for processing. Think trying to sing along to the radio while you're trying to compose an email. The processing for one interferes with the other. Well, it's the same when you retrieve a memory, especially in edit mode. If we take a highly visual symptom like flashbacks in PTSD and get people to recall the memory in edit mode and then get them to do a highly engaging visual task like playing Tetris, it should be possible to introduce so much interfering information into that memory that it essentially becomes meaningless. That's the theory, and it's supported by data from healthy human volunteers. Now, our volunteers watched highly unpleasant films — so, think eye surgery, road traffic safety adverts, Scorsese's "The Big Shave." These trauma films produce something like flashbacks in healthy volunteers for about a week after viewing them. We found that getting people to recall those memories, the worst moments of those unpleasant films, and playing Tetris at the same time, massively reduced the frequency of the flashbacks. And again: the memory had to be in edit mode for that to work.

      Now, my collaborators have since taken this to clinical populations. They've tested this in survivors of road traffic accidents and mothers who've had emergency Caesarean sections, both types of trauma that frequently lead to PTSD, and they found really promising reductions in symptoms in both of those clinical cases.

      So although there is still much to learn and procedures to optimize, these memory-destroying treatments hold great promise for the treatment of mental health disorders like PTSD. Maybe trauma memories do not need to be a hell from which we cannot escape.

      I believe that this approach should allow those who want to do so to turn the page on chapters of their lives that they would prefer never to have experienced, and so improve our mental health.

