Introduction

Recently, entire occupations have been granted the label of “Heroes”. For example, during the recent COVID-19 pandemic, as well as in previous health crises (such as the SARS and H1N1 pandemics), healthcare workers were applauded – sometimes literally – as ‘heroes’. Military personnel, in the US but also in the UK, are often called ‘heroes’. So are police, firefighters, and other first responders. But how malleable is this perception of heroism in these occupations? Can public perception be shifted toward recognising “heroism” in occupations not typically perceived as such (e.g., psychiatrists)?

In this study, we tested how malleable the perception of heroism in an occupation is. Specifically, we hypothesised two core components of heroism perception:

Perception of physical threat contributes to the perception of heroism

Altruistic motivations of the group’s members contribute to the perception of heroism

Rationale

Franco et al. (2011) provided the general and, by their own admission, overly simplistic definition of heroism as “to act in a prosocial manner despite personal risk” (p. 99). Altruism and exposure to danger are indeed two core elements that are consistently found in laypeople’s prototype of a hero (Kinsella et al., 2015). Regarding physical threat, Sternstorm & Curtis (2012) observed that actions were judged as merely altruistic versus heroic as a direct function of the level of physical danger: the more dangerous the action, the more it was seen as heroic rather than merely altruistic.

As emphasised by Sternstorm & Curtis (2012), altruism may not be a sufficient element of the perception of heroism, but it might be a necessary element nonetheless. However, to our knowledge, there is a lack of experimental evidence for this influence of altruism on the perception of heroism.

Manipulations

We manipulated the description of the target occupations as exposed to physical risks (vs boredom) and as having altruistic motivations (vs self-centered motivations). Two manipulation checks were used:

  • The first manipulation check (MC1) will use attribute ratings for each occupation regarding how much participants perceive them to be “brave” (physical threat manipulation check) and “selfless” (motivation type manipulation check). [Personality attributes evaluation]

  • The second manipulation check (MC2) will use self-reported evaluation of how much participants perceive each occupation as being “exposed to physical risks” and “effectively helping people”. [Physical ‘objective’ evaluation]

MC1 will be successful if 1) Occupations described as involving a Physical threat are perceived as significantly “braver” than occupations described as involving boredom, and 2) occupations described as involving an altruistic motivation are perceived as significantly less “selfish” than occupations not described as involving altruistic motivations.

MC2 will be successful if 1) Occupations described as involving a physical threat are perceived as significantly more exposed to physical risks than occupations described as involving boredom, and 2) occupations described as involving an altruistic motivation are perceived as helping people significantly more than occupations not described as involving altruistic motivations.
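In practice, these criteria translate into simple between-condition comparisons. A minimal sketch, assuming the condition and rating columns used later in this report (Risk, Help, Brave, Selfless, Danger, Helpfulness); the full tests are reported in the Manipulation checks section of the Results:

# MC1: character attributions by condition (Risk: B = Boredom, R = Physical risk;
# Help: H = Helping people, S = Self-improvement)
t.test(Brave ~ Risk, data = Set)
t.test(Selfless ~ Help, data = Set)
# MC2: situation evaluations by condition
t.test(Danger ~ Risk, data = Set)
t.test(Helpfulness ~ Help, data = Set)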

Hypotheses

H1a - Describing an occupation as exposed to physical threats (vs Boredom) will increase perception of heroism across all types of occupations

H1b - Describing an occupation as having altruistic motivations (vs Self-centered) will increase perception of heroism across all types of occupations


H2a: Perceived heroism will be positively predicted by perceived bravery across all types of occupations

H2b: Perceived heroism will be positively predicted by perceived altruism across all types of occupations


H3a: Perceived heroism will be positively predicted by perceived physical risk across all types of occupations

H3b: Perceived heroism will be positively predicted by perceived help provided to others across all types of occupations

Methods

After reading the consent form and confirming their participation, reporting their Prolific ID, and answering a commitment check (Geisen, 2022 [available at https://www.qualtrics.com/blog/attention-checks-and-data-quality/]; Peer et al., 2024), participants read a description of one of the target occupations (randomly selected, between-participants). Each description consisted of three short paragraphs. The first paragraph briefly described the target occupation. The two following paragraphs were our manipulations: one emphasised the type of risk associated with the occupation (Physical vs None) and the other emphasised the workers’ motivation (Altruistic vs Self-centered). These two paragraphs were displayed in a random order, and conditions were randomly assigned to participants.

The descriptions were followed by three comprehension check questions. Participants had two chances to answer the comprehension checks correctly; failing both, they were redirected toward a page asking them to return their survey. Participants who passed the comprehension checks completed a scale measuring to what extent, in their personal opinion, the target occupations are “heroes”. Once completed, participants rated four manipulation check items. They then reported their general attitude toward the target group (covariate) using a 5-point scale from “Very negative” to “Very positive”.

At the end of the study, participants completed demographic questions regarding their gender, age, and occupation. They finally answered one credibility check regarding the believability of our deception. They were then debriefed and thanked.

Occupation Descriptions. Each description consisted of three paragraphs, the last two of which (presented in randomised order) emphasised Physical threat (vs None) and Altruistic motivations (vs self-centered motivations; see Materials on the OSF project). We used deceptive information to manipulate risk and motivations. Specifically, we reported false results from a survey indicating that “83% of XXX thought that their life was at risk in the past 12 months” (vs “83% of XXX reported being bored most of the time”), and that “74% of XXX identified ‘Helping people’ as their primary motivation” (vs “74% of XXX identified ‘self-improvement’ as their primary motivation”).

Moral role attribution. Participants evaluated to what extent they agreed with the target occupations being described as “Heroes” using a 5-point Likert scale from “Strongly disagree” to “Strongly agree”.

Character attribution evaluation. Participants evaluated the applicability of two adjectives using two 7-point bipolar scales. One ranged from cowardly to brave, and the other ranged from selfish to selfless.

Situation Evaluation. Participants then rated to what extent they believed the target occupation is objectively associated with physical danger, and to what extent the people in this profession are helpful. Both questions used 7-point Likert scales from “Not at all” to “Extremely”.

General attitude. Participants reported their general attitude toward the target occupation using a 7-point-scale from “Very negative” to “Very positive”.

Comprehension checks. Participants responded to three multiple-choice questions asking them about the content of the vignette they had just read (see materials for details). They had two chances to get all comprehension checks correct; failing both, they were asked to return their survey. The comprehension checks were displayed right beneath the descriptions and consisted of easy questions about the content of the vignette.

Credibility check. Participants answered the following question: “In your opinion, how believable was the information that you read at the beginning of this study, about the target occupation’s motivation and working conditions?” from 1- Very unbelievable to 7 - Very believable.

Loading data

Please adjust the path if you are running this script on your local machine.

DF_HeroFactory_April2025.csv is our main data frame. It is publicly accessible from our OSF webpage (https://osf.io/jdhbf/?view_only=db07323e133247b29ce0c8fe6bfe40dc). A second data frame is required to reproduce some descriptive statistics: Demog_Hero_factory_2025.csv, also available on the OSF webpage of the project.

Both of these data frames can be re-computed from the Qualtrics output using the Data Wrangling code chunk presented in the Appendix section of the report.

Set <- read.csv("DF_HeroFactory_April2025.csv", comment.char="#")
Demographics <- read.csv("Demog_Hero_factory_2025.csv")
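If the CSV files are stored in a different folder, only the path below needs to change; a minimal sketch (the directory name is purely illustrative):

# Adjust data_dir to the folder where the OSF files were downloaded
data_dir <- "."   # e.g., "~/HeroFactory/data"
Set <- read.csv(file.path(data_dir, "DF_HeroFactory_April2025.csv"), comment.char = "#")
Demographics <- read.csv(file.path(data_dir, "Demog_Hero_factory_2025.csv"))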

In this report, we use several packages to format data, plot results, and run robust analyses. The required packages are installed (if needed) and loaded in the code chunk below.

if(!require("dplyr")) install.packages("dplyr")
if(!require("tidyr")) install.packages("tidyr")
if(!require("stringr")) install.packages("stringr")
if(!require("ggplot2")) install.packages("ggplot2")
if(!require("emmeans")) install.packages("emmeans")
if(!require("data.table")) install.packages("data.table")
if(!require("PerformanceAnalytics")) install.packages("PerformanceAnalytics")
if(!require("interactions")) install.packages("interactions")
if(!require("car")) install.packages("car")
if(!require("effectsize")) install.packages("effectsize")
if(!require("RColorBrewer")) install.packages("RColorBrewer")
if(!require("effectsize")) install.packages("effectsize")
if(!require("report")) install.packages("report")
if(!require("ordinal")) install.packages("ordinal")
if(!require("robustbase")) install.packages("robustbase")
if(!require("olsrr")) install.packages("olsrr")
if(!require("knitr")) install.packages("knitr")
if(!require("kableExtra")) install.packages("kableExtra")
if(!require("gt")) install.packages("gt")
if(!require("lavaan")) install.packages("lavaan")

Results (as registered)

We aimed to collect answers from 1360 UK residents forming a representative sample. A total of 1362 participants completed our survey. Because two of these participants timed out (IDs: “5cb6f38ffdc7fa0013f809a3”, “5cf84c0f4b639a0016a45a54”), they did not count toward the completion of our representative sample and were excluded from our analyses, leaving a final sample of 1360.
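For completeness, this exclusion can be reproduced from the raw export along the following lines (a sketch; the Prolific_ID column name is an assumption, and the registered exclusion is applied in the Data Wrangling chunk in the Appendix):

# Exclude the two timed-out participants by their Prolific IDs (column name assumed)
timed_out <- c("5cb6f38ffdc7fa0013f809a3", "5cf84c0f4b639a0016a45a54")
Set <- subset(Set, !(Prolific_ID %in% timed_out))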

Our sample included 692 women, 653 men, 11 others, and 4 who preferred not to indicate their gender. The mean age was 46.55 years (SD = 15.36).

Most participants (n = 1242) did not report a job relevant to the study. However, a few participants (n = 4) gave inconsistent responses, selecting both “None of the above” and another job from the list. They were removed from our descriptive analyses of the job distribution (see plot below). Note that some participants reported several past occupations that were relevant to the study (e.g., Police officer & Firefighter).

# Flag participants who selected "None of the above" together with another job
Demographics$flag_inconsistent <- (Demographics$Job_match_6 == "None of the above") & 
  (Demographics$Job_match_1 != "" | Demographics$Job_match_2 != "" | Demographics$Job_match_3 != "" | 
   Demographics$Job_match_4 != "" | Demographics$Job_match_5 != "")




paste0("1360 participants took part in the study. Mean age in the sample is ", mean(as.numeric(Set$Age)), ", SD = ", sd(as.numeric(Set$Age)))
## [1] "1360 participants took part in the study. Mean age in the sample is 46.5455882352941, SD = 15.359732853326"
## Gender

Set %>% group_by(Gender) %>% summarise(N=n()) %>%
  ggplot(aes(x=Gender,y=N,fill=Gender))+
  geom_bar(stat = 'identity',color='black')+
  scale_y_continuous(labels = scales::comma_format(accuracy = 2))+
  geom_text(aes(label=N),vjust=-0.25,fontface='bold')+
  theme_bw()+
  theme(axis.text = element_text(color='black',face='bold'),
        axis.title = element_text(color='black',face='bold'),
        legend.text = element_text(color='black',face='bold'),
        legend.title = element_text(color='black',face='bold')) +
  ggtitle("Gender distribution")

## Occupations
#colnames(Set)
jobs <- unlist(Demographics[-which(Demographics$flag_inconsistent == T), 4:10])           # Make a long list of all jobs that were named

jobs <- jobs[jobs != ""]     # Remove empty strings

job_df <- as.data.frame(table(jobs))
colnames(job_df) <- c("Job", "Count")

ggplot(job_df, aes(x = Job, y = Count, fill = Job)) +
  geom_bar(stat = 'identity',color='black')+
  scale_y_continuous(labels = scales::comma_format(accuracy = 2))+
  geom_text(aes(label=Count),vjust=-0.25,fontface='bold')+
  theme_bw()+
  theme(axis.text = element_text(color='black',face='bold'),
        axis.title = element_text(color='black',face='bold'),
        legend.text = element_text(color='black',face='bold'),
        legend.title = element_text(color='black',face='bold')) +
  ggtitle("Job distribution")

Plots

Descriptively, most jobs were evaluated as relatively heroic. Below are visual representations of the perception of our target occupations (Firefighters [F], Nurses [N], Police officers [P], Psychiatrists [Ps], and Welders [W]).

# 1. Create a summary dataframe for facet annotations (mean and SD)
df_summary <- Set %>%
  group_by(Job) %>%
  summarize(
    mean_score = mean(Heroism, na.rm = TRUE),
    sd_score   = sd(Heroism, na.rm = TRUE),
    .groups = "drop"
  )

# 2. Create the ggplot using the long format data
ggplot(Set, aes(x = Heroism)) +
  geom_histogram(aes(fill = after_stat(count)),
                 binwidth = 1,
                 color = "black", show.legend = FALSE) +
  facet_grid( ~ Job, scales = "free") +
  scale_fill_gradientn(
    colours = brewer.pal(9, "YlOrBr"),
    name = "Count"
  ) +
  labs(
    title = "Histograms of Variable by Occupation",
    x = "Score",
    y = "Count"
  ) +
  # Annotate each facet with the mean and standard deviation
  geom_text(data = df_summary,
            aes(x = 7, y = Inf,
                label = paste0("Mean = ", round(mean_score, 2),
                               "\nSD = ", round(sd_score, 2))),
            vjust = 1.5, hjust = 1.1, size = 3) +
  theme_classic() +
  theme(panel.grid.major.y = element_line(linewidth = 0.5),
        panel.grid.minor.y = element_line(linewidth = 0.5))

Additional descriptions of our main variables show a tendency to perceive occupations favorably.

Set$Credibility <- as.numeric(Set$Credibility)
df_long <- Set %>%
  pivot_longer(
    cols = c(Heroism, Danger, Helpfulness, Selfless, Brave, Attitude, Credibility),
    names_to = "Variable",
    values_to = "Score"
  )

# 2. Compute summary statistics by Job and Variable
df_long2<- subset(df_long, df_long$Variable != "Credibility")

df_summary <- df_long %>%
  group_by(Job, Variable) %>%
  summarize(
    mean_score = mean(Score, na.rm = TRUE),
    sd_score   = sd(Score, na.rm = TRUE),
    .groups = "drop"
  )
df_summary2 <- df_long2 %>%
  group_by(Job, Variable) %>%
  summarize(
    mean_score = mean(Score, na.rm = TRUE),
    sd_score   = sd(Score, na.rm = TRUE),
    .groups = "drop"
  )
# 3. Create the ggplot using the long format data
ggplot(df_long2, aes(x = Score)) +
  geom_histogram(aes(fill = after_stat(count)),
                 binwidth = 1,
                 color = "black", show.legend = FALSE) +
  facet_grid(Variable ~ Job, scales = "free") +
  scale_fill_gradientn(
    colours = brewer.pal(9, "YlOrBr"),
    name = "Count"
  ) +
  labs(
    title = "Histograms of Variable by Occupation",
    x = "Score",
    y = "Count"
  ) +
  # Annotate each facet with the mean and standard deviation
  geom_text(data = df_summary2,
            aes(x = 7, y = Inf,
                label = paste0("Mean = ", round(mean_score, 2),
                               "\nSD = ", round(sd_score, 2))),
            vjust = 1.5, hjust = 1.1, size = 3) +
  theme_classic() +
  theme(
    panel.grid.major.y = element_line(linewidth = 0.5),
    panel.grid.minor.y = element_line(linewidth = 0.5)
  )

Manipulation checks

We manipulated, in vignettes, the description of the target occupations as exposed to high physical risks (vs boredom) and as having altruistic motivations (vs self-centered motivations).

Two manipulation checks were used:

  • Character attribution: Participants rated how brave and selfless the target occupation was, as a manipulation check for our Risk and Motivation manipulations respectively. This manipulation check focuses on character attributions rather than a verifiable, objective evaluation.

  • Situation attribution: Participants rated how dangerous and helpful the occupations were, as a manipulation check for our Risk and Motivation manipulations respectively. This manipulation check focuses on an objective evaluation of the situation, in contrast to a character evaluation.

Each manipulation check is presented in the two tabs below.

Manipulation check 1: Character attribution


Perception of bravery

The Risk type (physical vs Boredom) should predict the perception of bravery.

t.test(Set$Brave ~Set$Risk)
## 
##  Welch Two Sample t-test
## 
## data:  Set$Brave by Set$Risk
## t = -5.206, df = 1357, p-value = 2.227e-07
## alternative hypothesis: true difference in means between group B and group R is not equal to 0
## 95 percent confidence interval:
##  -0.4931945 -0.2232317
## sample estimates:
## mean in group B mean in group R 
##        5.622781        5.980994
cohens_d(Set$Brave ~Set$Risk)
## Cohen's d |         95% CI
## --------------------------
## -0.28     | [-0.39, -0.18]
## 
## - Estimated using pooled SD.
my_sum <- Set %>%
  group_by(Risk) %>%
  summarise( 
    n=n(),
    mean=mean(Brave),
    sd=sd(Brave)
  ) %>%
  mutate( se=sd/sqrt(n))
 
# Bar plot of means with standard-error bars
ggplot(my_sum) +
  geom_bar( aes(x=Risk, y=mean), stat="identity", fill="forestgreen", alpha=0.5) +
  geom_errorbar( aes(x=Risk, ymin=mean-se, ymax=mean+se), width=0.4, colour="orange", alpha=0.9, linewidth=1.5) +
  ggtitle("Brave ~ Risk type (B = Boredom; R = High risk); bars are SE")

by(Set$Brave, Set$Risk, sd)
## Set$Risk: B
## [1] 1.278198
## ------------------------------------------------------------ 
## Set$Risk: R
## [1] 1.259083
sd(Set$Brave)
## [1] 1.280748

The Welch Two Sample t-test testing the difference of Bravery by Risk condition (mean in group Boredom = 5.62, mean in group Risk = 5.98) suggests that the effect is negative, statistically significant, and medium sized (difference = -0.36, 95% CI [-0.49, -0.22], t(1357.02) = -5.21, p < .001; Cohen’s d = -0.28, 95% CI [-0.39, -0.18])

==> Manipulation check successful. Yes, our manipulation of physical risk influenced bravery evaluations. This is a medium-sized effect, d = 0.28.

Perception of selflessness

The Motivation type (Altruistic vs Self-centered) should predict the perception of selflessness (the selfish-to-selfless bipolar item).

t.test(Set$Selfless ~ Set$Help)
## 
##  Welch Two Sample t-test
## 
## data:  Set$Selfless by Set$Help
## t = 4.7993, df = 1336.6, p-value = 1.771e-06
## alternative hypothesis: true difference in means between group H and group S is not equal to 0
## 95 percent confidence interval:
##  0.2034586 0.4847767
## sample estimates:
## mean in group H mean in group S 
##        5.669118        5.325000
my_sum <- Set %>%
  group_by(Help) %>%
  summarise( 
    n=n(),
    mean=mean(Selfless),
    sd=sd(Selfless)
  ) %>%
  mutate( se=sd/sqrt(n))
 
# Bar plot of means with standard-error bars
ggplot(my_sum) +
  geom_bar( aes(x=Help, y=mean), stat="identity", fill="forestgreen", alpha=0.5) +
  geom_errorbar( aes(x=Help, ymin=mean-se, ymax=mean+se), width=0.4, colour="orange", alpha=0.9, linewidth=1.5) +
  ggtitle("Selfless ~ Help type (H = Helping; S = Self improve); bars are SE")

The Welch Two Sample t-test testing the difference of Selflessness by Motivation condition (mean in group Helping People = 5.67, mean in group Self-improvement = 5.33) suggests that the effect is positive, statistically significant, and medium sized (difference = 0.34, 95% CI [0.20, 0.48], t(1336.56) = 4.80, p < .001; Cohen’s d = 0.26, 95% CI [0.15, 0.37])

==> Success. Yes, the motivation manipulation did predict perceptions of selflessness. This is a medium-sized effect, d = 0.26.


Manipulation check 2: Situation attribution


Perceived Risk

Item: “To what extent do you believe the target occupation is exposed to physical danger?” The Risk type should increase evaluations of physical risk.

t.test(Set$Danger ~Set$Risk)
## 
##  Welch Two Sample t-test
## 
## data:  Set$Danger by Set$Risk
## t = -18.15, df = 1190.5, p-value < 2.2e-16
## alternative hypothesis: true difference in means between group B and group R is not equal to 0
## 95 percent confidence interval:
##  -1.294546 -1.041971
## sample estimates:
## mean in group B mean in group R 
##        5.113905        6.282164
my_sum <- Set %>%
  group_by(Risk) %>%
  summarise( 
    n=n(),
    mean=mean(Danger),
    sd=sd(Danger)
  ) %>%
  mutate( se=sd/sqrt(n))
 
# Bar plot of means with standard-error bars
ggplot(my_sum) +
  geom_bar( aes(x=Risk, y=mean), stat="identity", fill="forestgreen", alpha=0.5) +
  geom_errorbar( aes(x=Risk, ymin=mean-se, ymax=mean+se), width=0.4, colour="orange", alpha=0.9, linewidth=1.5) +
  ggtitle("Danger ~ Risk type (B = Boredom; R = High risk); bars are SE")

The Welch Two Sample t-test testing the difference of Situation Danger by Risk condition (mean in group Boredom = 5.11, mean in group Risk = 6.28) suggests that the effect is negative, statistically significant, and large (difference = -1.17, 95% CI [-1.29, -1.04], t(1190.54) = -18.15, p < .001; Cohen’s d = -0.99, 95% CI [-1.10, -0.87])

==> Success. Yes, the Risk condition did predict perceptions of danger – this is a large effect, d = 0.99.


Perceived helpfulness

Item: “To what extent do you believe the target occupation helps people?” The Motivation type should increase evaluations of the help provided.

t.test(Set$Helpfulness ~Set$Help)
## 
##  Welch Two Sample t-test
## 
## data:  Set$Helpfulness by Set$Help
## t = 5.0127, df = 1289.5, p-value = 6.113e-07
## alternative hypothesis: true difference in means between group H and group S is not equal to 0
## 95 percent confidence interval:
##  0.1852758 0.4235477
## sample estimates:
## mean in group H mean in group S 
##        6.107353        5.802941
my_sum <- Set %>%
  group_by(Help) %>%
  summarise( 
    n=n(),
    mean=mean(Helpfulness),
    sd=sd(Helpfulness)
  ) %>%
  mutate( se=sd/sqrt(n))
 
# Bar plot of means with standard-error bars
ggplot(my_sum) +
  geom_bar( aes(x=Help, y=mean), stat="identity", fill="forestgreen", alpha=0.5) +
  geom_errorbar( aes(x=Help, ymin=mean-se, ymax=mean+se), width=0.4, colour="orange", alpha=0.9, linewidth=1.5) +
  ggtitle("Helpfulness ~ Motivation type (H = Helping; S = Self-improve); bars are SE")

The Welch Two Sample t-test testing the difference of Helpfulness by Motivation condition (mean in group Helping people = 6.11, mean in group Self-improvement = 5.80) suggests that the effect is positive, statistically significant, and medium sized (difference = 0.30, 95% CI [0.19, 0.42], t(1289.51) = 5.01, p < .001; Cohen’s d = 0.27, 95% CI [0.16, 0.38])

==> Success. Yes, the Motivation condition did predict perceived helpfulness – this is a medium-sized effect, d = 0.27.


Credibility analyses

We included a credibility check: at the end of the study, participants were asked to what extent they found the information presented believable. Scores range from 1 to 7. We can check the distribution of the credibility ratings:

hist(Set$Credibility, main = "Frequency of Responses for the Credibility item")

my_sum <- Set %>%
  group_by(Job) %>%
  summarise( 
    n=n(),
    mean=mean(Credibility),
    sd=sd(Credibility)
  ) %>%
  mutate( se=sd/sqrt(n))

# Bar plot of means with standard-error bars
ggplot(my_sum) +
  geom_bar( aes(x=Job, y=mean), stat="identity", fill="forestgreen", alpha=0.5) +
  geom_errorbar( aes(x=Job, ymin=mean-se, ymax=mean+se), width=0.4, colour="orange", alpha=0.9, linewidth=1.5) +
  ggtitle("Credibility (bars are SE) by job condition")

Cred <- subset(df_long, df_long$Variable == "Credibility")
df_summary <- Cred %>%
  group_by(Job) %>%
  summarize(
    mean_score = mean(Score, na.rm = TRUE),
    sd_score   = sd(Score, na.rm = TRUE),
    .groups = "drop"
  )

ggplot(Cred, aes(x = Score)) +
  geom_histogram(aes(fill = after_stat(count)),
                 binwidth = 1,
                 color = "black", show.legend = FALSE) +
  facet_grid( ~ Job, scales = "free") +
  scale_fill_gradientn(
    colours = brewer.pal(9, "YlOrBr"),
    name = "Count"
  ) +
  labs(
    title = "Frequency of Responses for the Credibility item, for each occupation condition",
    x = "Score",
    y = "Count"
  ) +
  # Annotate each facet with the mean and standard deviation
  geom_text(data = df_summary,
            aes(x = 7, y = Inf,
                label = paste0("Mean = ", round(mean_score, 2),
                               "\nSD = ", round(sd_score, 2))),
            vjust = 1.5, hjust = 1.1, size = 3) +
  theme_classic() +
  theme(
    panel.grid.major.y = element_line(linewidth = 0.5),
    panel.grid.minor.y = element_line(linewidth = 0.5)
  )

The vignettes were, overall, rated as quite credible. In the exploratory section, we will further assess to what extent credibility ratings influenced our effects.

Principal analyses

On the basis of our manipulation checks, it can be stated that our manipulations (both physical risks and motivation type) were successful in changing evaluations of bravery, selflessness, danger, and helpfulness. We proceed with our main registered hypotheses. This section contains analyses regarding:

  • H1: the effect of our manipulations on heroism
  • H2: the effect of perceived selflessness and perceived bravery on heroism
  • H3: the effect of perceived risk and perceived helpfulness on heroism

Hypotheses 1: Manipulation as predictors


From registration:

***Statistical Technique***
A model comparison approach will be used to assess the effects of our manipulations and qualify the variance part of Heroism perception explained by our manipulation (dummy coded -0.5 and 0.5) and by normative evaluation effects (i.e., baseline differences in heroism perceptions across occupations), and by a general halo effect relating to general attitude.

We will use two steps to evaluate the effects of our manipulation of perceived heroism:

A first regression model (Model 1) to assess: 
- The effect of the Risk type (dummy coded -0.5 for the No risk condition, and 0.5 for the High risk condition) on the attribution of Heroism across all occupations  (H1a) 
- The effect of the Motivation type (dummy coded - 0.5 for the self-centered condition, and 0.5 for the altruistic condition)  on the attribution of Heroism across all occupations  (H1b)
- Their interaction (exploratory analysis) 

A second model with Job type as a moderator (Model 2):
- The effect of the Risk type (dummy coded -0.5 for the No risk condition, and 0.5 for the High risk condition) on the attribution of Heroism across all occupations  (H1a) 
- The effect of the Motivation type (dummy coded - 0.5 for the self-centered condition, and 0.5 for the altruistic condition)  on the attribution of Heroism across all occupations  (H1b)
- Their interaction (exploratory analysis) 
- A higher order interaction when considering Job type (to control for potential higher order interaction -  we will further explore the effects within each occupation condition in independent OLS regression models, see Registered R script, section "Further explorations")

We will compare the two models (with and without accounting for the occupation condition) to quantify the extent to which the effect of our manipulation is explained by normative differences in occupation type: A significant reduction in the other predictors' main effect sizes when controlling for Job Type would indicate that these effects are partially dependent on normative evaluations of the job type.

We will base our conclusions regarding the effects of our manipulations irrespective of the job types (i.e., across all jobs) on the Model 1. Model 2 is used to quantify normative evaluation effects in our effects.

Because we have clear registered predictions for the two main effects, but not for any interaction, we will use one-tailed tests for the main effects, but two-tailed tests for the interactions.
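For the directional main effects, a one-tailed p-value can be derived from the OLS output by evaluating the t statistic against the predicted direction. A minimal sketch, assuming positive predicted effects as registered (mod_h1 is the H1 model fitted below):

# One-tailed p-value for the (positive) predicted effect of Risk_dummy
mod_h1 <- lm(Heroism ~ Risk_dummy * Help_dummy, data = Set)
t_risk <- summary(mod_h1)$coefficients["Risk_dummy", "t value"]
pt(t_risk, df = df.residual(mod_h1), lower.tail = FALSE)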

H1 Model comparison

We will compare the two models (with and without accounting for the occupation condition) to quantify the extent to which the effect of our manipulation is explained by normative differences in occupation type: A significant reduction in the other predictors’ main effect sizes when controlling for Job Type would indicate that these effects are partially dependent on normative evaluations of the job type.

We will base our conclusions regarding the effects of our manipulations irrespective of the job types (i.e., across all jobs) on the Model 1. Model 2 is used to quantify normative evaluation effects in our effects.
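For reference, the -0.5/0.5 manipulation codes used in these models can be constructed as follows; this is a sketch, the registered construction lives in the Data Wrangling chunk in the Appendix, and the condition labels follow the t-test outputs above (R = high risk, B = boredom; H = helping, S = self-improvement):

# Effect-style coding of the two manipulations (-0.5 / 0.5)
Set$Risk_dummy <- ifelse(Set$Risk == "R", 0.5, -0.5)
Set$Help_dummy <- ifelse(Set$Help == "H", 0.5, -0.5)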

mod <-lm(Heroism ~ Risk_dummy * Help_dummy, data = Set)

mod_cov<-lm(Heroism ~ Risk_dummy * Help_dummy + Job, data = Set)

anova(mod, mod_cov)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Risk_dummy * Help_dummy
## Model 2: Heroism ~ Risk_dummy * Help_dummy + Job
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1   1356 2906.4                                  
## 2   1352 2437.6  4    468.79 65.004 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Model comparison indicates that heroism ratings are significantly driven by normative evaluations: the model using Job as a covariate is associated with a significantly lower RSS (RSS = 2437.6) than the model without this covariate (RSS = 2906.4; F(4, 1352) = 65.00, p < .001).


H1 Model Comparison Attitude

To test whether a halo effect could account for any observed effect, we also did a model comparison with attitudes as a covariate:

summary(mod_cov2<-lm(Heroism ~ Risk_dummy * Help_dummy + Attitude, data = Set))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy + Attitude, data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.1585 -0.4928 -0.0421  0.8415  3.6071 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            0.196735   0.148368   1.326    0.185    
## Risk_dummy             0.276350   0.059317   4.659 3.49e-06 ***
## Help_dummy            -0.008258   0.059290  -0.139    0.889    
## Attitude               0.831791   0.025076  33.171  < 2e-16 ***
## Risk_dummy:Help_dummy  0.012139   0.118064   0.103    0.918    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.088 on 1355 degrees of freedom
## Multiple R-squared:  0.4647, Adjusted R-squared:  0.4631 
## F-statistic: 294.1 on 4 and 1355 DF,  p-value: < 2.2e-16
anova(mod, mod_cov2)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Risk_dummy * Help_dummy
## Model 2: Heroism ~ Risk_dummy * Help_dummy + Attitude
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1   1356 2906.4                                  
## 2   1355 1603.9  1    1302.4 1100.3 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Attitude is an overwhelmingly strong predictor of heroism ratings, which emphasises the need to control for attitudes.

==> JOB and ATTITUDE explain Heroism above and beyond our experimental manipulations


H1 Main registered model

Below, we present diagnostic plots: a QQ-plot (normality of the residuals), residuals vs fitted values (heteroscedasticity), and Cook’s distances (influential observations). These diagnostics flag concerns about the normality of the residuals and the influence of some observations (Cook’s d plot). This further warrants robust analyses (see the robust model). In the meantime, we also present the OLS results.

plot(mod)

summary(mod)
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy, data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.3246 -0.8994  0.1006  1.1006  2.3373 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)             5.0199     0.0397 126.448  < 2e-16 ***
## Risk_dummy              0.4777     0.0794   6.017 2.29e-09 ***
## Help_dummy              0.1841     0.0794   2.319   0.0205 *  
## Risk_dummy:Help_dummy  -0.1051     0.1588  -0.662   0.5082    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.464 on 1356 degrees of freedom
## Multiple R-squared:  0.03004,    Adjusted R-squared:  0.0279 
## F-statistic:    14 on 3 and 1356 DF,  p-value: 5.444e-09
report(mod)
## We fitted a linear model (estimated using OLS) to predict Heroism with
## Risk_dummy and Help_dummy (formula: Heroism ~ Risk_dummy * Help_dummy). The
## model explains a statistically significant and weak proportion of variance (R2
## = 0.03, F(3, 1356) = 14.00, p < .001, adj. R2 = 0.03). The model's intercept,
## corresponding to Risk_dummy = 0 and Help_dummy = 0, is at 5.02 (95% CI [4.94,
## 5.10], t(1356) = 126.45, p < .001). Within this model:
## 
##   - The effect of Risk dummy is statistically significant and positive (beta =
## 0.48, 95% CI [0.32, 0.63], t(1356) = 6.02, p < .001; Std. beta = 0.16, 95% CI
## [0.11, 0.21])
##   - The effect of Help dummy is statistically significant and positive (beta =
## 0.18, 95% CI [0.03, 0.34], t(1356) = 2.32, p = 0.021; Std. beta = 0.06, 95% CI
## [9.45e-03, 0.11])
##   - The effect of Risk dummy × Help dummy is statistically non-significant and
## negative (beta = -0.11, 95% CI [-0.42, 0.21], t(1356) = -0.66, p = 0.508; Std.
## beta = -0.02, 95% CI [-0.07, 0.03])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.

We fitted a linear model (estimated using OLS) to predict Heroism with Risk_dummy and Help_dummy (formula: Heroism ~ Risk_dummy * Help_dummy). The model explains a statistically significant proportion of variance (R2 = 0.03, F(3, 1356) = 14.00, p < .001, adj. R2 = 0.03). Within this model:

  • The effect of Risk dummy is statistically significant and positive (beta = 0.48, 95% CI [0.32, 0.63], t(1356) = 6.02, p < .001; Std. beta = 0.16, 95% CI [0.11, 0.21])
  • The effect of Help dummy is statistically significant and positive (beta = 0.18, 95% CI [0.03, 0.34], t(1356) = 2.32, p = 0.021; Std. beta = 0.06, 95% CI [9.45e-03, 0.11])
  • The effect of Risk dummy × Help dummy is statistically non-significant (beta = -0.11, 95% CI [-0.42, 0.21], t(1356) = -0.66, p = 0.508; Std. beta = -0.02, 95% CI [-0.07, 0.03])
ModIII <- car::Anova(mod, type = "III")
eta_squared(ModIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter             | Eta2 (partial) |       95% CI
## -----------------------------------------------------
## Risk_dummy            |           0.03 | [0.01, 1.00]
## Help_dummy            |       3.95e-03 | [0.00, 1.00]
## Risk_dummy:Help_dummy |       3.23e-04 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].

Model Linearity?

Because the main outcome is an ordinal variable (integer response categories), it is unclear whether a linear model is appropriate. We can fit a cumulative link model, well suited to ordinal outcomes, to assess to what extent this changes our inferences.

# # Fitted values from your model
# fitted_vals <- fitted(mod)
# 
# # Plot observed values against fitted values
# plot(fitted_vals, Set$Heroism,
#      xlab = "Fitted Values",
#      ylab = "Observed Heroes",
#      main = "Observed vs Fitted Values")
# abline(0, 1, col = "blue", lty = 2)

summary(clm(ordered(Heroism) ~ Risk_dummy * Help_dummy, data = Set, link = "logit"))
## formula: ordered(Heroism) ~ Risk_dummy * Help_dummy
## data:    Set
## 
##  link  threshold nobs logLik   AIC     niter max.grad cond.H 
##  logit flexible  1360 -2328.78 4675.56 5(0)  5.44e-08 3.8e+01
## 
## Coefficients:
##                       Estimate Std. Error z value Pr(>|z|)    
## Risk_dummy             0.59565    0.09725   6.125 9.07e-10 ***
## Help_dummy             0.20359    0.09628   2.115   0.0345 *  
## Risk_dummy:Help_dummy -0.10979    0.19233  -0.571   0.5681    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Threshold coefficients:
##     Estimate Std. Error z value
## 1|2 -3.83582    0.18501 -20.733
## 2|3 -2.66230    0.10855 -24.527
## 3|4 -1.84560    0.07873 -23.444
## 4|5 -0.71009    0.05811 -12.220
## 5|6  0.42414    0.05592   7.584
## 6|7  1.50898    0.07038  21.441

Using a Cumulative Link model does not change our results whatsoever.

==> Success. Both our Risk and Motivation manipulations positively influenced perceived heroism. They did not interact. Results were robust to a model adapted to ordinal variables. Importantly, the effect size of the Motivation manipulation is particularly modest: d = 0.12, 95% CI [0.02, 0.23].

paste0("Cohen's d for the effect of risk is:")
## [1] "Cohen's d for the effect of risk is:"
effectsize::cohens_d(Set$Heroism ~ relevel(as.factor(Set$Risk), ref = "R"))
## Cohen's d |       95% CI
## ------------------------
## 0.33      | [0.22, 0.43]
## 
## - Estimated using pooled SD.
paste0("Cohen's d for the effect of Motivation is:")
## [1] "Cohen's d for the effect of Motivation is:"
effectsize::cohens_d(Set$Heroism ~ Set$Help)
## Cohen's d |       95% CI
## ------------------------
## 0.12      | [0.02, 0.23]
## 
## - Estimated using pooled SD.

H1 Outliers analyses

Note on outlier management: As registered, we compared the outputs from our models to the outputs of a robust model using a smoothed Huber function to down-weight extreme residuals. Robust models used the default parameters of the lmrob command in the robustbase package – that is, a set of parameters resulting in 95% efficiency (assuming normally distributed residuals and no contamination). This model is robust to the presence of extreme values and less sensitive to deviations from the normality assumption.

Cook’s distance plot does show that there are some influential cases:

ols_plot_cooksd_bar(mod, type = 1)

(Note that in all Cook’s d barplots, the threshold for identifying influential cases is 4/N ≈ 0.003 – this is a common way of flagging influential cases based on Cook’s distance, but it is only a rule of thumb.)

This warrants the use of robust models or, at the very least, a comparison with robust models.
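The 4/N rule of thumb mentioned above can also be checked directly on the H1 model (a minimal sketch using mod, the OLS model fitted above):

# Cook's distance for the H1 OLS model and the 4/N rule-of-thumb threshold
cooks <- cooks.distance(mod)
threshold <- 4 / nrow(Set)
threshold
sum(cooks > threshold)   # number of observations flagged as influential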

summary(modrob<-lmrob(Set$Heroism ~ Set$Risk_dummy * Set$Help_dummy))
## 
## Call:
## lmrob(formula = Set$Heroism ~ Set$Risk_dummy * Set$Help_dummy)
##  \--> method = "MM"
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -4.40872 -0.96122  0.03878  1.03878  2.25711 
## 
## Coefficients:
##                               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                    5.10065    0.04178 122.071  < 2e-16 ***
## Set$Risk_dummy                 0.49720    0.08155   6.097 1.41e-09 ***
## Set$Help_dummy                 0.16863    0.08137   2.072   0.0384 *  
## Set$Risk_dummy:Set$Help_dummy -0.09939    0.16268  -0.611   0.5413    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 1.462 
## Multiple R-squared:  0.03149,    Adjusted R-squared:  0.02935 
## Convergence in 10 IRWLS iterations
## 
## Robustness weights: 
##  93 weights are ~= 1. The remaining 1267 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.3432  0.8792  0.9545  0.9088  0.9852  0.9972 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         7.353e-05         1.819e-12         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)
paste0("Weights applied to residuals - a value of zero would mean that the observation was discarded, a value of 1 means no re-weighting")
## [1] "Weights applied to residuals - a value of zero would mean that the observation was discarded, a value of 1 means no re-weighting"
plot(modrob$rweights)

Our inferences remain unchanged when extreme residuals are down-weighted; there is no large discrepancy between the OLS and robust estimates.


H1 Job interaction

We registered that we would explore any higher-order interaction with occupation type.

The effect of Risk was qualified by occupation type:

anova(lm(Heroism ~ Risk_dummy * Help_dummy * Job, data = Set))
## Analysis of Variance Table
## 
## Response: Heroism
##                             Df  Sum Sq Mean Sq F value    Pr(>F)    
## Risk_dummy                   1   77.59  77.587 43.7176 5.464e-11 ***
## Help_dummy                   1   11.49  11.489  6.4737   0.01106 *  
## Job                          4  468.56 117.141 66.0053 < 2.2e-16 ***
## Risk_dummy:Help_dummy        1    1.17   1.166  0.6569   0.41779    
## Risk_dummy:Job               4   45.35  11.338  6.3885 4.314e-05 ***
## Help_dummy:Job               4    8.85   2.212  1.2462   0.28941    
## Risk_dummy:Help_dummy:Job    4    5.25   1.312  0.7391   0.56531    
## Residuals                 1340 2378.13   1.775                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

H1 Occupation Decomposition

Because we observed that occupation influenced the effect of the risk manipulation, there are solid grounds to test our hypotheses within each job. In the table below, we report results from the model lm(Heroism ~ Risk_dummy * Help_dummy, data = ...) for each occupation.
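A compact way to fit these per-occupation models, equivalent to the subsets created below (a sketch):

# Fit the registered model separately within each occupation condition
per_job_models <- lapply(split(Set, Set$Job), function(d) {
  lm(Heroism ~ Risk_dummy * Help_dummy, data = d)
})
lapply(per_job_models, summary)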

Nurses <- subset(Set, Set$Job == "N")
Pol <- subset(Set, Set$Job == "P")
Firef <- subset(Set, Set$Job == "F")
Psych <- subset(Set, Set$Job == "Ps")
Weld <- subset(Set, Set$Job == "W")


# Regression results data frame
regression_results <- data.frame(
  Group    = rep(c("Nurses", "Police", "Firefighters", "Psychiatrists", "Welders"), each = 3),
  Effect   = rep(c("Risk dummy", "Help dummy", "Risk dummy × Help dummy"), times = 5),
  Beta     = c(0.20, 0.05, -0.51,
               0.24, 0.12, 0.01,
               0.29, 0.00586, 0.02,
               0.52, 0.36, -0.29,
               1.18, 0.40, 0.18),
  CI       = c("[-0.15, 0.54]", "[-0.30, 0.39]", "[-1.20, 0.18]",
               "[-0.08, 0.56]", "[-0.19, 0.44]", "[-0.63, 0.65]",
               "[0.01, 0.56]", "[-0.27, 0.28]", "[-0.53, 0.57]",
               "[0.19, 0.86]", "[0.02, 0.70]", "[-0.97, 0.38]",
               "[0.87, 1.49]", "[0.08, 0.71]", "[-0.44, 0.80]"),
  t_value  = c(1.12, 0.27, -1.46,
               1.49, 0.77, 0.03,
               2.04, 0.04, 0.06,
               3.05, 2.11, -0.86,
               7.46, 2.50, 0.57),
  p_value  = c(0.264, 0.789, 0.146,
               0.137, 0.442, 0.973,
               0.042, 0.967, 0.950,
               0.002, 0.035, 0.392,
               "<0.001", 0.013, 0.568),
  Std_Beta = c(0.07, 0.02, -0.09,
               0.09, 0.05, 0.00207,
               0.12, 0.00252, 0.00384,
               0.18, 0.13, -0.05,
               0.41, 0.14, 0.03),
  Std_CI   = c("[-0.05, 0.19]", "[-0.10, 0.14]", "[-0.21, 0.03]",
               "[-0.03, 0.21]", "[-0.07, 0.17]", "[-0.12, 0.12]",
               "[4.40e-03, 0.24]", "[-0.12, 0.12]", "[-0.12, 0.12]",
               "[0.06, 0.30]", "[7.95e-03, 0.24]", "[-0.17, 0.07]",
               "[0.30, 0.52]", "[0.03, 0.25]", "[-0.08, 0.14]"),
  stringsAsFactors = FALSE
)

# Create the effect sizes data frame
effect_sizes <- data.frame(
  Group    = rep(c("Nurses", "Police", "Firefighters", "Psychiatrists", "Welders"), each = 2),
  Effect   = rep(c("Risk", "Motivation"), times = 5),
  Cohen_d  = c(0.14, 0.03,
               0.18, 0.09,
               0.25, 0.00593,
               0.37, 0.25,
               0.89, 0.27),
  CI       = c("[-0.10, 0.37]", "[-0.20, 0.27]",
               "[-0.06, 0.42]", "[-0.14, 0.33]",
               "[0.01, 0.49]", "[-0.23, 0.24]",
               "[0.13, 0.61]", "[0.01, 0.49]",
               "[0.64, 1.14]", "[0.03, 0.51]"),
  stringsAsFactors = FALSE
)

regression_results_gt <- regression_results %>%
  gt(groupname_col = "Group") %>%  # automatically groups rows by the Group column
  fmt_number(
    columns = c("Beta", "t_value", "Std_Beta"),
    decimals = 2
  ) %>%
  tab_header(title = "Regression Results Summary")



## Create the gt table for effect sizes:
effect_sizes_gt <- effect_sizes %>%
  gt(groupname_col = "Group") %>%
  # Format numeric columns to display 2 decimals
  fmt_number(
    columns = c("Cohen_d"), # add any other numeric columns you want to format
    decimals = 2
  ) %>%
  tab_header(
    title = "Effect Sizes (Cohen's d)"
  ) %>%
  cols_label(
    Effect  = "Effect",
    Cohen_d = "Cohen's d",
    CI      = "95% CI"
  )

# Display the tables in an HTML document (in R Markdown, simply putting the table object in a code chunk will render it)
regression_results_gt
Regression Results Summary

| Group         | Effect                  | Beta  | 95% CI        | t     | p      | Std. Beta | Std. 95% CI       |
|---------------|-------------------------|-------|---------------|-------|--------|-----------|-------------------|
| Nurses        | Risk dummy              | 0.20  | [-0.15, 0.54] | 1.12  | 0.264  | 0.07      | [-0.05, 0.19]     |
| Nurses        | Help dummy              | 0.05  | [-0.30, 0.39] | 0.27  | 0.789  | 0.02      | [-0.10, 0.14]     |
| Nurses        | Risk dummy × Help dummy | −0.51 | [-1.20, 0.18] | −1.46 | 0.146  | −0.09     | [-0.21, 0.03]     |
| Police        | Risk dummy              | 0.24  | [-0.08, 0.56] | 1.49  | 0.137  | 0.09      | [-0.03, 0.21]     |
| Police        | Help dummy              | 0.12  | [-0.19, 0.44] | 0.77  | 0.442  | 0.05      | [-0.07, 0.17]     |
| Police        | Risk dummy × Help dummy | 0.01  | [-0.63, 0.65] | 0.03  | 0.973  | 0.00      | [-0.12, 0.12]     |
| Firefighters  | Risk dummy              | 0.29  | [0.01, 0.56]  | 2.04  | 0.042  | 0.12      | [4.40e-03, 0.24]  |
| Firefighters  | Help dummy              | 0.01  | [-0.27, 0.28] | 0.04  | 0.967  | 0.00      | [-0.12, 0.12]     |
| Firefighters  | Risk dummy × Help dummy | 0.02  | [-0.53, 0.57] | 0.06  | 0.950  | 0.00      | [-0.12, 0.12]     |
| Psychiatrists | Risk dummy              | 0.52  | [0.19, 0.86]  | 3.05  | 0.002  | 0.18      | [0.06, 0.30]      |
| Psychiatrists | Help dummy              | 0.36  | [0.02, 0.70]  | 2.11  | 0.035  | 0.13      | [7.95e-03, 0.24]  |
| Psychiatrists | Risk dummy × Help dummy | −0.29 | [-0.97, 0.38] | −0.86 | 0.392  | −0.05     | [-0.17, 0.07]     |
| Welders       | Risk dummy              | 1.18  | [0.87, 1.49]  | 7.46  | <0.001 | 0.41      | [0.30, 0.52]      |
| Welders       | Help dummy              | 0.40  | [0.08, 0.71]  | 2.50  | 0.013  | 0.14      | [0.03, 0.25]      |
| Welders       | Risk dummy × Help dummy | 0.18  | [-0.44, 0.80] | 0.57  | 0.568  | 0.03      | [-0.08, 0.14]     |

The table below synthesises the effect sizes (Cohen’s d) for each model.

effect_sizes_gt
Effect Sizes (Cohen's d)

| Group         | Effect     | Cohen's d | 95% CI        |
|---------------|------------|-----------|---------------|
| Nurses        | Risk       | 0.14      | [-0.10, 0.37] |
| Nurses        | Motivation | 0.03      | [-0.20, 0.27] |
| Police        | Risk       | 0.18      | [-0.06, 0.42] |
| Police        | Motivation | 0.09      | [-0.14, 0.33] |
| Firefighters  | Risk       | 0.25      | [0.01, 0.49]  |
| Firefighters  | Motivation | 0.01      | [-0.23, 0.24] |
| Psychiatrists | Risk       | 0.37      | [0.13, 0.61]  |
| Psychiatrists | Motivation | 0.25      | [0.01, 0.49]  |
| Welders       | Risk       | 0.89      | [0.64, 1.14]  |
| Welders       | Motivation | 0.27      | [0.03, 0.51]  |
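These per-occupation effect sizes can be recomputed with a short loop (a sketch; as in the overall analysis, the sign depends on the factor level order and may require releveling):

# Cohen's d for each manipulation, within each occupation condition
lapply(split(Set, Set$Job), function(d) {
  list(
    Risk       = effectsize::cohens_d(d$Heroism ~ relevel(as.factor(d$Risk), ref = "R")),
    Motivation = effectsize::cohens_d(d$Heroism ~ d$Help)
  )
})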

It turns out that our manipulations were fully effective only in the Psychiatrist and Welder conditions. These two roles are assumed to be “not typically heroic”. As such, we interpret these differences between roles as reflecting stereotypes of these occupations that are more malleable than those of the others: people might hold weaker prior opinions about psychiatrists and welders.


H1 Conclusion

Hypothesis 1 received support. However, the Motivation manipulation had a very small effect on heroism perception (eta2 < 1%; d = 0.12). Moreover, when looking at each specific job, it turns out that Psychiatrists and Welders (the least stereotyped jobs) are driving these effects. The other, typically heroised, occupations were largely unaffected by our manipulations.

There was NO interaction between the risk and motivation manipulations. The two effects are independent: motivation type did not moderate the effect of the risk manipulation.

interact_plot(mod, pred = "Risk_dummy", modx = "Help_dummy")


Hypotheses 2: Character attributions as predictors


From registration:

We will use three steps to evaluate the effects of our manipulation of perceived heroism:

A first regression model to assess: 
- The effect of the bravery perceptions on Heroism attribution across all occupations (H2a)
- The effect of the selflessness perceptions on Heroism attribution across all occupations  (H2b)
- Their interaction (exploratory analysis) 

A second model with Job type as covariate and moderator:
- The effect of the bravery perceptions on Heroism attribution across all occupations (H2a)
- The effect of the selflessness perceptions on Heroism attribution across all occupations  (H2b)
- Their interaction (exploratory analysis) 
- A higher order interaction when considering Job type (to control for potential higher order interaction - should occupation interact with any variable, we would further explore the interactions when decomposing by job types in independent OLS regression models, see Registered R script, section "Further explorations")

We will compare the two models to quantify the extent to which the effects of measured bravery and selflessness are explained by normative differences in occupation type: A significant reduction in the other predictors' main effect sizes when controlling for Job Type would indicate that these effects are partially dependent on normative evaluations of the job type.

A third model will be computed to control for a possible halo effect explaining positive correlations, using General attitude as a covariate (Model 3):
- The effect of the bravery perceptions on Heroism attribution across all occupations (H2a)
- The effect of the selflessness perceptions on Heroism attribution across all occupations  (H2b)
- Their interaction (exploratory analysis) 
- The effect of General attitude (covariate)

We will compare Model 1 and Model 3 to assess to what extent our effects are conditioned by a general halo effect -- that is heroism is explained by general attitude rather than our key variables. A significant reduction in the other predictors' main effect sizes when controlling for General attitude would indicate that these effects are partially dependent on the participant's attitude of the target occupation.

We will base our conclusions regarding the effects of our manipulations irrespective of the job types (i.e., across all jobs) on the Model 1. Model 2 is used to quantify normative evaluation effects in our effects, and Model 3 to quantify halo effects involved in our results.

H2 Model comparison with job

We compare a model with job as a covariate to a model that does not account for job. This will test the importance of the normative effect of jobs in heroism.

# Standardise (z-score) the continuous predictors
Set$Selfless_scale <- scale(Set$Selfless)
Set$Brave_scale <- scale(Set$Brave)

Set$Danger_scale <- scale(Set$Danger)
Set$Helpful_scale <- scale(Set$Helpfulness)
Set$Attitude_scale <- scale(Set$Attitude)

(mod<-lm(Heroism ~ Selfless_scale * Brave_scale , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Selfless_scale * Brave_scale, data = Set)
## 
## Coefficients:
##                (Intercept)              Selfless_scale  
##                     4.8831                      0.5745  
##                Brave_scale  Selfless_scale:Brave_scale  
##                     0.5844                      0.2073
mod_cov<-lm(Heroism ~ Selfless_scale * Brave_scale + Job , data = Set)
anova(mod, mod_cov)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Selfless_scale * Brave_scale
## Model 2: Heroism ~ Selfless_scale * Brave_scale + Job
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1   1356 1766.3                                  
## 2   1352 1663.6  4    102.74 20.874 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Here also, heroism ratings are driven by normative evaluations of the job – including Job as a covariate leads to a better fit (RSS = 1663.6) than the model without it (RSS = 1766.3; F(4, 1352) = 20.87, p < .001).

___

H2 Model comparison with attitude

To account for a possible Halo effect, let’s see if attitudes play a role in explaining heroism.

(mod<-lm(Heroism ~ Selfless_scale * Brave_scale , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Selfless_scale * Brave_scale, data = Set)
## 
## Coefficients:
##                (Intercept)              Selfless_scale  
##                     4.8831                      0.5745  
##                Brave_scale  Selfless_scale:Brave_scale  
##                     0.5844                      0.2073
summary(mod_cov2<-lm(Heroism ~ Selfless_scale * Brave_scale + Attitude_scale , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Selfless_scale * Brave_scale + Attitude_scale, 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.4204 -0.4442  0.1491  0.5796  2.8274 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 4.91561    0.03090 159.075  < 2e-16 ***
## Selfless_scale              0.29629    0.04015   7.379 2.76e-13 ***
## Brave_scale                 0.34209    0.04221   8.104 1.18e-15 ***
## Attitude_scale              0.67719    0.03679  18.408  < 2e-16 ***
## Selfless_scale:Brave_scale  0.15862    0.02058   7.708 2.46e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.021 on 1355 degrees of freedom
## Multiple R-squared:  0.5284, Adjusted R-squared:  0.527 
## F-statistic: 379.6 on 4 and 1355 DF,  p-value: < 2.2e-16
anova(mod, mod_cov2)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Selfless_scale * Brave_scale
## Model 2: Heroism ~ Selfless_scale * Brave_scale + Attitude_scale
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1   1356 1766.3                                  
## 2   1355 1413.0  1    353.35 338.85 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Massively so: the model using attitude as a covariate explained heroism far better (RSS = 1413.0) than the model omitting this variable (RSS = 1766.3; F(1, 1355) = 338.85, p < .001). It will be important to control for attitude in our conclusions.

==> JOB and ATTITUDE explain Heroism above and beyond the ratings of selflessness and bravery.

___

H2 Main registered model

summary(mod)
## 
## Call:
## lm(formula = Heroism ~ Selfless_scale * Brave_scale, data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.2958 -0.4015  0.2531  0.7042  4.1631 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 4.88313    0.03448  141.62   <2e-16 ***
## Selfless_scale              0.57453    0.04157   13.82   <2e-16 ***
## Brave_scale                 0.58440    0.04483   13.04   <2e-16 ***
## Selfless_scale:Brave_scale  0.20734    0.02281    9.09   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.141 on 1356 degrees of freedom
## Multiple R-squared:  0.4105, Adjusted R-squared:  0.4092 
## F-statistic: 314.8 on 3 and 1356 DF,  p-value: < 2.2e-16
confint(mod)
##                                2.5 %    97.5 %
## (Intercept)                4.8154906 4.9507726
## Selfless_scale             0.4929716 0.6560860
## Brave_scale                0.4964587 0.6723356
## Selfless_scale:Brave_scale 0.1625999 0.2520898
report(mod)
## We fitted a linear model (estimated using OLS) to predict Heroism with
## Selfless_scale and Brave_scale (formula: Heroism ~ Selfless_scale *
## Brave_scale). The model explains a statistically significant and substantial
## proportion of variance (R2 = 0.41, F(3, 1356) = 314.76, p < .001, adj. R2 =
## 0.41). The model's intercept, corresponding to Selfless_scale = 0 and
## Brave_scale = 0, is at 4.88 (95% CI [4.82, 4.95], t(1356) = 141.62, p < .001).
## Within this model:
## 
##   - The effect of Selfless scale is statistically significant and positive (beta
## = 0.57, 95% CI [0.49, 0.66], t(1356) = 13.82, p < .001; Std. beta = 0.39, 95%
## CI [0.33, 0.44])
##   - The effect of Brave scale is statistically significant and positive (beta =
## 0.58, 95% CI [0.50, 0.67], t(1356) = 13.04, p < .001; Std. beta = 0.39, 95% CI
## [0.33, 0.45])
##   - The effect of Selfless scale × Brave scale is statistically significant and
## positive (beta = 0.21, 95% CI [0.16, 0.25], t(1356) = 9.09, p < .001; Std. beta
## = 0.14, 95% CI [0.11, 0.17])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
mean_selfless <- mean(Set$Selfless_scale, na.rm = TRUE)
sd_selfless <- sd(Set$Selfless_scale, na.rm = TRUE)


Set$Selfless_p1sd <- Set$Selfless_scale - (mean_selfless + sd_selfless)
Set$Selfless_m1sd <- Set$Selfless_scale - (mean_selfless - sd_selfless)

model_p1sd <- lm(Heroism ~ Brave_scale * Selfless_p1sd, data = Set)
report(model_p1sd)
## We fitted a linear model (estimated using OLS) to predict Heroism with
## Brave_scale and Selfless_p1sd (formula: Heroism ~ Brave_scale * Selfless_p1sd).
## The model explains a statistically significant and substantial proportion of
## variance (R2 = 0.41, F(3, 1356) = 314.76, p < .001, adj. R2 = 0.41). The
## model's intercept, corresponding to Brave_scale = 0 and Selfless_p1sd = 0, is
## at 5.46 (95% CI [5.35, 5.56], t(1356) = 101.77, p < .001). Within this model:
## 
##   - The effect of Brave scale is statistically significant and positive (beta =
## 0.79, 95% CI [0.68, 0.90], t(1356) = 13.79, p < .001; Std. beta = 0.39, 95% CI
## [0.33, 0.45])
##   - The effect of Selfless p1sd is statistically significant and positive (beta =
## 0.57, 95% CI [0.49, 0.66], t(1356) = 13.82, p < .001; Std. beta = 0.39, 95% CI
## [0.33, 0.44])
##   - The effect of Brave scale × Selfless p1sd is statistically significant and
## positive (beta = 0.21, 95% CI [0.16, 0.25], t(1356) = 9.09, p < .001; Std. beta
## = 0.14, 95% CI [0.11, 0.17])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
model_m1sd <- lm(Heroism ~ Brave_scale * Selfless_m1sd, data = Set)
report(model_m1sd)
## We fitted a linear model (estimated using OLS) to predict Heroism with
## Brave_scale and Selfless_m1sd (formula: Heroism ~ Brave_scale * Selfless_m1sd).
## The model explains a statistically significant and substantial proportion of
## variance (R2 = 0.41, F(3, 1356) = 314.76, p < .001, adj. R2 = 0.41). The
## model's intercept, corresponding to Brave_scale = 0 and Selfless_m1sd = 0, is
## at 4.31 (95% CI [4.20, 4.42], t(1356) = 79.21, p < .001). Within this model:
## 
##   - The effect of Brave scale is statistically significant and positive (beta =
## 0.38, 95% CI [0.29, 0.46], t(1356) = 8.98, p < .001; Std. beta = 0.39, 95% CI
## [0.33, 0.45])
##   - The effect of Selfless m1sd is statistically significant and positive (beta =
## 0.57, 95% CI [0.49, 0.66], t(1356) = 13.82, p < .001; Std. beta = 0.39, 95% CI
## [0.33, 0.44])
##   - The effect of Brave scale × Selfless m1sd is statistically significant and
## positive (beta = 0.21, 95% CI [0.16, 0.25], t(1356) = 9.09, p < .001; Std. beta
## = 0.14, 95% CI [0.11, 0.17])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
ModIII <- car::Anova(mod, type = "III")
eta_squared(ModIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter                  | Eta2 (partial) |       95% CI
## ----------------------------------------------------------
## Selfless_scale             |           0.12 | [0.10, 1.00]
## Brave_scale                |           0.11 | [0.09, 1.00]
## Selfless_scale:Brave_scale |           0.06 | [0.04, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
interact_plot(mod, pred = "Brave_scale", modx = "Selfless_scale")

We replicate our previous study: selflessness and bravery account for heroism.

Here there is a synergy between the two individual perceptions: the more an occupation is believed to be brave, the more it is seen as heroic, and this is particularly true for occupations that are also perceived as selfless.
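
One way to read this synergy is to probe the slope of bravery at low, average, and high levels of perceived selflessness; the ±1 SD recentring above does exactly this, and an equivalent probe (a sketch, assuming the interactions package already used for the plots and simple-slopes analyses in this report) is:

# Simple slopes of bravery at -1 SD, mean, and +1 SD of selflessness
library(interactions)
sim_slopes(mod, pred = Brave_scale, modx = Selfless_scale)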

Accounting for a possible halo effect by controlling for attitude does not change our inferences (nor does accounting for the job's normative effects):

summary(mod_cov2)
## 
## Call:
## lm(formula = Heroism ~ Selfless_scale * Brave_scale + Attitude_scale, 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.4204 -0.4442  0.1491  0.5796  2.8274 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 4.91561    0.03090 159.075  < 2e-16 ***
## Selfless_scale              0.29629    0.04015   7.379 2.76e-13 ***
## Brave_scale                 0.34209    0.04221   8.104 1.18e-15 ***
## Attitude_scale              0.67719    0.03679  18.408  < 2e-16 ***
## Selfless_scale:Brave_scale  0.15862    0.02058   7.708 2.46e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.021 on 1355 degrees of freedom
## Multiple R-squared:  0.5284, Adjusted R-squared:  0.527 
## F-statistic: 379.6 on 4 and 1355 DF,  p-value: < 2.2e-16
report(mod_cov2)
## We fitted a linear model (estimated using OLS) to predict Heroism with
## Selfless_scale, Brave_scale and Attitude_scale (formula: Heroism ~
## Selfless_scale * Brave_scale + Attitude_scale). The model explains a
## statistically significant and substantial proportion of variance (R2 = 0.53,
## F(4, 1355) = 379.60, p < .001, adj. R2 = 0.53). The model's intercept,
## corresponding to Selfless_scale = 0, Brave_scale = 0 and Attitude_scale = 0, is
## at 4.92 (95% CI [4.85, 4.98], t(1355) = 159.07, p < .001). Within this model:
## 
##   - The effect of Selfless scale is statistically significant and positive (beta
## = 0.30, 95% CI [0.22, 0.38], t(1355) = 7.38, p < .001; Std. beta = 0.20, 95% CI
## [0.15, 0.25])
##   - The effect of Brave scale is statistically significant and positive (beta =
## 0.34, 95% CI [0.26, 0.42], t(1355) = 8.10, p < .001; Std. beta = 0.23, 95% CI
## [0.17, 0.29])
##   - The effect of Attitude scale is statistically significant and positive (beta
## = 0.68, 95% CI [0.61, 0.75], t(1355) = 18.41, p < .001; Std. beta = 0.46, 95%
## CI [0.41, 0.50])
##   - The effect of Selfless scale × Brave scale is statistically significant and
## positive (beta = 0.16, 95% CI [0.12, 0.20], t(1355) = 7.71, p < .001; Std. beta
## = 0.11, 95% CI [0.08, 0.13])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.

Assumption checks

Below, some plots diagnose the normality of residuals (QQ plot), homoscedasticity (residuals vs fitted plot), and the presence of influential cases (Cook's distance plot).

plot(mod)
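
As an optional complement to the base-R diagnostic plots, the performance package (from the same easystats family as report) collects these checks in a single figure; a minimal sketch, assuming performance is installed:

# One-stop model diagnostics (QQ plot, homoscedasticity, influential cases, etc.)
library(performance)
check_model(mod)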

Model Linearity

Because our main DV is ordinal, it is worth checking whether a more appropriate model (a cumulative link model) changes our inferences.

# Fitted values from the model
fitted_vals <- fitted(mod)

# Plot observed values against fitted values
plot(fitted_vals, Set$Heroism,
     xlab = "Fitted Values",
     ylab = "Observed Heroism",
     main = "Observed vs Fitted Values")
abline(0, 1, col = "blue", lty = 2)

library(ordinal)

# Treat the heroism rating as an ordered factor
Set$Heroes_ord <- factor(Set$Heroism, ordered = TRUE)

# Fit the cumulative link model (fixed effects only, hence clm rather than clmm)
clm_mod <- clm(Heroes_ord ~ Brave_scale * Selfless_scale, data = Set, link = "logit")

# Summarise the cumulative link model
summary(clm_mod)
## formula: Heroes_ord ~ Brave_scale * Selfless_scale
## data:    Set
## 
##  link  threshold nobs logLik   AIC     niter max.grad cond.H 
##  logit flexible  1360 -1939.75 3897.50 6(0)  3.94e-11 5.4e+01
## 
## Coefficients:
##                            Estimate Std. Error z value Pr(>|z|)    
## Brave_scale                 0.98821    0.07960   12.41   <2e-16 ***
## Selfless_scale              1.12946    0.07918   14.26   <2e-16 ***
## Brave_scale:Selfless_scale  0.45021    0.04471   10.07   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Threshold coefficients:
##     Estimate Std. Error z value
## 1|2 -4.56671    0.19789  -23.08
## 2|3 -3.30968    0.12539  -26.39
## 3|4 -2.37036    0.09618  -24.64
## 4|5 -0.87703    0.07253  -12.09
## 5|6  0.88164    0.07410   11.90
## 6|7  2.55186    0.10281   24.82

Using cumulative link regression does not influence our results.

Consistent with our predictions, heroes are perceived as significantly selfless and brave. An interaction can be observed, seemingly synergistic: the two variables reinforce each other. However, we note that the effect size associated with the interaction is quite modest.


H2 Outliers analyses

The Cook's distance plot does flag several highly influential observations:

ols_plot_cooksd_bar(mod)

This warrants the use of robust models or, at the very least, a comparison with a robust model.

Outlier analyses through model comparison with a robust model:

summary(Robmod<-lmrob(Heroism ~ Brave_scale * Selfless_scale, data = Set))
## 
## Call:
## lmrob(formula = Heroism ~ Brave_scale * Selfless_scale, data = Set)
##  \--> method = "MM"
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.5042 -0.5042  0.1853  0.5754  4.6326 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 4.97647    0.05490  90.653  < 2e-16 ***
## Brave_scale                 0.52584    0.06403   8.212 5.03e-16 ***
## Selfless_scale              0.72545    0.06639  10.927  < 2e-16 ***
## Brave_scale:Selfless_scale  0.20698    0.09187   2.253   0.0244 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 0.8809 
## Multiple R-squared:  0.5328, Adjusted R-squared:  0.5318 
## Convergence in 41 IRWLS iterations
## 
## Robustness weights: 
##  8 observations c(289,471,487,512,626,798,1095,1150)
##   are outliers with |weight| = 0 ( < 7.4e-05); 
##  57 weights are ~= 1. The remaining 1295 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.01441 0.84180 0.96150 0.87600 0.97890 0.99890 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         7.353e-05         2.302e-11         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)

The robust model did not change our main effects, but the interaction does appear weakened when accounting for extreme residuals.
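
To see where the estimates diverge, the OLS and robust coefficients can be put side by side (a sketch: the robust model is refitted with the same term order as mod so that the coefficients line up by name):

# OLS vs MM-type robust estimates, aligned by term (illustrative)
library(robustbase)
Robmod_ols_order <- lmrob(Heroism ~ Selfless_scale * Brave_scale, data = Set)
round(cbind(OLS = coef(mod), Robust = coef(Robmod_ols_order)), 3)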


H2 Decomposition of the effects within job

We registered an exploration of the effects within each occupation if any higher-order interaction involving the type of occupation emerged.

Let's see if there is a higher-order job interaction:

anova(lm(Heroism ~ Selfless_scale*Brave_scale*Job, data = Set))
## Analysis of Variance Table
## 
## Response: Heroism
##                                  Df  Sum Sq Mean Sq  F value    Pr(>F)    
## Selfless_scale                    1  981.91  981.91 807.4118 < 2.2e-16 ***
## Brave_scale                       1  140.47  140.47 115.5086 < 2.2e-16 ***
## Job                               4  139.60   34.90  28.6981 < 2.2e-16 ***
## Selfless_scale:Brave_scale        1   70.78   70.78  58.2030 4.463e-14 ***
## Selfless_scale:Job                4    2.50    0.63   0.5143 0.7252524    
## Brave_scale:Job                   4    7.72    1.93   1.5871 0.1752918    
## Selfless_scale:Brave_scale:Job    4   23.79    5.95   4.8902 0.0006457 ***
## Residuals                      1340 1629.60    1.22                       
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Job does have a (somewhat small) influence on our interaction.

As registered: Decomposition of the effects in each job.

To do this, we conduct ordinary least squares regressions (as there is no need for a random intercept anymore).

For each analysis, I report the shape of the interaction and compare the partial eta^2 of each predictor.
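
The per-occupation models are written out explicitly below; purely as an illustration, the same decomposition could be scripted in a single loop (a sketch using the sample-wide standardised predictors, whereas the code below re-standardises the ratings within each occupation):

# Illustrative loop over occupations (the explicit per-job code follows below)
job_models <- lapply(split(Set, Set$Job),
                     function(d) lm(Heroism ~ Brave_scale * Selfless_scale, data = d))
lapply(job_models, function(m) round(coef(m), 2))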

Firefighters
paste0("Firefighter analysis")
## [1] "Firefighter analysis"
Firef<-subset(Set, Set$Job == "F")
summary(FireMod<-lm(Heroism ~ Brave_scale * Selfless_scale, data = Firef))
## 
## Call:
## lm(formula = Heroism ~ Brave_scale * Selfless_scale, data = Firef)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.0064 -0.5562  0.4438  0.4438  2.0932 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 5.12141    0.08912  57.464  < 2e-16 ***
## Brave_scale                 0.65106    0.12642   5.150 5.04e-07 ***
## Selfless_scale              0.52142    0.09609   5.426 1.28e-07 ***
## Brave_scale:Selfless_scale  0.22611    0.04327   5.226 3.48e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.883 on 269 degrees of freedom
## Multiple R-squared:  0.4237, Adjusted R-squared:  0.4173 
## F-statistic: 65.92 on 3 and 269 DF,  p-value: < 2.2e-16
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter                  | Eta2 (partial) |       95% CI
## ----------------------------------------------------------
## Brave_scale                |           0.09 | [0.04, 1.00]
## Selfless_scale             |           0.10 | [0.05, 1.00]
## Brave_scale:Selfless_scale |           0.09 | [0.04, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(FireMod, pred = Brave_scale, modx = Selfless_scale)
## Warning: 1.38158931300079 is outside the observed range of Selfless_scale
## JOHNSON-NEYMAN INTERVAL
## 
## When Selfless_scale is OUTSIDE the interval [-4.32, -1.91], the slope of
## Brave_scale is p < .05.
## 
## Note: The range of observed values of Selfless_scale is [-3.37, 1.13]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Brave_scale when Selfless_scale = -0.2860620 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.59   0.12     4.86   0.00
## 
## Slope of Brave_scale when Selfless_scale =  0.5477637 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.77   0.14     5.55   0.00
## 
## Slope of Brave_scale when Selfless_scale =  1.3815893 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.96   0.16     5.86   0.00
interact_plot(FireMod, pred = Brave_scale, modx = Selfless_scale)
## Warning: 1.38158931300079 is outside the observed range of Selfless_scale

# etc.

In firefighters, there is a large bravery/heroism association. The interaction is significant and synergistic.

NHS
paste0("HC analysis")
## [1] "HC analysis"
HCrole<- subset(Set, Set$Job == "N")
HCrole$Brave <- scale(HCrole$Brave)
HCrole$Selfless <- scale(HCrole$Selfless)

summary(HCMod<-lm(Heroism ~ Brave * Selfless, data = HCrole))
## 
## Call:
## lm(formula = Heroism ~ Brave * Selfless, data = HCrole)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.5977 -0.4969  0.4023  0.5031  2.9977 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     5.30036    0.07737  68.510  < 2e-16 ***
## Brave           0.52032    0.11409   4.560 7.78e-06 ***
## Selfless        0.63472    0.10974   5.784 2.04e-08 ***
## Brave:Selfless  0.23868    0.04575   5.217 3.66e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.135 on 267 degrees of freedom
## Multiple R-squared:  0.3875, Adjusted R-squared:  0.3806 
## F-statistic: 56.31 on 3 and 267 DF,  p-value: < 2.2e-16
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter      | Eta2 (partial) |       95% CI
## ----------------------------------------------
## Brave          |           0.07 | [0.03, 1.00]
## Selfless       |           0.11 | [0.06, 1.00]
## Brave:Selfless |           0.09 | [0.04, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(HCMod, pred = Brave, modx = Selfless)
## Warning: 0.999999999999999 is outside the observed range of Selfless
## JOHNSON-NEYMAN INTERVAL
## 
## When Selfless is OUTSIDE the interval [-3.55, -1.27], the slope of Brave is
## p < .05.
## 
## Note: The range of observed values of Selfless is [-3.55, 0.93]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Brave when Selfless = -1.000000e+00 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.28   0.11     2.59   0.01
## 
## Slope of Brave when Selfless =  2.371002e-16 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.52   0.11     4.56   0.00
## 
## Slope of Brave when Selfless =  1.000000e+00 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.76   0.14     5.59   0.00
interact_plot(HCMod, pred = Brave, modx = Selfless)
## Warning: 0.999999999999999 is outside the observed range of Selfless

# etc.

In healthcare workers, heroism is associated mostly with selflessness (partial eta2 = 11%) and less so with bravery (7%). The interaction indicates a large synergy (9%).

Police officers
paste0("Police Officers")
## [1] "Police Officers"
Pol<- subset(Set, Set$Job == "P")
Pol$Brave <- scale(Pol$Brave)
Pol$Selfless <- scale(Pol$Selfless)

summary(PolMod<-lm(Heroism ~ Brave * Selfless, data = Pol))
## 
## Call:
## lm(formula = Heroism ~ Brave * Selfless, data = Pol)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.7383 -0.4540  0.1713  0.5759  3.8594 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     4.55066    0.07291  62.416  < 2e-16 ***
## Brave           0.39739    0.08614   4.613 6.15e-06 ***
## Selfless        0.45904    0.08437   5.441 1.20e-07 ***
## Brave:Selfless -0.04020    0.05025  -0.800    0.424    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.087 on 268 degrees of freedom
## Multiple R-squared:  0.3453, Adjusted R-squared:  0.338 
## F-statistic: 47.12 on 3 and 268 DF,  p-value: < 2.2e-16
HCPol_typeIII <- car::Anova(PolMod, type = "III")
eta_squared(HCPol_typeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter      | Eta2 (partial) |       95% CI
## ----------------------------------------------
## Brave          |           0.07 | [0.03, 1.00]
## Selfless       |           0.10 | [0.05, 1.00]
## Brave:Selfless |       2.38e-03 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(PolMod, pred = Brave, modx = Selfless)
## JOHNSON-NEYMAN INTERVAL
## 
## When Selfless is INSIDE the interval [-7.00, 2.26], the slope of Brave is p
## < .05.
## 
## Note: The range of observed values of Selfless is [-3.13, 1.64]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Brave when Selfless = -1.000000e+00 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.44   0.09     4.83   0.00
## 
## Slope of Brave when Selfless =  1.404871e-16 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.40   0.09     4.61   0.00
## 
## Slope of Brave when Selfless =  1.000000e+00 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.36   0.11     3.30   0.00
interact_plot(PolMod, pred = Brave, modx = Selfless)

# etc.

In police officers, heroism is predicted by both bravery (partial eta2 = 7%) and selflessness (partial eta2 = 10%). There is no interaction in the police condition (partial eta2 = 0.2%).

Psychiatrists
paste0("Psy analysis")
## [1] "Psy analysis"
PsyRole<- subset(Set, Set$Job == "Ps")
PsyRole$Brave <- scale(PsyRole$Brave)
PsyRole$Selfless <- scale(PsyRole$Selfless)


summary(PsyMod<-lm(Heroism ~ Brave * Selfless, data = PsyRole))
## 
## Call:
## lm(formula = Heroism ~ Brave * Selfless, data = PsyRole)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.2357 -0.6637  0.2474  0.7643  3.2342 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     4.35846    0.07809  55.813  < 2e-16 ***
## Brave           0.71198    0.09378   7.592 5.33e-13 ***
## Selfless        0.25024    0.08883   2.817 0.005212 ** 
## Brave:Selfless  0.19676    0.05354   3.675 0.000287 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.174 on 266 degrees of freedom
## Multiple R-squared:  0.3369, Adjusted R-squared:  0.3294 
## F-statistic: 45.05 on 3 and 266 DF,  p-value: < 2.2e-16
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter      | Eta2 (partial) |       95% CI
## ----------------------------------------------
## Brave          |           0.18 | [0.11, 1.00]
## Selfless       |           0.03 | [0.01, 1.00]
## Brave:Selfless |           0.05 | [0.01, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(PsyMod, pred = Brave, modx = Selfless)
## JOHNSON-NEYMAN INTERVAL
## 
## When Selfless is OUTSIDE the interval [-7.37, -2.33], the slope of Brave is
## p < .05.
## 
## Note: The range of observed values of Selfless is [-3.00, 1.42]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Brave when Selfless = -1.000000e+00 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.52   0.09     5.62   0.00
## 
## Slope of Brave when Selfless = -2.564307e-16 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.71   0.09     7.59   0.00
## 
## Slope of Brave when Selfless =  1.000000e+00 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.91   0.12     7.44   0.00
interact_plot(PsyMod, pred = Brave, modx = Selfless)

# etc.

Psychiatrists' heroism is largely derived from bravery (18%) and much less from selflessness (3%). The interaction is modest (5%) but indicates synergy.

Welders
paste0("Welders analysis")
## [1] "Welders analysis"
Weld<- subset(Set, Set$Job == "W")
Weld$Brave <- scale(Weld$Brave)
Weld$Selfless <- scale(Weld$Selfless)


summary(WeldMod<-lm(Heroism ~ Brave * Selfless, data = Weld))
## 
## Call:
## lm(formula = Heroism ~ Brave * Selfless, data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.8247 -0.7899  0.1753  0.7481  4.6123 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     4.59765    0.07732  59.465  < 2e-16 ***
## Brave           0.45945    0.08770   5.239 3.26e-07 ***
## Selfless        0.52031    0.08193   6.351 9.03e-10 ***
## Brave:Selfless  0.18801    0.05764   3.262  0.00125 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.205 on 270 degrees of freedom
## Multiple R-squared:  0.3087, Adjusted R-squared:  0.3011 
## F-statistic:  40.2 on 3 and 270 DF,  p-value: < 2.2e-16
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter      | Eta2 (partial) |       95% CI
## ----------------------------------------------
## Brave          |           0.09 | [0.04, 1.00]
## Selfless       |           0.13 | [0.07, 1.00]
## Brave:Selfless |           0.04 | [0.01, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(WeldMod, pred = Brave, modx = Selfless)
## JOHNSON-NEYMAN INTERVAL
## 
## When Selfless is OUTSIDE the interval [-5.63, -1.43], the slope of Brave is
## p < .05.
## 
## Note: The range of observed values of Selfless is [-3.85, 1.29]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Brave when Selfless = -1.000000e+00 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.27   0.09     3.16   0.00
## 
## Slope of Brave when Selfless =  2.791765e-16 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.46   0.09     5.24   0.00
## 
## Slope of Brave when Selfless =  1.000000e+00 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.65   0.12     5.35   0.00
interact_plot(WeldMod, pred = Brave, modx = Selfless)

# etc.

In welders, both variables contribute to heroism, with a small interaction.

H2 Conclusion

Support for Hypothesis 2: the perception of heroism is linked to perceptions of bravery and selflessness, and the effect sizes for our main effects are quite large. The interaction between the two attributes is quite small (this varies across jobs, with police officers not showing this interactive effect). When using robust regression (encouraged by the presence of very influential cases in the model), the interaction is also markedly weakened (p = .02, eta2 < 1%). Both selflessness and bravery contribute to heroism, but they appear to do so largely independently.

That said, there are discrepancies with our previous experiment. In our previous study, bravery contributed to heroism significantly more than selflessness in the firefighter and soldier conditions, and conversely for nurses. In contrast, in this study, the only occupation showing a marked difference between the two predictors was psychiatrists.


Hypotheses 3: Situation attributions as predictors

This tab contains:

We repeat the analyses using the situational ratings (Manipulation Check #2: perceived exposure to physical risk and perceived helpfulness) as predictors. From the registration:


We will use three steps to evaluate the effects of our manipulation of perceived heroism:

A first regression model to assess: 
- The effect of the perceived exposure to physical threat on Heroism attribution across all occupations  (H3a)
- The effect of the perceived help to others on Heroism attribution across all occupations  (H3b)
- Their interaction (exploratory analysis) 

A second model with Job type as covariate and moderator:
- The effect of the perceived exposure to physical threat on Heroism attribution across all occupations  (H3a)
- The effect of the perceived help to others on Heroism attribution across all occupations  (H3b)
- Their interaction (exploratory analysis) 
- A higher order interaction when considering Job type (to control for potential higher order interaction - should occupation interact with any variable, we would further explore the interactions when decomposing by job types in independent OLS regression models, see Registered R script, section "Further explorations")

We will compare the two models to quantify the extent to which the effects of perceived danger and helpfulness are explained by normative differences in occupation type: A significant reduction in the other predictors' main effect sizes when controlling for Job Type would indicate that these effects are partially dependent on normative evaluations of the job type.

A third model will be computed to control for a possible halo effect explaining positive correlations, using General attitude as a covariate (Model 3):
- The effect of the perceived exposure to physical threat on Heroism attribution across all occupations  (H3a)
- The effect of the perceived help to others on Heroism attribution across all occupations  (H3b)
- Their interaction (exploratory analysis) 
- The effect of General attitude (covariate)

We will compare Model 1 and Model 3 to assess to what extent our effects are conditioned by a general halo effect -- that is, heroism being explained by general attitude rather than by our key variables. A significant reduction in the other predictors' main effect sizes when controlling for General attitude would indicate that these effects are partially dependent on the participant's attitude toward the target occupation.

We will base our conclusions regarding the effects of our manipulations irrespective of the job types (i.e., across all jobs) on Model 1. Model 2 is used to quantify the contribution of normative evaluations to our effects, and Model 3 the contribution of halo effects to our results.
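
For reference, these registered models map onto the following R formulas (a sketch only; the fitted versions below are named mod, mod_cov, and mod_cov2, with the higher-order Job interaction tested in a separate full-factorial model):

# Registered model structure for H3 (illustrative; fitted below on Set)
f1 <- Heroism ~ Helpfulness_scale * Danger_scale                    # Model 1
f2 <- Heroism ~ Helpfulness_scale * Danger_scale + Job              # Model 2 (Job as covariate)
f3 <- Heroism ~ Helpfulness_scale * Danger_scale + Attitude_scale   # Model 3 (attitude as covariate)
# e.g., Model 1 is fitted as lm(f1, data = Set)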

H3 Model comparison with job

Set$Danger_scale <- scale(Set$Danger)
Set$Helpfulness_scale <- scale(Set$Helpfulness)

(mod<-lm(Heroism ~ Helpfulness_scale * Danger_scale , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Helpfulness_scale * Danger_scale, data = Set)
## 
## Coefficients:
##                    (Intercept)               Helpfulness_scale  
##                         5.0023                          0.7401  
##                   Danger_scale  Helpfulness_scale:Danger_scale  
##                         0.3562                          0.0433
mod_cov<-lm(Heroism ~ Helpfulness_scale * Danger_scale + Job , data = Set)
anova(mod, mod_cov)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Helpfulness_scale * Danger_scale
## Model 2: Heroism ~ Helpfulness_scale * Danger_scale + Job
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1   1356 1812.3                                  
## 2   1352 1719.4  4    92.891 18.261 1.272e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Here also, heroism ratings are partly driven by normative evaluations of the job: including Job as a covariate leads to a better fit (RSS = 1719.4, vs RSS = 1812.3 without it), F = 18.26, p < .001.
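
As above, the size of this improvement can also be read off the two R-squared values (a minimal sketch reusing mod and mod_cov):

# R-squared with vs without the Job covariate (illustrative)
c(R2_without_job = summary(mod)$r.squared,
  R2_with_job    = summary(mod_cov)$r.squared)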

H3 Model comparison with attitude

To account for a possible Halo effect, let’s see if attitudes play a role in explaining heroism.

(mod<-lm(Heroism ~ Helpfulness_scale * Danger_scale , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Helpfulness_scale * Danger_scale, data = Set)
## 
## Coefficients:
##                    (Intercept)               Helpfulness_scale  
##                         5.0023                          0.7401  
##                   Danger_scale  Helpfulness_scale:Danger_scale  
##                         0.3562                          0.0433
summary(mod_cov2<-lm(Heroism ~ Helpfulness_scale * Danger_scale + Attitude_scale , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Helpfulness_scale * Danger_scale + Attitude_scale, 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.3125 -0.4388  0.1466  0.6875  2.9213 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     4.98466    0.03007 165.753  < 2e-16 ***
## Helpfulness_scale               0.34542    0.03916   8.820  < 2e-16 ***
## Danger_scale                    0.22601    0.03256   6.941 6.03e-12 ***
## Attitude_scale                  0.70355    0.03910  17.993  < 2e-16 ***
## Helpfulness_scale:Danger_scale  0.08330    0.02389   3.487 0.000505 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.039 on 1355 degrees of freedom
## Multiple R-squared:  0.5118, Adjusted R-squared:  0.5104 
## F-statistic: 355.1 on 4 and 1355 DF,  p-value: < 2.2e-16
anova(mod, mod_cov2)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Helpfulness_scale * Danger_scale
## Model 2: Heroism ~ Helpfulness_scale * Danger_scale + Attitude_scale
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1   1356 1812.3                                  
## 2   1355 1462.8  1    349.49 323.74 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Massively so: the model using attitude as a covariate explained heroism substantially better (RSS = 1462.8) than the model omitting this variable (RSS = 1812.3), F = 323.74, p < .001. It will be important to control for attitude in our conclusions.

==> JOB and ATTITUDE explain Heroism above and beyond ratings of physical danger and helpfulness.


H3 Main registered model

summary(mod)
## 
## Call:
## lm(formula = Heroism ~ Helpfulness_scale * Danger_scale, data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.0765 -0.5704  0.1585  0.8965  3.6948 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     5.00226    0.03344 149.575   <2e-16 ***
## Helpfulness_scale               0.74007    0.03610  20.502   <2e-16 ***
## Danger_scale                    0.35615    0.03533  10.082   <2e-16 ***
## Helpfulness_scale:Danger_scale  0.04330    0.02647   1.636    0.102    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.156 on 1356 degrees of freedom
## Multiple R-squared:  0.3952, Adjusted R-squared:  0.3938 
## F-statistic: 295.3 on 3 and 1356 DF,  p-value: < 2.2e-16
confint(mod)
##                                       2.5 %     97.5 %
## (Intercept)                     4.936657149 5.06786894
## Helpfulness_scale               0.669252945 0.81088079
## Danger_scale                    0.286851890 0.42545283
## Helpfulness_scale:Danger_scale -0.008619894 0.09522744
report(mod) 
## We fitted a linear model (estimated using OLS) to predict Heroism with
## Helpfulness_scale and Danger_scale (formula: Heroism ~ Helpfulness_scale *
## Danger_scale). The model explains a statistically significant and substantial
## proportion of variance (R2 = 0.40, F(3, 1356) = 295.33, p < .001, adj. R2 =
## 0.39). The model's intercept, corresponding to Helpfulness_scale = 0 and
## Danger_scale = 0, is at 5.00 (95% CI [4.94, 5.07], t(1356) = 149.58, p < .001).
## Within this model:
## 
##   - The effect of Helpfulness scale is statistically significant and positive
## (beta = 0.74, 95% CI [0.67, 0.81], t(1356) = 20.50, p < .001; Std. beta = 0.50,
## 95% CI [0.45, 0.55])
##   - The effect of Danger scale is statistically significant and positive (beta =
## 0.36, 95% CI [0.29, 0.43], t(1356) = 10.08, p < .001; Std. beta = 0.24, 95% CI
## [0.19, 0.29])
##   - The effect of Helpfulness scale × Danger scale is statistically
## non-significant and positive (beta = 0.04, 95% CI [-8.62e-03, 0.10], t(1356) =
## 1.64, p = 0.102; Std. beta = 0.03, 95% CI [-5.81e-03, 0.06])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
ModIII <- car::Anova(mod, type = "III")
eta_squared(ModIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter                      | Eta2 (partial) |       95% CI
## --------------------------------------------------------------
## Helpfulness_scale              |           0.24 | [0.21, 1.00]
## Danger_scale                   |           0.07 | [0.05, 1.00]
## Helpfulness_scale:Danger_scale |       1.97e-03 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].

We replicated the previous study, in particular:

  • A main effect of Helpfulness: beta = 0.74, 95% CI [0.67, 0.81], t(1356) = 20.50, p < .001; Std. beta = 0.50, 95% CI [0.45, 0.55]
  • A main effect of Danger: beta = 0.36, 95% CI [0.29, 0.43], t(1356) = 10.08, p < .001; Std. beta = 0.24, 95% CI [0.19, 0.29]
  • No interaction between these features (p = .1)
interact_plot(mod, pred = "Danger_scale", modx = "Helpfulness_scale")
## Warning: 1.00000000000001 is outside the observed range of Helpfulness_scale

Here, there is no interaction between the evaluations of danger and helpfulness.

Accounting for a possible halo effect by controlling for attitude does not change our main inferences:

summary(mod_cov2)
## 
## Call:
## lm(formula = Heroism ~ Helpfulness_scale * Danger_scale + Attitude_scale, 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.3125 -0.4388  0.1466  0.6875  2.9213 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     4.98466    0.03007 165.753  < 2e-16 ***
## Helpfulness_scale               0.34542    0.03916   8.820  < 2e-16 ***
## Danger_scale                    0.22601    0.03256   6.941 6.03e-12 ***
## Attitude_scale                  0.70355    0.03910  17.993  < 2e-16 ***
## Helpfulness_scale:Danger_scale  0.08330    0.02389   3.487 0.000505 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.039 on 1355 degrees of freedom
## Multiple R-squared:  0.5118, Adjusted R-squared:  0.5104 
## F-statistic: 355.1 on 4 and 1355 DF,  p-value: < 2.2e-16
report(mod_cov2)
## We fitted a linear model (estimated using OLS) to predict Heroism with
## Helpfulness_scale, Danger_scale and Attitude_scale (formula: Heroism ~
## Helpfulness_scale * Danger_scale + Attitude_scale). The model explains a
## statistically significant and substantial proportion of variance (R2 = 0.51,
## F(4, 1355) = 355.15, p < .001, adj. R2 = 0.51). The model's intercept,
## corresponding to Helpfulness_scale = 0, Danger_scale = 0 and Attitude_scale =
## 0, is at 4.98 (95% CI [4.93, 5.04], t(1355) = 165.75, p < .001). Within this
## model:
## 
##   - The effect of Helpfulness scale is statistically significant and positive
## (beta = 0.35, 95% CI [0.27, 0.42], t(1355) = 8.82, p < .001; Std. beta = 0.23,
## 95% CI [0.18, 0.28])
##   - The effect of Danger scale is statistically significant and positive (beta =
## 0.23, 95% CI [0.16, 0.29], t(1355) = 6.94, p < .001; Std. beta = 0.15, 95% CI
## [0.11, 0.20])
##   - The effect of Attitude scale is statistically significant and positive (beta
## = 0.70, 95% CI [0.63, 0.78], t(1355) = 17.99, p < .001; Std. beta = 0.47, 95%
## CI [0.42, 0.53])
##   - The effect of Helpfulness scale × Danger scale is statistically significant
## and positive (beta = 0.08, 95% CI [0.04, 0.13], t(1355) = 3.49, p < .001; Std.
## beta = 0.06, 95% CI [0.02, 0.09])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.

However, the interaction is now significant, which is somewhat unexpected.

interact_plot(mod_cov2, pred = "Danger_scale", modx = "Helpfulness_scale")
## Warning: 1.00000000000001 is outside the observed range of Helpfulness_scale

So: the part of heroism that is not predicted by attitude is predicted by perceptions of helpfulness and danger, which work together in a synergistic way.
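
To make this contrast concrete, the interaction term can be extracted from the models with and without the attitude covariate (a minimal sketch reusing the objects fitted above):

# Interaction estimate with vs without the attitude covariate (illustrative)
rbind(without_attitude = summary(mod)$coefficients["Helpfulness_scale:Danger_scale", ],
      with_attitude    = summary(mod_cov2)$coefficients["Helpfulness_scale:Danger_scale", ])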

Assumption checks

plot(mod)

Model Linearity

# Fitted values from the model
fitted_vals <- fitted(mod)

# Plot observed values against fitted values
plot(fitted_vals, Set$Heroism,
     xlab = "Fitted Values",
     ylab = "Observed Heroism",
     main = "Observed vs Fitted Values")
abline(0, 1, col = "blue", lty = 2)

library(ordinal)

# Treat the heroism rating as an ordered factor
Set$Heroes_ord <- factor(Set$Heroism, ordered = TRUE)

# Fit the cumulative link model (fixed effects only, hence clm rather than clmm)
clm_mod <- clm(Heroes_ord ~ Danger_scale * Helpfulness_scale, data = Set, link = "logit")

# Summarise the cumulative link model
summary(clm_mod)
## formula: Heroes_ord ~ Danger_scale * Helpfulness_scale
## data:    Set
## 
##  link  threshold nobs logLik   AIC     niter max.grad cond.H 
##  logit flexible  1360 -1974.77 3967.55 6(0)  1.56e-11 5.8e+01
## 
## Coefficients:
##                                Estimate Std. Error z value Pr(>|z|)    
## Danger_scale                    0.64401    0.05983  10.764  < 2e-16 ***
## Helpfulness_scale               1.27886    0.06532  19.580  < 2e-16 ***
## Danger_scale:Helpfulness_scale  0.18919    0.04359   4.341 1.42e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Threshold coefficients:
##     Estimate Std. Error z value
## 1|2 -5.00003    0.21767  -22.97
## 2|3 -3.60312    0.13650  -26.40
## 3|4 -2.55879    0.10059  -25.44
## 4|5 -0.98444    0.07063  -13.94
## 5|6  0.67737    0.06771   10.01
## 6|7  2.16274    0.08871   24.38

Using cumulative link regression does not influence our main predictions, BUT an interaction between Danger and Helpfulness now appears. This might be something to explore.

Consistent with our predictions, heroes are perceived as significantly helpful and as facing dangerous situations. Looking at the effect sizes, heroism appears to be MOSTLY driven by the perception of helpfulness (eta2 = 42%) rather than danger (eta2 = 8%): heroes help people; they are not necessarily in danger. These two effects seem, on a surface level, to work independently. However, an interaction can be observed, seemingly synergistic, but only when accounting for a halo effect OR when using the CLM: the two variables reinforce each other. We note, however, that the effect size associated with this interaction is quite modest: eta2 = 0.8%.


H3 Outliers analyses

The Cook's distance plot does flag several highly influential observations:

ols_plot_cooksd_bar(mod)

This warrants the use of robust models or, at the very least, a comparison with a robust model.

Outlier analyses through model comparison with a robust model:

summary(Robmod<-lmrob(Heroism ~ Danger_scale * Helpfulness_scale, data = Set))
## 
## Call:
## lmrob(formula = Heroism ~ Danger_scale * Helpfulness_scale, data = Set)
##  \--> method = "MM"
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -5.35359 -0.61788  0.01426  0.64641  3.96659 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     5.10176    0.03226 158.125  < 2e-16 ***
## Danger_scale                    0.39099    0.04168   9.380  < 2e-16 ***
## Helpfulness_scale               0.83700    0.03579  23.385  < 2e-16 ***
## Danger_scale:Helpfulness_scale  0.10235    0.02523   4.056 5.27e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 0.9275 
## Multiple R-squared:  0.5241, Adjusted R-squared:  0.523 
## Convergence in 11 IRWLS iterations
## 
## Robustness weights: 
##  14 observations c(128,306,363,379,471,626,689,798,936,1036,1150,1230,1305,1309)
##   are outliers with |weight| = 0 ( < 7.4e-05); 
##  113 weights are ~= 1. The remaining 1233 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.02519 0.84750 0.95620 0.88450 0.97740 0.99800 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         7.353e-05         2.841e-11         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)

The robust model did not change our main effects, but the interaction appears STRONGER when accounting for extreme residuals. It might be worth investigating: the absence of an interaction in the OLS model might be driven by outliers.
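
As for H2, the OLS and robust estimates can be compared directly (a sketch: the robust model is refitted with the same term order as mod so that the coefficients line up by name):

# OLS vs robust estimates for the H3 model, aligned by term (illustrative)
Robmod_H3 <- lmrob(Heroism ~ Helpfulness_scale * Danger_scale, data = Set)
round(cbind(OLS = coef(mod), Robust = coef(Robmod_H3)), 3)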

In particular, the interaction is present when accounting for attitude, when using the CLM, and when using the robust model.


H3 Decomposition of the effects within job

We registered an exploration of the effects within each occupation if any higher-order interaction involving the type of occupation emerged.

Let's see if there is a higher-order job interaction:

anova(lm(Heroism ~ Helpfulness_scale*Danger_scale*Job, data = Set))
## Analysis of Variance Table
## 
## Response: Heroism
##                                      Df  Sum Sq Mean Sq  F value    Pr(>F)    
## Helpfulness_scale                     1 1048.23 1048.23 835.1014 < 2.2e-16 ***
## Danger_scale                          1  132.29  132.29 105.3915 < 2.2e-16 ***
## Job                                   4   96.32   24.08  19.1846 2.332e-15 ***
## Helpfulness_scale:Danger_scale        1    0.15    0.15   0.1158  0.733699    
## Helpfulness_scale:Job                 4   21.04    5.26   4.1901  0.002241 ** 
## Danger_scale:Job                      4   13.56    3.39   2.7001  0.029333 *  
## Helpfulness_scale:Danger_scale:Job    4    2.80    0.70   0.5581  0.693133    
## Residuals                          1340 1681.99    1.26                       
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Job does have a (somewhat small) influence on our main predictions (the effect of danger and the effect of helpfulness).

As registered: Decomposition of the effects in each job.

To do this, we conduct ordinary least squares regressions (as there is no need for a random intercept anymore).

For each analysis, I report the shape of the interaction and compare the partial eta^2 of each predictor.
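
As before, the explicit per-occupation models below could equivalently be produced in a single loop (an illustrative sketch with the sample-wide standardised predictors; the code below re-standardises within each occupation):

# Illustrative loop over occupations for the H3 predictors
job_models_h3 <- lapply(split(Set, Set$Job),
                        function(d) lm(Heroism ~ Danger_scale * Helpfulness_scale, data = d))
lapply(job_models_h3, function(m) round(coef(m), 2))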

Firefighters
paste0("Firefighter analysis")
## [1] "Firefighter analysis"
Firef<-subset(Set, Set$Job == "F")
summary(FireMod<-lm(Heroism ~ Danger_scale * Helpfulness_scale, data = Firef))
## 
## Call:
## lm(formula = Heroism ~ Danger_scale * Helpfulness_scale, data = Firef)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.3549 -0.3549  0.3133  0.6451  1.7724 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     5.39064    0.08984  60.005  < 2e-16 ***
## Danger_scale                    0.33814    0.10843   3.118  0.00202 ** 
## Helpfulness_scale               0.64213    0.12138   5.290 2.53e-07 ***
## Danger_scale:Helpfulness_scale  0.04147    0.05537   0.749  0.45454    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9777 on 269 degrees of freedom
## Multiple R-squared:  0.2935, Adjusted R-squared:  0.2856 
## F-statistic: 37.24 on 3 and 269 DF,  p-value: < 2.2e-16
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter                      | Eta2 (partial) |       95% CI
## --------------------------------------------------------------
## Danger_scale                   |           0.03 | [0.01, 1.00]
## Helpfulness_scale              |           0.09 | [0.05, 1.00]
## Danger_scale:Helpfulness_scale |       2.08e-03 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(FireMod, pred = Danger_scale, modx = Helpfulness_scale)
## Warning: 1.23764448941848 is outside the observed range of Helpfulness_scale
## JOHNSON-NEYMAN INTERVAL
## 
## When Helpfulness_scale is INSIDE the interval [-1.40, 4.85], the slope of
## Danger_scale is p < .05.
## 
## Note: The range of observed values of Helpfulness_scale is [-4.39, 0.92]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Danger_scale when Helpfulness_scale = -0.1400676 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.33   0.11     3.02   0.00
## 
## Slope of Danger_scale when Helpfulness_scale =  0.5487884 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.36   0.11     3.34   0.00
## 
## Slope of Danger_scale when Helpfulness_scale =  1.2376445 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.39   0.12     3.27   0.00
interact_plot(FireMod, pred = Danger_scale, modx = Helpfulness_scale)
## Warning: 1.23764448941848 is outside the observed range of Helpfulness_scale

# etc.

In firefighters, heroism is driven by Helpfulness first and by Danger second. No interaction.

NHS
paste0("HC analysis")
## [1] "HC analysis"
HCrole<- subset(Set, Set$Job == "N")
HCrole$Danger <- scale(HCrole$Danger)
HCrole$Helpfulness <- scale(HCrole$Helpfulness)

summary(HCMod<-lm(Heroism ~ Danger * Helpfulness, data = HCrole))
## 
## Call:
## lm(formula = Heroism ~ Danger * Helpfulness, data = HCrole)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.0583 -0.5187  0.1772  0.6729  3.0262 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         5.49175    0.07663  71.667  < 2e-16 ***
## Danger              0.34346    0.07931   4.331 2.10e-05 ***
## Helpfulness         0.65103    0.08681   7.499 9.52e-13 ***
## Danger:Helpfulness -0.01819    0.06774  -0.268    0.789    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.153 on 267 degrees of freedom
## Multiple R-squared:  0.3683, Adjusted R-squared:  0.3612 
## F-statistic:  51.9 on 3 and 267 DF,  p-value: < 2.2e-16
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter          | Eta2 (partial) |       95% CI
## --------------------------------------------------
## Danger             |           0.07 | [0.03, 1.00]
## Helpfulness        |           0.17 | [0.11, 1.00]
## Danger:Helpfulness |       2.70e-04 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(HCMod, pred = Danger, modx = Helpfulness)
## Warning: 1 is outside the observed range of Helpfulness
## JOHNSON-NEYMAN INTERVAL
## 
## When Helpfulness is INSIDE the interval [-2.81, 1.91], the slope of Danger
## is p < .05.
## 
## Note: The range of observed values of Helpfulness is [-4.32, 0.71]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Danger when Helpfulness = -1.000000e+00 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.36   0.10     3.61   0.00
## 
## Slope of Danger when Helpfulness =  1.712447e-16 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.34   0.08     4.33   0.00
## 
## Slope of Danger when Helpfulness =  1.000000e+00 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.33   0.11     3.00   0.00
interact_plot(HCMod, pred = Danger, modx = Helpfulness)
## Warning: 1 is outside the observed range of Helpfulness

# etc.

Similar conclusions for nurses: Helpfulness > Danger. No interaction.

Police
paste0("Police Officers")
## [1] "Police Officers"
Pol<- subset(Set, Set$Job == "P")
Pol$Danger <- scale(Pol$Danger)
Pol$Helpfulness <- scale(Pol$Helpfulness)

summary(PolMod<-lm(Heroism ~ Danger * Helpfulness, data = Pol))
## 
## Call:
## lm(formula = Heroism ~ Danger * Helpfulness, data = Pol)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.8871 -0.5108  0.0161  0.8233  3.0095 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         4.48612    0.07166  62.599   <2e-16 ***
## Danger              0.14025    0.08008   1.751    0.081 .  
## Helpfulness         0.76996    0.08153   9.444   <2e-16 ***
## Danger:Helpfulness  0.06749    0.05547   1.217    0.225    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.053 on 268 degrees of freedom
## Multiple R-squared:  0.3859, Adjusted R-squared:  0.379 
## F-statistic: 56.13 on 3 and 268 DF,  p-value: < 2.2e-16
HCPol_typeIII <- car::Anova(PolMod, type = "III")
eta_squared(HCPol_typeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter          | Eta2 (partial) |       95% CI
## --------------------------------------------------
## Danger             |           0.01 | [0.00, 1.00]
## Helpfulness        |           0.25 | [0.18, 1.00]
## Danger:Helpfulness |       5.49e-03 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(PolMod, pred = Danger, modx = Helpfulness)
## JOHNSON-NEYMAN INTERVAL
## 
## When Helpfulness is INSIDE the interval [0.53, 1.33], the slope of Danger
## is p < .05.
## 
## Note: The range of observed values of Helpfulness is [-3.03, 1.46]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Danger when Helpfulness = -1.000000e+00 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.07   0.09     0.81   0.42
## 
## Slope of Danger when Helpfulness =  4.612324e-17 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.14   0.08     1.75   0.08
## 
## Slope of Danger when Helpfulness =  1.000000e+00 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.21   0.10     2.00   0.05
interact_plot(PolMod, pred = Danger, modx = Helpfulness)

# etc.

In police officers, heroism is predicted by Helpfulness (eta2 = 25%). However, Danger does not predict heroism. There is no interaction to report either.

Psychiatrists
paste0("Psy analysis")
## [1] "Psy analysis"
PsyRole<- subset(Set, Set$Job == "Ps")
PsyRole$Danger <- scale(PsyRole$Danger)
PsyRole$Helpfulness <- scale(PsyRole$Helpfulness)


summary(PsyMod<-lm(Heroism ~ Danger * Helpfulness, data = PsyRole))
## 
## Call:
## lm(formula = Heroism ~ Danger * Helpfulness, data = PsyRole)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.4511 -0.6129  0.1978  0.6907  2.3997 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         4.46569    0.07640  58.454  < 2e-16 ***
## Danger              0.18022    0.07926   2.274   0.0238 *  
## Helpfulness         0.73211    0.08436   8.678  4.1e-16 ***
## Danger:Helpfulness  0.02080    0.06115   0.340   0.7340    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.188 on 266 degrees of freedom
## Multiple R-squared:  0.3213, Adjusted R-squared:  0.3136 
## F-statistic: 41.97 on 3 and 266 DF,  p-value: < 2.2e-16
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter          | Eta2 (partial) |       95% CI
## --------------------------------------------------
## Danger             |           0.02 | [0.00, 1.00]
## Helpfulness        |           0.22 | [0.15, 1.00]
## Danger:Helpfulness |       4.35e-04 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(PsyMod, pred = Danger, modx = Helpfulness)
## Warning: 1 is outside the observed range of Helpfulness
## JOHNSON-NEYMAN INTERVAL
## 
## When Helpfulness is INSIDE the interval [-0.51, 1.13], the slope of Danger
## is p < .05.
## 
## Note: The range of observed values of Helpfulness is [-4.62, 1.00]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Danger when Helpfulness = -1.000000e+00 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.16   0.10     1.57   0.12
## 
## Slope of Danger when Helpfulness = -2.297031e-16 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.18   0.08     2.27   0.02
## 
## Slope of Danger when Helpfulness =  1.000000e+00 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.20   0.10     2.04   0.04
interact_plot(PsyMod, pred = Danger, modx = Helpfulness)
## Warning: 1 is outside the observed range of Helpfulness

# etc.

Psychiatrists' heroism is largely driven by Helpfulness (22%) and much less by Danger (2%). The interaction is not significant.

Welders

paste0("Welders analysis")
## [1] "Welders analysis"
Weld<- subset(Set, Set$Job == "W")
Weld$Danger <- scale(Weld$Danger)
Weld$Helpfulness <- scale(Weld$Helpfulness)


summary(WeldMod<-lm(Heroism ~ Danger * Helpfulness, data = Weld))
## 
## Call:
## lm(formula = Heroism ~ Danger * Helpfulness, data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.6892 -0.6892  0.0038  0.7559  2.9158 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         4.69611    0.07579  61.961  < 2e-16 ***
## Danger              0.42995    0.07669   5.606 5.11e-08 ***
## Helpfulness         0.54876    0.07616   7.206 5.78e-12 ***
## Danger:Helpfulness -0.05174    0.07329  -0.706    0.481    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.213 on 270 degrees of freedom
## Multiple R-squared:  0.2996, Adjusted R-squared:  0.2918 
## F-statistic:  38.5 on 3 and 270 DF,  p-value: < 2.2e-16
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter          | Eta2 (partial) |       95% CI
## --------------------------------------------------
## Danger             |           0.10 | [0.05, 1.00]
## Helpfulness        |           0.16 | [0.10, 1.00]
## Danger:Helpfulness |       1.84e-03 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
sim_slopes(WeldMod, pred = Danger, modx = Helpfulness)
## JOHNSON-NEYMAN INTERVAL
## 
## When Helpfulness is INSIDE the interval [-4.66, 1.92], the slope of Danger
## is p < .05.
## 
## Note: The range of observed values of Helpfulness is [-3.55, 1.23]
## 
## SIMPLE SLOPES ANALYSIS
## 
## Slope of Danger when Helpfulness = -1.00000e+00 (- 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.48   0.10     4.84   0.00
## 
## Slope of Danger when Helpfulness =  3.99113e-17 (Mean): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.43   0.08     5.61   0.00
## 
## Slope of Danger when Helpfulness =  1.00000e+00 (+ 1 SD): 
## 
##   Est.   S.E.   t val.      p
## ------ ------ -------- ------
##   0.38   0.11     3.37   0.00
interact_plot(WeldMod, pred = Danger, modx = Helpfulness)

# etc.

No interaction, and a greater contribution from Helpfulness than from Danger.


H3 Conclusion

Support for Hypothesis 3: the perception of heroism is linked to the perception of Danger and Helpfulness. The effect sizes for our main effects are quite large, with a clear advantage for Helpfulness. This advantage may stem from the fact that police officers can be vilified even though everyone agrees they are objectively in risky situations. In a nutshell, people can disagree on bravery and heroism, but it would be counterfactual to disagree on the risk involved in an occupation; this can make the risk dimension less predictive of heroism (although the contribution of risk perception remains clearly significant).

We can note that the interaction between the two attributes is small and negligible (a comparison between additive and interactive models further emphasises this point; see 'Is the model additive or interactive'). Both Helpfulness and Risk contribute to heroism, but Helpfulness is consistently the better predictor. This holds for every job.


Conclusion

We aimed to assess the effect of our manipulations of risk exposure and altruism on heroism. Our two manipulation checks indicated overall success: the manipulations had significant positive effects on perceived bravery, selflessness, danger, and helpfulness.

Hypothesis 1 was globally supported. However, the effect of our Motivation manipulation was small. At the job level, our manipulations were only successful for welders and psychiatrists – the two least stereotyped occupations.

Replicating the previous study, we found strong support that the perception of heroism is strongly related to the perception of bravery and selflessness, and to the perception of risk and helpfulness. Again, this was not fully accounted for by general attitude (halo effect) and was observed fairly consistently across occupations.

The two components (physical risk and motivation) appear relatively independent: neither our manipulations nor the perceptions of physical danger/helpfulness interacted. Only the more subjective character attributions of selflessness and bravery interacted somewhat in predicting heroism (with the exception of police officers).

Regarding the Situation attribution (Danger & Helpfulness), heroism was largely driven by helpfulness. Effect sizes of danger, although significant in virtually all occupations (with the exception of police officers), were quite small.

summary(testmod<-lm(Heroism ~ Helpful_scale + Danger_scale + Brave_scale + Selfless_scale, data = Set))
## 
## Call:
## lm(formula = Heroism ~ Helpful_scale + Danger_scale + Brave_scale + 
##     Selfless_scale, data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.3177 -0.4796  0.1518  0.6823  3.6512 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     5.02132    0.02879 174.423  < 2e-16 ***
## Helpful_scale   0.49803    0.03544  14.052  < 2e-16 ***
## Danger_scale    0.20038    0.03385   5.920 4.08e-09 ***
## Brave_scale     0.28065    0.04050   6.930 6.47e-12 ***
## Selfless_scale  0.33371    0.04109   8.122 1.02e-15 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.062 on 1355 degrees of freedom
## Multiple R-squared:  0.4903, Adjusted R-squared:  0.4888 
## F-statistic: 325.9 on 4 and 1355 DF,  p-value: < 2.2e-16
ModIII <- car::Anova(testmod, type = "III")
eta_squared(ModIII, partial = TRUE)
## # Effect Size for ANOVA (Type III)
## 
## Parameter      | Eta2 (partial) |       95% CI
## ----------------------------------------------
## Helpful_scale  |           0.13 | [0.10, 1.00]
## Danger_scale   |           0.03 | [0.01, 1.00]
## Brave_scale    |           0.03 | [0.02, 1.00]
## Selfless_scale |           0.05 | [0.03, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].

When looking at eta squared in a competitive model including all manipulation checks, Helpfulness appears to be the largest contributor to heroism (13%), followed by Selflessness (5%), and finally Bravery (3%) and Danger (3%). Being a hero is first and foremost about being helpful, it would appear.

In the next sections, we explore those two aspects further: the additive vs interactive nature of altruism and risk taking, and the relative contribution of each component in each occupation.


Exploratory analyses

Some exploratory analyses were registered (see the 'Additional comments' section of the OSF registration).

The following exploratory analyses were conducted:

Further non-registered analyses:


Without Relevant Job participants

Additional sensitivity analyses were conducted, excluding participants whose own occupations are directly relevant or similar to the target occupations of the study.

# e.g., 

# Keep only participants whose own occupation matches none of the target occupations
df_Roles_Excl <- subset(Set, Set$Part_Job == "None of the above")

anova(mod<-lm(Heroism ~ Risk_dummy * Help_dummy * Job, data = df_Roles_Excl))
## Analysis of Variance Table
## 
## Response: Heroism
##                             Df  Sum Sq Mean Sq F value    Pr(>F)    
## Risk_dummy                   1   82.28  82.276 47.7464 7.789e-12 ***
## Help_dummy                   1    9.33   9.326  5.4120   0.02016 *  
## Job                          4  429.50 107.375 62.3123 < 2.2e-16 ***
## Risk_dummy:Help_dummy        1    3.29   3.293  1.9111   0.16710    
## Risk_dummy:Job               4   45.15  11.288  6.5508 3.243e-05 ***
## Help_dummy:Job               4   10.43   2.608  1.5134   0.19592    
## Risk_dummy:Help_dummy:Job    4    5.67   1.416  0.8219   0.51118    
## Residuals                 1222 2105.72   1.723                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(mod<-lm(Heroism ~ Brave_scale * Selfless_scale * Job, data = df_Roles_Excl))
## Analysis of Variance Table
## 
## Response: Heroism
##                                  Df  Sum Sq Mean Sq  F value    Pr(>F)    
## Brave_scale                       1  848.28  848.28 722.4792 < 2.2e-16 ***
## Selfless_scale                    1  226.26  226.26 192.7071 < 2.2e-16 ***
## Job                               4  110.65   27.66  23.5602 < 2.2e-16 ***
## Brave_scale:Selfless_scale        1   44.17   44.17  37.6191 1.159e-09 ***
## Brave_scale:Job                   4    2.12    0.53   0.4516  0.771299    
## Selfless_scale:Job                4    7.21    1.80   1.5350  0.189667    
## Brave_scale:Selfless_scale:Job    4   17.88    4.47   3.8076  0.004403 ** 
## Residuals                      1222 1434.79    1.17                       
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(mod<-lm(Heroism ~ Danger_scale * Helpful_scale * Job, data = df_Roles_Excl))
## Analysis of Variance Table
## 
## Response: Heroism
##                                  Df  Sum Sq Mean Sq  F value    Pr(>F)    
## Danger_scale                      1  564.65  564.65 470.1323 < 2.2e-16 ***
## Helpful_scale                     1  542.94  542.94 452.0542 < 2.2e-16 ***
## Job                               4   78.64   19.66  16.3682 4.536e-13 ***
## Danger_scale:Helpful_scale        1    0.09    0.09   0.0791 0.7785979    
## Danger_scale:Job                  4   12.57    3.14   2.6161 0.0337985 *  
## Helpful_scale:Job                 4   23.45    5.86   4.8810 0.0006601 ***
## Danger_scale:Helpful_scale:Job    4    1.36    0.34   0.2830 0.8891173    
## Residuals                      1222 1467.68    1.20                       
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#summary(mod)

Excluding these participants does not appear to change anything.

Correlations

library(PerformanceAnalytics)


PerformanceAnalytics::chart.Correlation(Set[, c("Heroism", "Selfless", "Brave", "Danger", "Helpfulness", "Attitude")], method = "spearman")
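
The correlation chart renders as a figure in the compiled report; as a numeric companion, the same Spearman correlations can be printed directly. A minimal sketch, assuming pairwise-complete data in the columns already used above:

# Numeric Spearman correlation matrix for the same variables
round(cor(Set[, c("Heroism", "Selfless", "Brave", "Danger", "Helpfulness", "Attitude")],
          method = "spearman", use = "pairwise.complete.obs"), 2)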


Is the model additive or interactive

It appears that all interactions are weak. Perhaps altruism and risk act on heroism independently rather than synergistically. In the following section, I compare additive and interactive solutions to see which model results in the best fit.

In this section, we compare an additive model (Outcome ~ IV1 + IV2) and an interactive model (Outcome ~ IV1 * IV2).
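
As a complement to the nested-model F-tests reported below, the same additive-vs-interactive question can be checked with information criteria. A minimal sketch, assuming the Risk_dummy and Help_dummy columns already present in Set (lower values favour that model):

# Information-criteria view of the additive vs interactive comparison
modAdd <- lm(Heroism ~ Risk_dummy + Help_dummy, data = Set)
modInt <- lm(Heroism ~ Risk_dummy * Help_dummy, data = Set)
AIC(modAdd, modInt)  # lower AIC = better trade-off between fit and complexity
BIC(modAdd, modInt)  # BIC penalises the extra interaction parameter more strongly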

H1 - Risk manip and Motiv manip

modInt<-lm(Heroism ~ Risk_dummy * Help_dummy, data = Set)
modAdd<-lm(Heroism ~ Risk_dummy + Help_dummy, data = Set)
paste0("Comparison between additive vs interactive models")
## [1] "Comparison between additive vs interactive models"
anova(modInt, modAdd)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Risk_dummy * Help_dummy
## Model 2: Heroism ~ Risk_dummy + Help_dummy
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1   1356 2906.4                           
## 2   1357 2907.3 -1  -0.93901 0.4381 0.5082
paste0("additive model")
## [1] "additive model"
anova(modAdd)
## Analysis of Variance Table
## 
## Response: Heroism
##              Df  Sum Sq Mean Sq F value    Pr(>F)    
## Risk_dummy    1   77.59  77.587 36.2140 2.271e-09 ***
## Help_dummy    1   11.49  11.489  5.3625   0.02072 *  
## Residuals  1357 2907.31   2.142                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
ModIII <- car::Anova(modAdd, type = "III")
eta_squared(ModIII, partial = TRUE)
## # Effect Size for ANOVA (Type III)
## 
## Parameter  | Eta2 (partial) |       95% CI
## ------------------------------------------
## Risk_dummy |           0.03 | [0.01, 1.00]
## Help_dummy |       3.94e-03 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
paste0("Interactive model")
## [1] "Interactive model"
anova(modInt)
## Analysis of Variance Table
## 
## Response: Heroism
##                         Df  Sum Sq Mean Sq F value    Pr(>F)    
## Risk_dummy               1   77.59  77.587 36.1990 2.289e-09 ***
## Help_dummy               1   11.49  11.489  5.3603   0.02075 *  
## Risk_dummy:Help_dummy    1    0.94   0.939  0.4381   0.50815    
## Residuals             1356 2906.37   2.143                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
ModIII <- car::Anova(modInt, type = "III")
eta_squared(ModIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter             | Eta2 (partial) |       95% CI
## -----------------------------------------------------
## Risk_dummy            |           0.03 | [0.01, 1.00]
## Help_dummy            |       3.95e-03 | [0.00, 1.00]
## Risk_dummy:Help_dummy |       3.23e-04 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].

Additive Model: Main Risk-manipulation effect (eta^2 = 3%), Main Motivation-manipulation effect (eta^2 < 1%). Interactive Model: Main Risk-manipulation effect (eta^2 = 3%), Main Motivation-manipulation effect (eta^2 < 1%), Interaction (eta^2 < 1%).

==> The two models do not differ significantly; the interaction adds nothing, so the additive model is preferred for parsimony.


H2 - Bravery and Selflessness

modInt<-lm(Heroism ~ Brave_scale * Selfless_scale, data = Set)
modAdd<-lm(Heroism ~ Brave_scale + Selfless_scale, data = Set)
paste0("Comparison between additive vs interactive models")
## [1] "Comparison between additive vs interactive models"
anova(modInt, modAdd)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Brave_scale * Selfless_scale
## Model 2: Heroism ~ Brave_scale + Selfless_scale
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1   1356 1766.3                                  
## 2   1357 1874.0 -1   -107.64 82.636 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
paste0("additive model")
## [1] "additive model"
anova(modAdd)
## Analysis of Variance Table
## 
## Response: Heroism
##                  Df  Sum Sq Mean Sq F value    Pr(>F)    
## Brave_scale       1  883.92  883.92  640.06 < 2.2e-16 ***
## Selfless_scale    1  238.47  238.47  172.68 < 2.2e-16 ***
## Residuals      1357 1874.00    1.38                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
ModIII <- car::Anova(modAdd, type = "III")
eta_squared(ModIII, partial = TRUE)
## # Effect Size for ANOVA (Type III)
## 
## Parameter      | Eta2 (partial) |       95% CI
## ----------------------------------------------
## Brave_scale    |           0.07 | [0.05, 1.00]
## Selfless_scale |           0.11 | [0.09, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
paste0("Interactive model")
## [1] "Interactive model"
anova(modInt)
## Analysis of Variance Table
## 
## Response: Heroism
##                              Df  Sum Sq Mean Sq F value    Pr(>F)    
## Brave_scale                   1  883.92  883.92 678.567 < 2.2e-16 ***
## Selfless_scale                1  238.47  238.47 183.068 < 2.2e-16 ***
## Brave_scale:Selfless_scale    1  107.64  107.64  82.636 < 2.2e-16 ***
## Residuals                  1356 1766.35    1.30                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
ModIII <- car::Anova(modInt, type = "III")
eta_squared(ModIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter                  | Eta2 (partial) |       95% CI
## ----------------------------------------------------------
## Brave_scale                |           0.11 | [0.09, 1.00]
## Selfless_scale             |           0.12 | [0.10, 1.00]
## Brave_scale:Selfless_scale |           0.06 | [0.04, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].

Additive Model: Main Bravery effect (eta^2 = 7%), Main Selflessness effect (eta^2 = 11%). Interactive Model: Main Bravery effect (eta^2 = 11%), Main Selflessness effect (eta^2 = 12%), Interaction (eta^2 = 6%).

==> The interactive model provides a better fit.

H3 - Risk and Helpfulness

modAdd<-lm(Heroism ~ scale(Danger) + scale(Helpfulness), data = Set)
modInt<-lm(Heroism ~ scale(Danger) * scale(Helpfulness), data = Set)
paste0("Comparison between additive vs interactive models")
## [1] "Comparison between additive vs interactive models"
anova(modInt, modAdd)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Danger) * scale(Helpfulness)
## Model 2: Heroism ~ scale(Danger) + scale(Helpfulness)
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1   1356 1812.3                           
## 2   1357 1815.9 -1   -3.5773 2.6767 0.1021
paste0("Additive model")
## [1] "Additive model"
anova(modAdd)
## Analysis of Variance Table
## 
## Response: Heroism
##                      Df  Sum Sq Mean Sq F value    Pr(>F)    
## scale(Danger)         1  604.52  604.52  451.76 < 2.2e-16 ***
## scale(Helpfulness)    1  576.00  576.00  430.45 < 2.2e-16 ***
## Residuals          1357 1815.86    1.34                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
ModIII <- car::Anova(modAdd, type = "III")
eta_squared(ModIII, partial = TRUE)
## # Effect Size for ANOVA (Type III)
## 
## Parameter          | Eta2 (partial) |       95% CI
## --------------------------------------------------
## scale(Danger)      |           0.07 | [0.05, 1.00]
## scale(Helpfulness) |           0.24 | [0.21, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].
paste0("Interactive model")
## [1] "Interactive model"
anova(modInt)
## Analysis of Variance Table
## 
## Response: Heroism
##                                    Df  Sum Sq Mean Sq  F value Pr(>F)    
## scale(Danger)                       1  604.52  604.52 452.3202 <2e-16 ***
## scale(Helpfulness)                  1  576.00  576.00 430.9811 <2e-16 ***
## scale(Danger):scale(Helpfulness)    1    3.58    3.58   2.6767 0.1021    
## Residuals                        1356 1812.28    1.34                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
ModIII <- car::Anova(modInt, type = "III")
eta_squared(ModIII, partial = TRUE)
## Type 3 ANOVAs only give sensible and informative results when covariates
##   are mean-centered and factors are coded with orthogonal contrasts (such
##   as those produced by `contr.sum`, `contr.poly`, or `contr.helmert`, but
##   *not* by the default `contr.treatment`).
## # Effect Size for ANOVA (Type III)
## 
## Parameter                        | Eta2 (partial) |       95% CI
## ----------------------------------------------------------------
## scale(Danger)                    |           0.07 | [0.05, 1.00]
## scale(Helpfulness)               |           0.24 | [0.21, 1.00]
## scale(Danger):scale(Helpfulness) |       1.97e-03 | [0.00, 1.00]
## 
## - One-sided CIs: upper bound fixed at [1.00].

Additive Model: Main Danger effect (eta^2 = 7%), Main Helpfulness effect (eta^2 = 24%). Interactive Model: Main Danger effect (eta^2 = 7%), Main Helpfulness effect (eta^2 = 24%), Interaction (eta^2 < 1%).

Here the pattern is clearly additive; adding the interaction does not improve the fit.

==> This replicates our previous study


Job Analysis and Manipulation: Interactive vs Additive components of heroism?

To assess the additivity vs interactivity of our manipulations within jobs, we built an additive and an interactive model and compared their fit to the data (as previously done) for each occupation.
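
Since the same pair of models is fitted for every occupation, a small wrapper could reduce the repetition. This is only a sketch, assuming the per-job data frames (e.g., Firef, Weld) and the variable names used throughout this script:

# Fit the additive and interactive models for one occupation and compare their fit
compare_add_int <- function(df, pred1 = "Risk_dummy", pred2 = "Help_dummy") {
  modAdd <- lm(reformulate(c(pred1, pred2), response = "Heroism"), data = df)
  modInt <- lm(reformulate(paste(pred1, "*", pred2), response = "Heroism"), data = df)
  anova(modAdd, modInt)  # a non-significant F means the interaction adds nothing
}
# e.g., compare_add_int(Firef); compare_add_int(Weld, "Brave_scale", "Selfless_scale")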

Firefighters

summary(modAdd<-lm(Heroism ~ Risk_dummy + Help_dummy, data = Firef))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy + Help_dummy, data = Firef)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.7986 -0.7927  0.2014  0.9220  1.2073 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 5.938262   0.069730  85.161   <2e-16 ***
## Risk_dummy  0.285241   0.139459   2.045   0.0418 *  
## Help_dummy  0.005829   0.139459   0.042   0.9667    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.152 on 270 degrees of freedom
## Multiple R-squared:  0.01527,    Adjusted R-squared:  0.007972 
## F-statistic: 2.093 on 2 and 270 DF,  p-value: 0.1253
summary(modInt<-lm(Heroism ~ Risk_dummy * Help_dummy, data = Firef))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy, data = Firef)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.7941 -0.7941  0.2029  0.9265  1.2059 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)           5.938246   0.069859  85.003   <2e-16 ***
## Risk_dummy            0.285273   0.139718   2.042   0.0421 *  
## Help_dummy            0.005861   0.139718   0.042   0.9666    
## Risk_dummy:Help_dummy 0.017690   0.279436   0.063   0.9496    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.154 on 269 degrees of freedom
## Multiple R-squared:  0.01528,    Adjusted R-squared:  0.004299 
## F-statistic: 1.391 on 3 and 269 DF,  p-value: 0.2457
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Risk_dummy + Help_dummy
## Model 2: Heroism ~ Risk_dummy * Help_dummy
##   Res.Df    RSS Df Sum of Sq     F Pr(>F)
## 1    270 358.39                          
## 2    269 358.38  1 0.0053391 0.004 0.9496
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "Risk_dummy = Help_dummy") 
## 
## Linear hypothesis test:
## Risk_dummy - Help_dummy = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ Risk_dummy + Help_dummy
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    271 361.04                           
## 2    270 358.39  1    2.6544 1.9998 0.1585

In firefighters, no model stands out as a better fit. In the additive model (chosen for parsimony), the Risk and Motivation manipulations provide statistically equal contributions to heroism – somewhat surprisingly, since only the Risk manipulation reaches significance on its own.

Nurses

summary(modAdd<-lm(Heroism ~ Risk_dummy + Help_dummy, data = Nurses))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy + Help_dummy, data = Nurses)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.5576 -0.6035  0.4424  1.3965  1.6376 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  5.48295    0.08773  62.500   <2e-16 ***
## Risk_dummy   0.19519    0.17546   1.112    0.267    
## Help_dummy   0.04594    0.17546   0.262    0.794    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.444 on 268 degrees of freedom
## Multiple R-squared:  0.004874,   Adjusted R-squared:  -0.002552 
## F-statistic: 0.6563 on 2 and 268 DF,  p-value: 0.5196
summary(modInt<-lm(Heroism ~ Risk_dummy * Help_dummy, data = Nurses))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy, data = Nurses)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.6866 -0.5373  0.3134  1.3134  1.7647 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            5.48436    0.08755  62.643   <2e-16 ***
## Risk_dummy             0.19611    0.17510   1.120    0.264    
## Help_dummy             0.04686    0.17510   0.268    0.789    
## Risk_dummy:Help_dummy -0.51033    0.35020  -1.457    0.146    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.441 on 267 degrees of freedom
## Multiple R-squared:  0.01273,    Adjusted R-squared:  0.001633 
## F-statistic: 1.147 on 3 and 267 DF,  p-value: 0.3305
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Risk_dummy + Help_dummy
## Model 2: Heroism ~ Risk_dummy * Help_dummy
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    268 558.94                           
## 2    267 554.53  1    4.4104 2.1236 0.1462
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "Risk_dummy = Help_dummy") 
## 
## Linear hypothesis test:
## Risk_dummy - Help_dummy = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ Risk_dummy + Help_dummy
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    269 559.68                           
## 2    268 558.94  1   0.74627 0.3578 0.5502

In nurses, there is no real difference in fit between the additive and interactive models. The Risk and Motivation manipulations provide equal contributions to the model.

Police

summary(modAdd<-lm(Heroism ~ Risk_dummy + Help_dummy, data = Pol))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy + Help_dummy, data = Pol)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.7074 -0.6137  0.2926  0.6595  2.6595 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   4.5240     0.0809  55.920   <2e-16 ***
## Risk_dummy    0.2419     0.1618   1.495    0.136    
## Help_dummy    0.1250     0.1618   0.773    0.440    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.334 on 269 degrees of freedom
## Multiple R-squared:  0.01042,    Adjusted R-squared:  0.003064 
## F-statistic: 1.416 on 2 and 269 DF,  p-value: 0.2444
summary(modInt<-lm(Heroism ~ Risk_dummy * Help_dummy, data = Pol))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy, data = Pol)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.7101 -0.6123  0.2899  0.6567  2.6567 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            4.52396    0.08105  55.816   <2e-16 ***
## Risk_dummy             0.24194    0.16210   1.493    0.137    
## Help_dummy             0.12492    0.16210   0.771    0.442    
## Risk_dummy:Help_dummy  0.01103    0.32421   0.034    0.973    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.337 on 268 degrees of freedom
## Multiple R-squared:  0.01043,    Adjusted R-squared:  -0.0006516 
## F-statistic: 0.9412 on 3 and 268 DF,  p-value: 0.4212
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Risk_dummy + Help_dummy
## Model 2: Heroism ~ Risk_dummy * Help_dummy
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    269 478.78                           
## 2    268 478.78  1 0.0020685 0.0012 0.9729
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "Risk_dummy = Help_dummy") 
## 
## Linear hypothesis test:
## Risk_dummy - Help_dummy = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ Risk_dummy + Help_dummy
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    270 479.24                           
## 2    269 478.78  1   0.46492 0.2612 0.6097

For police officers, again, the interactive model does not provide any advantage over the additive model. The Risk and Motivation manipulations provide equal contributions to the model.

Psychiatrists

summary(modAdd<-lm(Heroism ~ Risk_dummy + Help_dummy, data = Psych))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy + Help_dummy, data = Psych)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.9110 -0.9110  0.0890  0.9705  2.9705 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  4.47021    0.08546  52.307  < 2e-16 ***
## Risk_dummy   0.52206    0.17093   3.054  0.00248 ** 
## Help_dummy   0.35942    0.17091   2.103  0.03640 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.404 on 267 degrees of freedom
## Multiple R-squared:  0.04866,    Adjusted R-squared:  0.04154 
## F-statistic: 6.829 on 2 and 267 DF,  p-value: 0.001281
summary(modInt<-lm(Heroism ~ Risk_dummy * Help_dummy, data = Psych))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy, data = Psych)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.8382 -0.8382  0.1618  1.0455  3.0455 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            4.46966    0.08551  52.273   <2e-16 ***
## Risk_dummy             0.52210    0.17101   3.053   0.0025 ** 
## Help_dummy             0.36159    0.17101   2.114   0.0354 *  
## Risk_dummy:Help_dummy -0.29309    0.34202  -0.857   0.3922    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.405 on 266 degrees of freedom
## Multiple R-squared:  0.05128,    Adjusted R-squared:  0.04058 
## F-statistic: 4.793 on 3 and 266 DF,  p-value: 0.002866
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Risk_dummy + Help_dummy
## Model 2: Heroism ~ Risk_dummy * Help_dummy
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    267 526.39                           
## 2    266 524.94  1    1.4492 0.7344 0.3922
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "Risk_dummy = Help_dummy") 
## 
## Linear hypothesis test:
## Risk_dummy - Help_dummy = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ Risk_dummy + Help_dummy
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    268 527.29                           
## 2    267 526.39  1   0.89927 0.4561    0.5

For psychiatrists, here also, the interactive and additive models cannot be distinguished. The Risk and Motivation manipulations provide equal contributions to the model.

Welders

summary(modAdd<-lm(Heroism ~ Risk_dummy + Help_dummy, data = Weld))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy + Help_dummy, data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.0738 -0.7891  0.1043  0.9262  3.1043 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  4.68248    0.07888  59.359  < 2e-16 ***
## Risk_dummy   1.17807    0.15777   7.467 1.13e-12 ***
## Help_dummy   0.39546    0.15777   2.507   0.0128 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.306 on 271 degrees of freedom
## Multiple R-squared:  0.1856, Adjusted R-squared:  0.1796 
## F-statistic: 30.88 on 2 and 271 DF,  p-value: 8.267e-13
summary(modInt<-lm(Heroism ~ Risk_dummy * Help_dummy, data = Weld))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy, data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.0290 -0.8346  0.0588  0.9710  3.0588 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            4.68281    0.07898  59.288  < 2e-16 ***
## Risk_dummy             1.17807    0.15797   7.458 1.21e-12 ***
## Help_dummy             0.39546    0.15797   2.503   0.0129 *  
## Risk_dummy:Help_dummy  0.18052    0.31594   0.571   0.5682    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.307 on 270 degrees of freedom
## Multiple R-squared:  0.1866, Adjusted R-squared:  0.1776 
## F-statistic: 20.65 on 3 and 270 DF,  p-value: 4.489e-12
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Risk_dummy + Help_dummy
## Model 2: Heroism ~ Risk_dummy * Help_dummy
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    271 462.06                           
## 2    270 461.50  1   0.55803 0.3265 0.5682
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "Risk_dummy = Help_dummy") 
## 
## Linear hypothesis test:
## Risk_dummy - Help_dummy = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ Risk_dummy + Help_dummy
## 
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1    272 483.19                                  
## 2    271 462.06  1     21.13 12.393 0.0005054 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

For welders, there is no advantage for the interactive model. The Risk manipulation contributes significantly more to the model than the Motivation manipulation – note that controlling for credibility does not influence this result.


Job Analysis and Character attribution: Interactive vs Additive components of heroism?

Having established that the personality attributes (Bravery/Selflessness) are somewhat interactive while the situational evaluations (Risk/Helpfulness) are clearly additive, we now want to assess the relative contribution of each heroism element (Brave vs Selfless and Danger vs Helpful) in predicting heroism.

In the next section, for each occupation, I systematically 1) compare the fit of an interactive vs an additive model, and 2) compare the betas of the two predictors in the best model.

Regarding (1), I fit an interactive (Heroism ~ FeatureA * FeatureB) and an additive (Heroism ~ FeatureA + FeatureB) model. I then compare their RSS in an ANOVA to decide which model provides the best fit.

Regarding (2), I use the best model (see point 1 just above). Then, to test whether one feature of heroism contributes significantly more than the other, I use car::linearHypothesis: I constrain the betas of the two predictors to be equal (e.g., Brave = Selfless) and check whether this constrained model fits worse than the model where the betas are left free. If the unconstrained model provides a significantly better fit, it points to the relative superiority of one beta over the other in explaining the outcome.
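
What car::linearHypothesis does here can also be made explicit by hand: constraining the two betas to be equal amounts to regressing Heroism on the sum of the two predictors and comparing that restricted model with the free one. A sketch for the additive case only (assuming the Brave_scale and Selfless_scale columns used for firefighters); the same logic extends to the interactive model:

# 'Equal betas' restriction fitted explicitly, then compared with the unconstrained model
modFree  <- lm(Heroism ~ Brave_scale + Selfless_scale, data = Firef)
modEqual <- lm(Heroism ~ I(Brave_scale + Selfless_scale), data = Firef)  # one shared slope
anova(modEqual, modFree)  # a significant F means the two attributes contribute unequally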

Firefighters

summary(modAdd<-lm(Heroism ~ Brave_scale + Selfless_scale, data = Firef))
## 
## Call:
## lm(formula = Heroism ~ Brave_scale + Selfless_scale, data = Firef)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.9551 -0.4150  0.2968  0.5850  2.8556 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     5.42227    0.07127  76.077  < 2e-16 ***
## Brave_scale     0.32257    0.11490   2.807  0.00536 ** 
## Selfless_scale  0.61293    0.09898   6.193 2.19e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.925 on 270 degrees of freedom
## Multiple R-squared:  0.3652, Adjusted R-squared:  0.3605 
## F-statistic: 77.66 on 2 and 270 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ Brave_scale * Selfless_scale, data = Firef))
## 
## Call:
## lm(formula = Heroism ~ Brave_scale * Selfless_scale, data = Firef)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.0064 -0.5562  0.4438  0.4438  2.0932 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 5.12141    0.08912  57.464  < 2e-16 ***
## Brave_scale                 0.65106    0.12642   5.150 5.04e-07 ***
## Selfless_scale              0.52142    0.09609   5.426 1.28e-07 ***
## Brave_scale:Selfless_scale  0.22611    0.04327   5.226 3.48e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.883 on 269 degrees of freedom
## Multiple R-squared:  0.4237, Adjusted R-squared:  0.4173 
## F-statistic: 65.92 on 3 and 269 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ Brave_scale + Selfless_scale
## Model 2: Heroism ~ Brave_scale * Selfless_scale
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1    270 231.04                                  
## 2    269 209.74  1    21.296 27.312 3.477e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
paste0("The interactive model (smaller RSS, significant F-statistic) is the best fit.")
## [1] "The interactive model (smaller RSS, significant F-statistic) is the best fit."
car::linearHypothesis(modInt, "Brave_scale = Selfless_scale") 
## 
## Linear hypothesis test:
## Brave_scale - Selfless_scale = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ Brave_scale * Selfless_scale
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    270 210.05                           
## 2    269 209.74  1   0.30747 0.3943 0.5306

In firefighters, 1) the interactive model is a better fit than the additive model, and 2) in the interactive model (and also in the additive one), Bravery and Selflessness provide similar contributions to heroism.

Nurses

summary(modAdd<-lm(Heroism ~ scale(Brave) + scale(Selfless), data = Nurses))
## 
## Call:
## lm(formula = Heroism ~ scale(Brave) + scale(Selfless), data = Nurses)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.3011 -0.6296  0.1016  0.6989  4.7280 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      5.48339    0.07225  75.898  < 2e-16 ***
## scale(Brave)     0.33159    0.11337   2.925  0.00374 ** 
## scale(Selfless)  0.53942    0.11337   4.758  3.2e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.189 on 268 degrees of freedom
## Multiple R-squared:  0.3251, Adjusted R-squared:   0.32 
## F-statistic: 64.54 on 2 and 268 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ scale(Brave) * scale(Selfless), data = Nurses))
## 
## Call:
## lm(formula = Heroism ~ scale(Brave) * scale(Selfless), data = Nurses)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.5977 -0.4969  0.4023  0.5031  2.9977 
## 
## Coefficients:
##                              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                   5.30036    0.07737  68.510  < 2e-16 ***
## scale(Brave)                  0.52032    0.11409   4.560 7.78e-06 ***
## scale(Selfless)               0.63472    0.10974   5.784 2.04e-08 ***
## scale(Brave):scale(Selfless)  0.23868    0.04575   5.217 3.66e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.135 on 267 degrees of freedom
## Multiple R-squared:  0.3875, Adjusted R-squared:  0.3806 
## F-statistic: 56.31 on 3 and 267 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Brave) + scale(Selfless)
## Model 2: Heroism ~ scale(Brave) * scale(Selfless)
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1    268 379.09                                  
## 2    267 344.02  1    35.062 27.212 3.662e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
paste0("The interactive model (smaller RSS, significant F-statistic) is the best fit.")
## [1] "The interactive model (smaller RSS, significant F-statistic) is the best fit."
car::linearHypothesis(modInt, "scale(Brave) = scale(Selfless)") 
## 
## Linear hypothesis test:
## scale(Brave) - scale(Selfless) = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ scale(Brave) * scale(Selfless)
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    268 344.43                           
## 2    267 344.02  1    0.4038 0.3134 0.5761

For nurses: 1) the interactive model (RSS = 344.02) is a better fit than the additive model (RSS = 379.09), F(1, 267) = 27.21, p < .00001. 2) The linear hypothesis test indicates that constraining the two betas to be equal does not significantly worsen the fit. It can be concluded that Selflessness and Bravery contribute equally to heroism for nurses.

Police

summary(modAdd<-lm(Heroism ~scale(Brave) + scale(Selfless), data = Pol))
## 
## Call:
## lm(formula = Heroism ~ scale(Brave) + scale(Selfless), data = Pol)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.8167 -0.4534  0.1833  0.5759  3.6647 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      4.52574    0.06587  68.704  < 2e-16 ***
## scale(Brave)     0.41133    0.08430   4.879 1.83e-06 ***
## scale(Selfless)  0.45824    0.08430   5.436 1.23e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.086 on 269 degrees of freedom
## Multiple R-squared:  0.3438, Adjusted R-squared:  0.3389 
## F-statistic: 70.46 on 2 and 269 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ scale(Brave) * scale(Selfless), data = Pol))
## 
## Call:
## lm(formula = Heroism ~ scale(Brave) * scale(Selfless), data = Pol)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.7383 -0.4540  0.1713  0.5759  3.8594 
## 
## Coefficients:
##                              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                   4.55066    0.07291  62.416  < 2e-16 ***
## scale(Brave)                  0.39739    0.08614   4.613 6.15e-06 ***
## scale(Selfless)               0.45904    0.08437   5.441 1.20e-07 ***
## scale(Brave):scale(Selfless) -0.04020    0.05025  -0.800    0.424    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.087 on 268 degrees of freedom
## Multiple R-squared:  0.3453, Adjusted R-squared:  0.338 
## F-statistic: 47.12 on 3 and 268 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Brave) + scale(Selfless)
## Model 2: Heroism ~ scale(Brave) * scale(Selfless)
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    269 317.49                           
## 2    268 316.74  1   0.75653 0.6401 0.4244
paste0("The interactive model (smaller RSS, significant F-statistic) is the best fit. But it's really slim")
## [1] "The interactive model (smaller RSS, significant F-statistic) is the best fit. But it's really slim"
car::linearHypothesis(modInt, "scale(Brave) = scale(Selfless)") 
## 
## Linear hypothesis test:
## scale(Brave) - scale(Selfless) = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ scale(Brave) * scale(Selfless)
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    269 316.93                           
## 2    268 316.74  1   0.19168 0.1622 0.6875

For police officers, neither model stands out as better; the interactive model has only a slim, non-significant advantage (p = .42). In this model, Selflessness and Bravery appear to provide equal contributions to heroism.

Psychiatrists

summary(modAdd<-lm(Heroism ~ scale(Brave) + scale(Selfless), data = Psych))
## 
## Call:
## lm(formula = Heroism ~ scale(Brave) + scale(Selfless), data = Psych)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.3660 -0.6487  0.1919  0.7935  4.1995 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      4.47407    0.07313  61.180  < 2e-16 ***
## scale(Brave)     0.59979    0.09072   6.611 2.06e-10 ***
## scale(Selfless)  0.27008    0.09072   2.977  0.00318 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.202 on 267 degrees of freedom
## Multiple R-squared:  0.3032, Adjusted R-squared:  0.298 
## F-statistic:  58.1 on 2 and 267 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ scale(Brave) * scale(Selfless), data = Psych))
## 
## Call:
## lm(formula = Heroism ~ scale(Brave) * scale(Selfless), data = Psych)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.2357 -0.6637  0.2474  0.7643  3.2342 
## 
## Coefficients:
##                              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                   4.35846    0.07809  55.813  < 2e-16 ***
## scale(Brave)                  0.71198    0.09378   7.592 5.33e-13 ***
## scale(Selfless)               0.25024    0.08883   2.817 0.005212 ** 
## scale(Brave):scale(Selfless)  0.19676    0.05354   3.675 0.000287 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.174 on 266 degrees of freedom
## Multiple R-squared:  0.3369, Adjusted R-squared:  0.3294 
## F-statistic: 45.05 on 3 and 266 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Brave) + scale(Selfless)
## Model 2: Heroism ~ scale(Brave) * scale(Selfless)
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1    267 385.53                                  
## 2    266 366.90  1     18.63 13.507 0.0002875 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
paste0("The interactive model (smaller RSS, significant F-statistic) is the best fit.")
## [1] "The interactive model (smaller RSS, significant F-statistic) is the best fit."
car::linearHypothesis(modInt, "scale(Brave) = scale(Selfless)") 
## 
## Linear hypothesis test:
## scale(Brave) - scale(Selfless) = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ scale(Brave) * scale(Selfless)
## 
##   Res.Df    RSS Df Sum of Sq      F   Pr(>F)   
## 1    267 378.08                                
## 2    266 366.90  1    11.187 8.1104 0.004745 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

For psychiatrists, the interactive model (RSS = 366.90) provides a better fit than the additive model (RSS = 385.53), p = .0003. Bravery contributes significantly more to heroism than Selflessness, p = .005.

Welders

summary(modAdd<-lm(Heroism ~ scale(Brave) + scale(Selfless), data = Weld))
## 
## Call:
## lm(formula = Heroism ~ scale(Brave) + scale(Selfless), data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.6541 -0.6731  0.1077  0.8034  4.0906 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      4.68248    0.07409  63.196  < 2e-16 ***
## scale(Brave)     0.35643    0.08326   4.281 2.58e-05 ***
## scale(Selfless)  0.53434    0.08326   6.418 6.14e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.226 on 271 degrees of freedom
## Multiple R-squared:  0.2815, Adjusted R-squared:  0.2762 
## F-statistic: 53.09 on 2 and 271 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ scale(Brave) * scale(Selfless), data = Weld))
## 
## Call:
## lm(formula = Heroism ~ scale(Brave) * scale(Selfless), data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.8247 -0.7899  0.1753  0.7481  4.6123 
## 
## Coefficients:
##                              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                   4.59765    0.07732  59.465  < 2e-16 ***
## scale(Brave)                  0.45945    0.08770   5.239 3.26e-07 ***
## scale(Selfless)               0.52031    0.08193   6.351 9.03e-10 ***
## scale(Brave):scale(Selfless)  0.18801    0.05764   3.262  0.00125 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.205 on 270 degrees of freedom
## Multiple R-squared:  0.3087, Adjusted R-squared:  0.3011 
## F-statistic:  40.2 on 3 and 270 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Brave) + scale(Selfless)
## Model 2: Heroism ~ scale(Brave) * scale(Selfless)
##   Res.Df    RSS Df Sum of Sq      F  Pr(>F)   
## 1    271 407.66                               
## 2    270 392.20  1    15.453 10.638 0.00125 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
paste0("The interactive model (smaller RSS, significant F-statistic) is the best fit.")
## [1] "The interactive model (smaller RSS, significant F-statistic) is the best fit."
car::linearHypothesis(modInt, "scale(Brave) = scale(Selfless)") 
## 
## Linear hypothesis test:
## scale(Brave) - scale(Selfless) = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ scale(Brave) * scale(Selfless)
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    271 392.46                           
## 2    270 392.20  1   0.25945 0.1786 0.6729

For welders, the interactive model provides a better fit than the additive model (RSS = 392.20 vs RSS = 407.66), p = .00125. In the interactive model, Selflessness and Bravery provide equal contributions to heroism.

Job Analysis and Situation attribution: Interactive vs Additive components of heroism?

We repeat the same analyses but using our second manipulation check: Situation attribution (Physical danger & Helpfulness of the occupation).

Firefighters

summary(modAdd<-lm(Heroism ~ scale(Danger) + scale(Helpfulness), data = Firef))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) + scale(Helpfulness), data = Firef)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.3375 -0.3375  0.2788  0.6625  1.8097 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         5.93773    0.05913 100.426  < 2e-16 ***
## scale(Danger)       0.25907    0.07917   3.272  0.00121 ** 
## scale(Helpfulness)  0.42234    0.07917   5.335 2.03e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9769 on 270 degrees of freedom
## Multiple R-squared:  0.292,  Adjusted R-squared:  0.2867 
## F-statistic: 55.67 on 2 and 270 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ scale(Danger) * scale(Helpfulness), data = Firef))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) * scale(Helpfulness), data = Firef)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.3549 -0.3549  0.3133  0.6451  1.7724 
## 
## Coefficients:
##                                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                       5.92377    0.06204  95.482  < 2e-16 ***
## scale(Danger)                     0.26680    0.07990   3.339  0.00096 ***
## scale(Helpfulness)                0.45664    0.09152   4.990 1.09e-06 ***
## scale(Danger):scale(Helpfulness)  0.02112    0.02820   0.749  0.45454    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9777 on 269 degrees of freedom
## Multiple R-squared:  0.2935, Adjusted R-squared:  0.2856 
## F-statistic: 37.24 on 3 and 269 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Danger) + scale(Helpfulness)
## Model 2: Heroism ~ scale(Danger) * scale(Helpfulness)
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    270 257.68                           
## 2    269 257.14  1    0.5362 0.5609 0.4545
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "scale(Danger) = scale(Helpfulness)") 
## 
## Linear hypothesis test:
## scale(Danger) - scale(Helpfulness) = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ scale(Danger) + scale(Helpfulness)
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    271 258.90                           
## 2    270 257.68  1    1.2201 1.2784 0.2592

In firefighters, no model stands out as a better fit. In the additive model (chosen for parsimony), Danger and Helpfulness provide equal contributions to heroism.

Nurses

summary(modAdd<-lm(Heroism ~ scale(Danger) + scale(Helpfulness), data = Nurses))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) + scale(Helpfulness), data = Nurses)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.0613 -0.5000  0.1621  0.6580  2.9932 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         5.48339    0.06990  78.444  < 2e-16 ***
## scale(Danger)       0.34514    0.07893   4.373 1.76e-05 ***
## scale(Helpfulness)  0.66065    0.07893   8.371 3.21e-15 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.151 on 268 degrees of freedom
## Multiple R-squared:  0.3682, Adjusted R-squared:  0.3635 
## F-statistic: 78.08 on 2 and 268 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ scale(Danger) * scale(Helpfulness), data = Nurses))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) * scale(Helpfulness), data = Nurses)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.0583 -0.5187  0.1772  0.6729  3.0262 
## 
## Coefficients:
##                                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                       5.49175    0.07663  71.667  < 2e-16 ***
## scale(Danger)                     0.34346    0.07931   4.331 2.10e-05 ***
## scale(Helpfulness)                0.65103    0.08681   7.499 9.52e-13 ***
## scale(Danger):scale(Helpfulness) -0.01819    0.06774  -0.268    0.789    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.153 on 267 degrees of freedom
## Multiple R-squared:  0.3683, Adjusted R-squared:  0.3612 
## F-statistic:  51.9 on 3 and 267 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Danger) + scale(Helpfulness)
## Model 2: Heroism ~ scale(Danger) * scale(Helpfulness)
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    268 354.89                           
## 2    267 354.79  1  0.095766 0.0721 0.7886
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "scale(Danger) = scale(Helpfulness)") 
## 
## Linear hypothesis test:
## scale(Danger) - scale(Helpfulness) = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ scale(Danger) + scale(Helpfulness)
## 
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)  
## 1    269 362.13                             
## 2    268 354.89  1    7.2413 5.4684 0.0201 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

For nurses, there is no real difference in fit between the additive and interactive models. We observe a just-significant greater contribution of Helpfulness than Danger, p = .020.

Police

summary(modAdd<-lm(Heroism ~ scale(Danger) + scale(Helpfulness), data = Pol))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) + scale(Helpfulness), data = Pol)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.7606 -0.5506  0.0884  0.7790  2.9778 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         4.52574    0.06390  70.824   <2e-16 ***
## scale(Danger)       0.12546    0.07922   1.584    0.114    
## scale(Helpfulness)  0.74619    0.07922   9.419   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.054 on 269 degrees of freedom
## Multiple R-squared:  0.3825, Adjusted R-squared:  0.3779 
## F-statistic:  83.3 on 2 and 269 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ scale(Danger) * scale(Helpfulness), data = Pol))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) * scale(Helpfulness), data = Pol)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.8871 -0.5108  0.0161  0.8233  3.0095 
## 
## Coefficients:
##                                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                       4.48612    0.07166  62.599   <2e-16 ***
## scale(Danger)                     0.14025    0.08008   1.751    0.081 .  
## scale(Helpfulness)                0.76996    0.08153   9.444   <2e-16 ***
## scale(Danger):scale(Helpfulness)  0.06749    0.05547   1.217    0.225    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.053 on 268 degrees of freedom
## Multiple R-squared:  0.3859, Adjusted R-squared:  0.379 
## F-statistic: 56.13 on 3 and 268 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Danger) + scale(Helpfulness)
## Model 2: Heroism ~ scale(Danger) * scale(Helpfulness)
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    269 298.77                           
## 2    268 297.13  1    1.6416 1.4806 0.2247
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "scale(Danger) = scale(Helpfulness)") 
## 
## Linear hypothesis test:
## scale(Danger) - scale(Helpfulness) = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ scale(Danger) + scale(Helpfulness)
## 
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1    270 320.23                                  
## 2    269 298.77  1    21.454 19.316 1.596e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

For police officers, again, the interactive model does not provide any advantage over the additive model. Helpfulness contributes significantly more than Danger to the heroism of police officers (p < .001).

Psychiatrists

summary(modAdd<-lm(Heroism ~ scale(Danger) + scale(Helpfulness), data = Psych))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) + scale(Helpfulness), data = Psych)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.4245 -0.6204  0.1836  0.7033  2.3796 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         4.47407    0.07219  61.975   <2e-16 ***
## scale(Danger)       0.18109    0.07909   2.290   0.0228 *  
## scale(Helpfulness)  0.72224    0.07909   9.132   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.186 on 267 degrees of freedom
## Multiple R-squared:  0.321,  Adjusted R-squared:  0.3159 
## F-statistic: 63.11 on 2 and 267 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ scale(Danger) * scale(Helpfulness), data = Psych))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) * scale(Helpfulness), data = Psych)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.4511 -0.6129  0.1978  0.6907  2.3997 
## 
## Coefficients:
##                                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                       4.46569    0.07640  58.454  < 2e-16 ***
## scale(Danger)                     0.18022    0.07926   2.274   0.0238 *  
## scale(Helpfulness)                0.73211    0.08436   8.678  4.1e-16 ***
## scale(Danger):scale(Helpfulness)  0.02080    0.06115   0.340   0.7340    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.188 on 266 degrees of freedom
## Multiple R-squared:  0.3213, Adjusted R-squared:  0.3136 
## F-statistic: 41.97 on 3 and 266 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Danger) + scale(Helpfulness)
## Model 2: Heroism ~ scale(Danger) * scale(Helpfulness)
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    267 375.71                           
## 2    266 375.54  1   0.16333 0.1157  0.734
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "scale(Danger) = scale(Helpfulness)") 
## 
## Linear hypothesis test:
## scale(Danger) - scale(Helpfulness) = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ scale(Danger) + scale(Helpfulness)
## 
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1    268 399.16                                  
## 2    267 375.71  1    23.452 16.667 5.892e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

For psychiatrists, here also, the interactive and additive models cannot be distinguished. Helpfulness contributes significantly more to heroism than Danger (p < .001).

Welders

summary(modAdd<-lm(Heroism ~ scale(Danger) + scale(Helpfulness), data = Weld))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) + scale(Helpfulness), data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.7382 -0.7222  0.0124  0.7630  3.0146 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         4.68248    0.07322  63.948  < 2e-16 ***
## scale(Danger)       0.43648    0.07606   5.738 2.56e-08 ***
## scale(Helpfulness)  0.55001    0.07606   7.231 4.91e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.212 on 271 degrees of freedom
## Multiple R-squared:  0.2983, Adjusted R-squared:  0.2931 
## F-statistic: 57.61 on 2 and 271 DF,  p-value: < 2.2e-16
summary(modInt<-lm(Heroism ~ scale(Danger) * scale(Helpfulness), data = Weld))
## 
## Call:
## lm(formula = Heroism ~ scale(Danger) * scale(Helpfulness), data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.6892 -0.6892  0.0038  0.7559  2.9158 
## 
## Coefficients:
##                                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                       4.69611    0.07579  61.961  < 2e-16 ***
## scale(Danger)                     0.42995    0.07669   5.606 5.11e-08 ***
## scale(Helpfulness)                0.54876    0.07616   7.206 5.78e-12 ***
## scale(Danger):scale(Helpfulness) -0.05174    0.07329  -0.706    0.481    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.213 on 270 degrees of freedom
## Multiple R-squared:  0.2996, Adjusted R-squared:  0.2918 
## F-statistic:  38.5 on 3 and 270 DF,  p-value: < 2.2e-16
anova(modAdd, modInt)
## Analysis of Variance Table
## 
## Model 1: Heroism ~ scale(Danger) + scale(Helpfulness)
## Model 2: Heroism ~ scale(Danger) * scale(Helpfulness)
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    271 398.12                           
## 2    270 397.39  1   0.73357 0.4984 0.4808
paste0("Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive")
## [1] "Model do not significantly differ. Consistency motive: we keep interaction; or Parsimony motive: we go with additive"
car::linearHypothesis(modAdd, "scale(Danger) = scale(Helpfulness)") 
## 
## Linear hypothesis test:
## scale(Danger) - scale(Helpfulness) = 0
## 
## Model 1: restricted model
## Model 2: Heroism ~ scale(Danger) + scale(Helpfulness)
## 
##   Res.Df    RSS Df Sum of Sq     F Pr(>F)
## 1    272 399.41                          
## 2    271 398.12  1    1.2942 0.881 0.3488

For welders, there is no advantage for an interactive model. Helpfulness and Danger provide statistically indistinguishable contributions to heroism (p = .349).


Final Conclusion

Hurray, this second attempt was quite a success. The manipulation checks indicate that the manipulations worked, and we observed all the principal registered effects.

However, looking in detail, the manipulation worked for psychiatrists and welders, but not for the other occupations, indicating that heroism ratings might be driven mainly by strongly held opinions about how heroic an occupation is. Because we hold strong views of how heroic firefighters, nurses, and police officers are, our manipulation was unsuccessful for these occupations.

Moreover, while we did observe a significant effect of motivation, it was close to non-significance. In contrast, the effect of risk exposure was quite large, perhaps reflecting an effect stemming from the contrast with boredom: can we really imagine a hero being bored?

Replicating the previous findings, both the character evaluation (selflessness and bravery) and the situation evaluation (danger and helpfulness) predicted heroism. However, in contrast to the previous findings, the contributions were more balanced, with the exception of psychiatrists’ heroism being mainly predicted by bravery rather than selflessness (H2). Welders also showed a strong effect of the risk manipulation relative to the motivation condition (H1). Replicating the previous study, Helpfulness is a stronger predictor of heroism than physical danger: one can have a dangerous profession without necessarily being typecast as a hero. This is further emphasised by Danger being the variable of the set least correlated with heroism (see correlation matrix).
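
As a quick check of that last point, the pairwise correlations can be reproduced in a single call. A minimal sketch, assuming the full Set data frame with the numeric rating columns created in the Appendix:

# Pairwise correlations between heroism and the four evaluation items
round(cor(Set[, c("Heroism", "Danger", "Helpfulness", "Brave", "Selfless")],
          use = "pairwise.complete.obs"), 2)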

The plots below describe the Beta comparisons reported in the exploratory analyses:

# Right, so we're going to plot the beta differences for each occupation
# We create a list of df
datasets <- list(
  Firefighters   = Firef,
  Nurses     = Nurses,
  Police       = Pol,
  Psychiatrists  = Psych,
  Welders = Weld
)

# Storing the results of the models
results <- list()

# for loop: in each dataset, fit the model, and extract the betas to subtract them
for(model_name in names(datasets)) {
  data_ <- datasets[[model_name]]
  
  # Fitting mod
  mod <- lm(Heroism ~ Brave * Selfless, data = data_)
  
  # Extract coefficients and the variance-covariance matrix (used for 95%CI)
  coefs <- coef(mod)
  vc   <- vcov(mod)
  
  # Compute the difference between coefficients "Brave" and "Selfless"
  diff_coef <- coefs["Brave"] - coefs["Selfless"]
  
  # Compute standard error of the difference:
  # Var(Brave - Selfless) = Var(Brave) + Var(Selfless) - 2*Cov(Brave,Selfless)
  se_diff <- sqrt(vc["Brave", "Brave"] + vc["Selfless", "Selfless"] - 2 * vc["Brave", "Selfless"])
  
  # Compute the 95% confidence interval using a normal approximation (z = 1.96)
  crit <- qnorm(0.975)
  lower <- diff_coef - crit * se_diff
  upper <- diff_coef + crit * se_diff
  
  # Store the results in a data frame
  results[[model_name]] <- data.frame(
    Model  = model_name,
    diff   = diff_coef,
    lower  = lower,
    upper  = upper
  )
}

# Combine the list of data frames into one
df_results <- do.call(rbind, results)
df_results$Model <- factor(df_results$Model, levels = names(datasets))

ggplot(df_results, aes(x = Model, y = diff)) +
  geom_point(size = 3) +
  geom_errorbar(aes(ymin = lower, ymax = upper), width = 0.2) +
  geom_hline(yintercept = 0, linetype = "dashed", color = "red") +
  labs(
    title = "Relative contribution of Bravery vs Selflessness in predicting Heroism",
    y = "Coefficient Difference (Brave - Selfless)",
    x = "Occupation"
  ) +
  theme_minimal()

# Right, so we're going to plot the beta differences for each occupation
# First, standardise Danger and Helpfulness within each occupation so that the
# coefficients compared below are on the same scale
Firef$Danger <- scale(Firef$Danger)
Firef$Helpfulness <- scale(Firef$Helpfulness)

Nurses$Danger <- scale(Nurses$Danger)
Nurses$Helpfulness <- scale(Nurses$Helpfulness)

Weld$Danger <- scale(Weld$Danger)
Weld$Helpfulness <- scale(Weld$Helpfulness)

Pol$Danger <- scale(Pol$Danger)
Pol$Helpfulness <- scale(Pol$Helpfulness)

Psych$Danger <- scale(Psych$Danger)
Psych$Helpfulness <- scale(Psych$Helpfulness)

# We create the list of df AFTER scaling, so the loop below uses the standardised variables
datasets <- list(
  Firefighters   = Firef,
  Nurses     = Nurses,
  Police       = Pol,
  Psychiatrists  = Psych,
  Welders = Weld
)

# Storing the results of the models
results <- list()


# for loop: in each dataset, fit the model, and extract the betas to subtract them
for(model_name in names(datasets)) {
  data_ <- datasets[[model_name]]
  
  # Fitting mod
  mod <- lm(Heroism ~ Danger + Helpfulness, data = data_)
  
  # Extract coefficients and the variance-covariance matrix (used for 95%CI)
  coefs <- coef(mod)
  vc   <- vcov(mod)
  
  # Compute the difference between coefficients "Danger" and "Helpfulness"
  diff_coef <- coefs["Danger"] - coefs["Helpfulness"]
  
  # Compute standard error of the difference:
  se_diff <- sqrt(vc["Danger", "Danger"] + vc["Helpfulness", "Helpfulness"] - 2 * vc["Danger", "Helpfulness"])
  
  # Compute the 95% confidence interval using a normal approximation (z = 1.96)
  crit <- qnorm(0.975)
  lower <- diff_coef - crit * se_diff
  upper <- diff_coef + crit * se_diff
  
  # Store the results in a data frame
  results[[model_name]] <- data.frame(
    Model  = model_name,
    diff   = diff_coef,
    lower  = lower,
    upper  = upper
  )
}

# Combine the list of data frames into one
df_results <- do.call(rbind, results)
df_results$Model <- factor(df_results$Model, levels = names(datasets))

ggplot(df_results, aes(x = Model, y = diff)) +
  geom_point(size = 3) +
  geom_errorbar(aes(ymin = lower, ymax = upper), width = 0.2) +
  geom_hline(yintercept = 0, linetype = "dashed", color = "red") +
  labs(
    title = "Relative contribution of Risk vs Helpfulness in predicting Heroism",
    y = "Coefficient Difference (Risk - Helpfulness)",
    x = "Occupation"
  ) +
  theme_minimal()

# Storing the results of the models
results <- list()

# for loop: in each dataset, fit the model, and extract the betas to subtract them
for(model_name in names(datasets)) {
  data_ <- datasets[[model_name]]
  
  # Fitting mod
  mod <- lm(Heroism ~ Risk_dummy * Help_dummy, data = data_)
  
  # Extract coefficients and the variance-covariance matrix (used for 95%CI)
  coefs <- coef(mod)
  vc   <- vcov(mod)
  
  # Compute the difference between coefficients "Risk_dummy" and "Help_dummy"
  diff_coef <- coefs["Risk_dummy"] - coefs["Help_dummy"]
  
  # Compute standard error of the difference:
  # Var(Risk_dummy - Help_dummy) = Var(Risk_dummy) + Var(Help_dummy) - 2*Cov(Risk_dummy, Help_dummy)
  se_diff <- sqrt(vc["Risk_dummy", "Risk_dummy"] + vc["Help_dummy", "Help_dummy"] - 2 * vc["Risk_dummy", "Help_dummy"])
  
  # Compute the 95% confidence interval using a normal approximation (z = 1.96)
  crit <- qnorm(0.975)
  lower <- diff_coef - crit * se_diff
  upper <- diff_coef + crit * se_diff
  
  # Store the results in a data frame
  results[[model_name]] <- data.frame(
    Model  = model_name,
    diff   = diff_coef,
    lower  = lower,
    upper  = upper
  )
}

# Combine the list of data frames into one
df_results <- do.call(rbind, results)
df_results$Model <- factor(df_results$Model, levels = names(datasets))

ggplot(df_results, aes(x = Model, y = diff)) +
  geom_point(size = 3) +
  geom_errorbar(aes(ymin = lower, ymax = upper), width = 0.2) +
  geom_hline(yintercept = 0, linetype = "dashed", color = "red") +
  labs(
    title = "Relative contribution of Risk condition vs Motivation condition in predicting Heroism",
    y = "Coefficient Difference (Risk cond - Motivation)",
    x = "Occupation"
  ) +
  theme_minimal()

Any questions can be addressed to Jean Monéger (my contact details can easily be found using Google).

Overall, our manipulation worked, although the Motivation manipulation only barely did.





Appendix

Data wrangling

Below is the code for organising the dataset from the Qualtrics output to the neat dataframe used to run this document:

  1. isolating Demographics
  2. putting the reported participants’ occupation in a single column (Part_job)
  3. Changing the response labels from characters to numeric: Very negative -> 1, etc.
  4. Recoding condition in two different ways to enable verification that there was no bug (there were none of course)
  5. ‘Aligning columns’: different columns code for the same variables depending on the experimental condition. So, we recode things to have all the heroism ratings in the same column, etc.
  6. Conditions are dummy coded: Boredom -> -0.5, Risk -> 0.5 / Self-improvement -> -0.5, Selflessness -> 0.5

The wrangling process requires that you download the datafile Raw_Qualtrics_Data_HeroFactory_April2025.csv available from our OSF webpage.

Set <- read.csv("Raw_Qualtrics_Data_HeroFactory_April2025.csv", comment.char="#")
Set <- subset(Set, Set$Q233 != "No, I won't")
Set <- Set[-c(1:2),]
Set <- subset(Set, Set$FailedComp == "No") # Keep only participants who passed the completion checks

# Two participants timed out but completed the survey. Let's remove them to not threaten representativity and stick closely to registration:

Set <- subset(Set, !(Set$Prol_ID %in% c("5cb6f38ffdc7fa0013f809a3", "5cf84c0f4b639a0016a45a54")))  

QualCheck <- Set[, c(21:26)] # Quality evaluation is Good. There are 8 participants flagged as duplicated, but I do not trust this opaque quality check

Demographics <- Set[, 38:48] # Keeping demographics aside
# Combine the reported participants' occupations (spread over several Job_match columns) into a single JOB column:
job_cols <- c("Job_match_1", "Job_match_2", "Job_match_3", 
              "Job_match_4", "Job_match_5", "Job_match_7", "Job_match_6")

Demographics$JOB <- apply(Demographics[, job_cols], 1, function(x) {
  # Remove empty strings
  reported_jobs <- x[x != ""]
  # If no jobs reported, assign NA; else paste the jobs together
  if (length(reported_jobs) == 0) NA else paste(reported_jobs, collapse = ", ")
})

Set$Credibility[Set$Credibility == "Very unbelievable"] <- 1
Set$Credibility[Set$Credibility == "Quite unbelievable"] <- 2
Set$Credibility[Set$Credibility == "Somewhat unbelievable"] <- 3
Set$Credibility[Set$Credibility == "Neutral"] <- 4
Set$Credibility[Set$Credibility == "Somewhat believable"] <- 5
Set$Credibility[Set$Credibility == "Quite believable"] <- 6
Set$Credibility[Set$Credibility == "Very believable"] <- 7

Set <- Set %>%
  dplyr::mutate(across(
    where(is.character),
    ~ dplyr::case_match(.,
                        "83% of the workers reported feeling bored most of the time" ~ "Bored",
                        "83% of the workers reported that their life was at risk in the past 12 months" ~ "Risk",
                        "74% of the workers identified \"helping people\" as their primary motivation" ~ "Help",
                        "74% of the workers identified \"self-improvement\" as their primary motivation" ~ "Self",
                        .default = .
    )
  ))


# I need to code the condition. It can be retrieved using the non-empty heroism ratings, e.g., NRH is Nurses, Risk, Help. Note that it can also be recomputed from the passed comprehension checks (AC___1, AC____2, and AC_____3)

# I concatenate Comprehension checks (AC1, AC2, and AC3)
Set_long <- Set %>%
  # Gather all columns matching the pattern
  pivot_longer(
    cols = matches("^AC.*[123]$"),
    names_to = c("base", "suffix"),
    names_pattern = "^(AC.*?)([123])$",
    values_to = "value"
  ) %>%
  # Now spread the suffix into separate columns.
  pivot_wider(
    names_from = suffix,
    values_from = value,
    names_prefix = "AC"
  )



# Identify all columns ending with "h_1" and use them to derive the condition (Condition_2)
h1_cols <- grep("_h_1$", names(Set_long), value = TRUE)
# Creating the condition
Set_long_filtered <- Set_long %>%
  rowwise() %>%  # process each row individually
  mutate(
    # Count how many of the h1_cols are non-empty ("")
    non_empty_count = sum(c_across(all_of(h1_cols)) != ""),
    
    # Determine the Condition based on non-empty count
    Condition_2 = if (non_empty_count == 1) {
      # Get the name of the column that is non-empty
      selected_col <- h1_cols[which(c_across(all_of(h1_cols)) != "")]
      # Remove the "_h_1" suffix to get the condition (e.g., "NRS" from "NRS_h_1")
      str_remove(selected_col, "_h_1")
    } else {
      "Error"
    }
  ) %>%
  ungroup() %>%  # exit rowwise mode
  select(-non_empty_count) %>% distinct(ResponseId, .keep_all=T) # remove the helper column if no longer needed



# Now I'll neatly create the main data frame. I create a helper function that extracts the single non-empty value and apply it row-wise to the columns ending with our codes: 
# - _h_1 is the heroism rating, _m1_1 is the first item of MC1, _m1_2 is the second item of MC1, _at is attitude, etc.


# Helper function: Given a vector, return the only non-empty element (or NA if none or several are found)
extract_value <- function(x) {
  non_empty <- x[x != ""]
  if(length(non_empty) == 1) return(non_empty)
  else return(NA)
}

# Create the final data frame
final_df <- Set_long_filtered %>%
  mutate(
    # Extract the non-empty value from each group of columns:
    Heroism = apply(select(., ends_with("_h_1")), 1, extract_value),
    Danger        = apply(select(., ends_with("m1_1")), 1, extract_value),
    Helpfulness        = apply(select(., ends_with("m1_2")), 1, extract_value),
    Selfless        = apply(select(., ends_with("m2_1")), 1, extract_value),
    Brave        = apply(select(., ends_with("m2_2")), 1, extract_value),
    Attitude     = apply(select(., matches("(_at)$")), 1, extract_value)
  ) %>%
  # Select only the required columns in the final data frame
  select(ResponseId, Condition, Condition_2, Heroism, Danger, Helpfulness, Selfless, Brave, Attitude, Credibility, Gender, Age)

# Recoding values:


final_df <- final_df %>%
  # 1. Split Condition into Job, Risk, and Help
  mutate(
    Job  = if_else(str_sub(Condition_2, 1, 2) == "Ps", "Ps", str_sub(Condition_2, 1, 1)),
    Risk = if_else(str_sub(Condition_2, 1, 2) == "Ps", str_sub(Condition_2, 3, 3), str_sub(Condition_2, 2, 2)),
    Help = if_else(str_sub(Condition_2, 1, 2) == "Ps", str_sub(Condition_2, 4, 4), str_sub(Condition_2, 3, 3))
  ) %>%
  # 2. Dummy code Risk and Help: R -> 0.5, B -> -0.5; H -> 0.5, S -> -0.5
  mutate(
    Risk_dummy = case_when(
      Risk == "R" ~ 0.5,
      Risk == "B" ~ -0.5,
      TRUE ~ NA_real_
    ),
    Help_dummy = case_when(
      Help == "H" ~ 0.5,
      Help == "S" ~ -0.5,
      TRUE ~ NA_real_
    )
  ) %>%
  # 3. Recode rating items (HeroismScore, MC1_1, MC1_2, MC2_1, MC2_2)
  mutate_at(vars(Heroism, Danger, Helpfulness, Selfless, Brave), 
            ~ case_when(
              . == "1 - Strongly disagree" ~ "1",
              . == "7 - Strongly agree"    ~ "7",
              TRUE ~ .
            )) %>%
  # Optionally convert these recoded columns to numeric
  mutate_at(vars(Heroism, Danger, Helpfulness, Selfless, Brave), as.numeric) %>%
  # 4. Recode Attitude values
  mutate(
    Attitude = case_when(
      Attitude == "Very negative"    ~ 1,
      Attitude == "Quite negative" ~ 2,
      Attitude == "Somewhat negative" ~ 3,
      Attitude == "Neutral"           ~ 4,
      Attitude == "Somewhat positive" ~ 5,
      Attitude == "Quite positive" ~ 6,
      Attitude == "Very positive" ~ 7,
      TRUE ~ NA_real_
    )
  ) 

final_df$Part_Job <- Demographics$JOB
Set<-final_df

#write.csv(Set, "DF_HeroFactory_April2025.csv", row.names = F)

Credibility test

Below are some additional insights into the credibility of the manipulation: how it 1) is influenced by the condition, and 2) influences our results. There is solid ground to consider that the manipulation was:

  • Credible
  • Weaker for nurses and firefighters
  • The more credible, the better the effect of our manipulation.
summary(lm(Credibility ~ Job, data = Set))
## 
## Call:
## lm(formula = Credibility ~ Job, data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.2747 -0.9926  0.0996  1.0074  2.0996 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  5.27473    0.09188  57.407   <2e-16 ***
## JobN        -0.37436    0.13018  -2.876   0.0041 ** 
## JobP        -0.19384    0.13006  -1.490   0.1364    
## JobPs       -0.28213    0.13030  -2.165   0.0305 *  
## JobW        -0.03385    0.12982  -0.261   0.7943    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.518 on 1355 degrees of freedom
## Multiple R-squared:  0.008818,   Adjusted R-squared:  0.005892 
## F-statistic: 3.014 on 4 and 1355 DF,  p-value: 0.01728
summary(lm(Credibility ~ Risk_dummy * Help_dummy, data = Firef))
## 
## Call:
## lm(formula = Credibility ~ Risk_dummy * Help_dummy, data = Firef)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.3235 -0.3235  0.6765  0.7794  2.6812 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            5.27824    0.08766  60.211  < 2e-16 ***
## Risk_dummy             1.01705    0.17532   5.801 1.85e-08 ***
## Help_dummy             0.91411    0.17532   5.214 3.69e-07 ***
## Risk_dummy:Help_dummy  0.02472    0.35065   0.071    0.944    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.448 on 269 degrees of freedom
## Multiple R-squared:  0.185,  Adjusted R-squared:  0.1759 
## F-statistic: 20.35 on 3 and 269 DF,  p-value: 6.455e-12
by(Firef$Credibility, Firef$Risk_dummy, mean)
## Firef$Risk_dummy: -0.5
## [1] 4.766423
## ------------------------------------------------------------ 
## Firef$Risk_dummy: 0.5
## [1] 5.786765
by(Firef$Credibility, Firef$Risk_dummy, sd)
## Firef$Risk_dummy: -0.5
## [1] 1.60084
## ------------------------------------------------------------ 
## Firef$Risk_dummy: 0.5
## [1] 1.42157
by(Firef$Credibility, Firef$Help_dummy, mean)
## Firef$Help_dummy: -0.5
## [1] 4.817518
## ------------------------------------------------------------ 
## Firef$Help_dummy: 0.5
## [1] 5.735294
by(Firef$Credibility, Firef$Help_dummy, sd)
## Firef$Help_dummy: -0.5
## [1] 1.737218
## ------------------------------------------------------------ 
## Firef$Help_dummy: 0.5
## [1] 1.289475
ggplot(Firef, aes(x = interaction(Risk, Help, sep = " & "), y = Credibility)) +
  geom_boxplot() +
  labs(x = "Condition (Risk_dummy & Help_dummy)",
       y = "Credibility",
       title = "FIREFIGHTERS: Credibility by Risk and Help Conditions") +
  theme_minimal()

summary(lm(Credibility ~ Risk_dummy * Help_dummy, data = Psych))
## 
## Call:
## lm(formula = Credibility ~ Risk_dummy * Help_dummy, data = Psych)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.7794 -1.0211  0.2206  1.0896  2.8030 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)           4.986200   0.087312  57.108  < 2e-16 ***
## Risk_dummy            0.864983   0.174624   4.953  1.3e-06 ***
## Help_dummy            0.717459   0.174624   4.109  5.3e-05 ***
## Risk_dummy:Help_dummy 0.007963   0.349248   0.023    0.982    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.434 on 266 degrees of freedom
## Multiple R-squared:  0.1339, Adjusted R-squared:  0.1241 
## F-statistic: 13.71 on 3 and 266 DF,  p-value: 2.43e-08
by(Psych$Credibility, Psych$Risk_dummy, mean)
## Psych$Risk_dummy: -0.5
## [1] 4.556391
## ------------------------------------------------------------ 
## Psych$Risk_dummy: 0.5
## [1] 5.416058
by(Psych$Credibility, Psych$Risk_dummy, sd)
## Psych$Risk_dummy: -0.5
## [1] 1.611575
## ------------------------------------------------------------ 
## Psych$Risk_dummy: 0.5
## [1] 1.326441
by(Psych$Credibility, Psych$Help_dummy, mean)
## Psych$Help_dummy: -0.5
## [1] 4.637037
## ------------------------------------------------------------ 
## Psych$Help_dummy: 0.5
## [1] 5.348148
by(Psych$Credibility, Psych$Help_dummy, sd)
## Psych$Help_dummy: -0.5
## [1] 1.637287
## ------------------------------------------------------------ 
## Psych$Help_dummy: 0.5
## [1] 1.334494
ggplot(Psych, aes(x = interaction(Risk, Help, sep = " & "), y = Credibility)) +
  geom_boxplot() +
  labs(x = "Condition (Risk_dummy & Help_dummy)",
       y = "Credibility",
       title = "PSYCH: Credibility by Risk and Help Conditions") +
  theme_minimal()

summary(lm(Credibility ~ Risk_dummy * Help_dummy, data = Nurses))
## 
## Call:
## lm(formula = Credibility ~ Risk_dummy * Help_dummy, data = Nurses)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.9565 -0.9565  0.0435  1.0435  3.2206 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)             4.8967     0.0906  54.046  < 2e-16 ***
## Risk_dummy              1.3871     0.1812   7.655 3.55e-13 ***
## Help_dummy              0.7901     0.1812   4.360 1.86e-05 ***
## Risk_dummy:Help_dummy  -0.1148     0.3624  -0.317    0.752    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.491 on 267 degrees of freedom
## Multiple R-squared:  0.227,  Adjusted R-squared:  0.2184 
## F-statistic: 26.14 on 3 and 267 DF,  p-value: 7.395e-15
ggplot(Nurses, aes(x = interaction(Risk, Help, sep = " & "), y = Credibility)) +
  geom_boxplot() +
  labs(x = "Condition (Risk_dummy & Help_dummy)",
       y = "Credibility",
       title = "NURSES: Credibility by Risk and Help Conditions") +
  theme_minimal()

summary(lm(Credibility ~ Risk_dummy * Help_dummy, data = Weld))
## 
## Call:
## lm(formula = Credibility ~ Risk_dummy * Help_dummy, data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.8971 -0.7794  0.2206  1.1029  2.2029 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            5.24158    0.08274  63.354  < 2e-16 ***
## Risk_dummy             0.78900    0.16547   4.768 3.04e-06 ***
## Help_dummy             0.09335    0.16547   0.564    0.573    
## Risk_dummy:Help_dummy  0.38662    0.33094   1.168    0.244    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.369 on 270 degrees of freedom
## Multiple R-squared:  0.08282,    Adjusted R-squared:  0.07263 
## F-statistic: 8.127 on 3 and 270 DF,  p-value: 3.357e-05
by(Weld$Credibility, Weld$Risk_dummy, mean)
## Weld$Risk_dummy: -0.5
## [1] 4.846715
## ------------------------------------------------------------ 
## Weld$Risk_dummy: 0.5
## [1] 5.635036
by(Weld$Credibility, Weld$Risk_dummy, sd)
## Weld$Risk_dummy: -0.5
## [1] 1.439416
## ------------------------------------------------------------ 
## Weld$Risk_dummy: 0.5
## [1] 1.294081
by(Weld$Credibility, Weld$Help_dummy, mean)
## Weld$Help_dummy: -0.5
## [1] 5.19708
## ------------------------------------------------------------ 
## Weld$Help_dummy: 0.5
## [1] 5.284672
by(Weld$Credibility, Weld$Help_dummy, sd)
## Weld$Help_dummy: -0.5
## [1] 1.418571
## ------------------------------------------------------------ 
## Weld$Help_dummy: 0.5
## [1] 1.429464
ggplot(Weld, aes(x = interaction(Risk, Help, sep = " & "), y = Credibility)) +
  geom_boxplot() +
  labs(x = "Condition (Risk_dummy & Help_dummy)",
       y = "Credibility",
       title = "WELDERS: Credibility by Risk and Help Conditions") +
  theme_minimal()

summary(lm(Credibility ~ Risk_dummy * Help_dummy, data = Pol))
## 
## Call:
## lm(formula = Credibility ~ Risk_dummy * Help_dummy, data = Pol)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.7761 -0.7761  0.2239  1.0746  2.2239 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            5.07755    0.07893  64.326  < 2e-16 ***
## Risk_dummy             0.45360    0.15787   2.873  0.00439 ** 
## Help_dummy             0.33550    0.15787   2.125  0.03449 *  
## Risk_dummy:Help_dummy  0.37249    0.31574   1.180  0.23916    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.302 on 268 degrees of freedom
## Multiple R-squared:  0.05045,    Adjusted R-squared:  0.03982 
## F-statistic: 4.746 on 3 and 268 DF,  p-value: 0.003046
ggplot(Pol, aes(x = interaction(Risk, Help, sep = " & "), y = Credibility)) +
  geom_boxplot() +
  labs(x = "Condition (Risk_dummy & Help_dummy)",
       y = "Credibility",
       title = "POLICE: Credibility by Risk and Help Conditions") +
  theme_minimal()

by(Pol$Credibility, Pol$Risk_dummy, mean)
## Pol$Risk_dummy: -0.5
## [1] 4.850746
## ------------------------------------------------------------ 
## Pol$Risk_dummy: 0.5
## [1] 5.304348
by(Pol$Credibility, Pol$Risk_dummy, sd)
## Pol$Risk_dummy: -0.5
## [1] 1.422206
## ------------------------------------------------------------ 
## Pol$Risk_dummy: 0.5
## [1] 1.19371
by(Pol$Credibility, Pol$Help_dummy, mean)
## Pol$Help_dummy: -0.5
## [1] 4.911765
## ------------------------------------------------------------ 
## Pol$Help_dummy: 0.5
## [1] 5.25
by(Pol$Credibility, Pol$Help_dummy, sd)
## Pol$Help_dummy: -0.5
## [1] 1.395605
## ------------------------------------------------------------ 
## Pol$Help_dummy: 0.5
## [1] 1.239773

Using Credibility as a moderator, and then as a covariate, in our analyses:

summary(lm(as.numeric(Heroism) ~ Risk_dummy * Help_dummy * scale(as.numeric(Credibility)), data = Set))
## 
## Call:
## lm(formula = as.numeric(Heroism) ~ Risk_dummy * Help_dummy * 
##     scale(as.numeric(Credibility)), data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.9967 -0.8037  0.1086  1.0033  3.3730 
## 
## Coefficients:
##                                                      Estimate Std. Error
## (Intercept)                                           4.91688    0.04096
## Risk_dummy                                            0.17465    0.08192
## Help_dummy                                           -0.02904    0.08192
## scale(as.numeric(Credibility))                        0.48601    0.04229
## Risk_dummy:Help_dummy                                -0.47792    0.16384
## Risk_dummy:scale(as.numeric(Credibility))             0.57391    0.08458
## Help_dummy:scale(as.numeric(Credibility))             0.19451    0.08458
## Risk_dummy:Help_dummy:scale(as.numeric(Credibility))  0.12725    0.16915
##                                                      t value Pr(>|t|)    
## (Intercept)                                          120.043  < 2e-16 ***
## Risk_dummy                                             2.132  0.03319 *  
## Help_dummy                                            -0.354  0.72303    
## scale(as.numeric(Credibility))                        11.493  < 2e-16 ***
## Risk_dummy:Help_dummy                                 -2.917  0.00359 ** 
## Risk_dummy:scale(as.numeric(Credibility))              6.786 1.72e-11 ***
## Help_dummy:scale(as.numeric(Credibility))              2.300  0.02161 *  
## Risk_dummy:Help_dummy:scale(as.numeric(Credibility))   0.752  0.45202    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.389 on 1352 degrees of freedom
## Multiple R-squared:  0.129,  Adjusted R-squared:  0.1245 
## F-statistic:  28.6 on 7 and 1352 DF,  p-value: < 2.2e-16

In the moderator model above, the more credible the description, the more effective the Risk manipulation (significant Risk × Credibility interaction); the same holds for the Motivation manipulation, although the interaction is smaller. Next, using Credibility as a simple covariate:

summary(lm(as.numeric(Heroism) ~ Risk_dummy * Help_dummy + scale(as.numeric(Credibility)), data = Set))
## 
## Call:
## lm(formula = as.numeric(Heroism) ~ Risk_dummy * Help_dummy + 
##     scale(as.numeric(Credibility)), data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.6315 -0.8353  0.0726  0.9848  3.2553 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     5.02064    0.03831 131.059  < 2e-16 ***
## Risk_dummy                      0.23374    0.08036   2.909  0.00369 ** 
## Help_dummy                      0.02988    0.07813   0.382  0.70221    
## scale(as.numeric(Credibility))  0.41192    0.04093  10.064  < 2e-16 ***
## Risk_dummy:Help_dummy          -0.14148    0.15327  -0.923  0.35613    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.413 on 1355 degrees of freedom
## Multiple R-squared:  0.0975, Adjusted R-squared:  0.09484 
## F-statistic:  36.6 on 4 and 1355 DF,  p-value: < 2.2e-16

Controlling for Credibility nullifies the effect of the Motivation (Help) condition, while the effect of the Risk condition remains significant.
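
To make the moderation concrete, the simple slope of the Risk manipulation at low vs high credibility can be probed directly. A minimal sketch, assuming the same Set data frame as above (an illustration, not a registered analysis):

# Re-fit the moderator model with a named, standardised credibility score
Set$Cred_z <- as.numeric(scale(as.numeric(Set$Credibility)))
modMod <- lm(as.numeric(Heroism) ~ Risk_dummy * Help_dummy * Cred_z, data = Set)
# Simple effect of Risk at -1 SD and +1 SD of credibility
# (Help_dummy held at 0, i.e., midway between the two motivation conditions
#  given the +/- 0.5 coding)
coef(modMod)["Risk_dummy"] + c(-1, 1) * coef(modMod)["Risk_dummy:Cred_z"]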

Details for Job Decomposition (H1)

NURSES

Nurses <- subset(Set, Set$Job == "N")

#report(lm(Heroism ~ Risk_dummy * Help_dummy, data = Nurses))

IN THE NURSE CONDITION:

  • The effect of Risk dummy is statistically non-significant (beta = 0.20, 95% CI [-0.15, 0.54], t(267) = 1.12, p = 0.264; Std. beta = 0.07, 95% CI [-0.05, 0.19])
  • The effect of Help dummy is statistically non-significant (beta = 0.05, 95% CI [-0.30, 0.39], t(267) = 0.27, p = 0.789; Std. beta = 0.02, 95% CI [-0.10, 0.14])
  • The effect of Risk dummy × Help dummy is statistically non-significant (beta = -0.51, 95% CI [-1.20, 0.18], t(267) = -1.46, p = 0.146; Std. beta = -0.09, 95% CI [-0.21, 0.03])

==> our manipulations (both physical risks and motivation type) did not work in the Nurses condition

For Nurses, the effect sizes are:

paste0("Nurses: Cohen's d for the effect of risk is:")
## [1] "Nurses: Cohen's d for the effect of risk is:"
effectsize::cohens_d(Nurses$Heroism ~ relevel(as.factor(Nurses$Risk), ref = "R"))
## Cohen's d |        95% CI
## -------------------------
## 0.14      | [-0.10, 0.37]
## 
## - Estimated using pooled SD.
paste0("Nurses: Cohen's d for the effect of Motivation is:")
## [1] "Nurses: Cohen's d for the effect of Motivation is:"
effectsize::cohens_d(Nurses$Heroism ~ Nurses$Help)
## Cohen's d |        95% CI
## -------------------------
## 0.03      | [-0.20, 0.27]
## 
## - Estimated using pooled SD.

POLICE OFFICERS

Pol <- subset(Set, Set$Job == "P")

#report(lm(Heroism ~ Risk_dummy * Help_dummy, data = Pol))

IN THE POLICE CONDITION:

  • The effect of Risk dummy is statistically non-significant (beta = 0.24, 95% CI [-0.08, 0.56], t(268) = 1.49, p = 0.137; Std. beta = 0.09, 95% CI [-0.03, 0.21])
  • The effect of Help dummy is statistically non-significant (beta = 0.12, 95% CI [-0.19, 0.44], t(268) = 0.77, p = 0.442; Std. beta = 0.05, 95% CI [-0.07, 0.17])
  • The effect of Risk dummy × Help dummy is statistically non-significant (beta = 0.01, 95% CI [-0.63, 0.65], t(268) = 0.03, p = 0.973; Std. beta = 2.07e-03, 95% CI [-0.12, 0.12])

==> our manipulations (both physical risks and motivation type) did not work in the Police Officers condition

For Police officers, the effect sizes are:

paste0("Police: Cohen's d for the effect of risk is:")
## [1] "Police: Cohen's d for the effect of risk is:"
effectsize::cohens_d(Pol$Heroism ~ relevel(as.factor(Pol$Risk), ref = "R"))
## Cohen's d |        95% CI
## -------------------------
## 0.18      | [-0.06, 0.42]
## 
## - Estimated using pooled SD.
paste0("Police: Cohen's d for the effect of Motivation is:")
## [1] "Police: Cohen's d for the effect of Motivation is:"
effectsize::cohens_d(Pol$Heroism ~ Pol$Help)
## Cohen's d |        95% CI
## -------------------------
## 0.09      | [-0.14, 0.33]
## 
## - Estimated using pooled SD.

FIREFIGHTERS

Firef <- subset(Set, Set$Job == "F")

#report(lm(Heroism ~ Risk_dummy * Help_dummy, data = Firef))

IN THE FIREFIGHTER CONDITION:

  • The effect of Risk dummy is statistically significant and positive (beta = 0.29, 95% CI [0.01, 0.56], t(269) = 2.04, p = 0.042; Std. beta = 0.12, 95% CI [4.40e-03, 0.24])
  • The effect of Help dummy is statistically non-significant (beta = 5.86e-03, 95% CI [-0.27, 0.28], t(269) = 0.04, p = 0.967; Std. beta = 2.52e-03, 95% CI [-0.12, 0.12])
  • The effect of Risk dummy × Help dummy is statistically non-significant (beta = 0.02, 95% CI [-0.53, 0.57], t(269) = 0.06, p = 0.950; Std. beta = 3.84e-03, 95% CI [-0.12, 0.12])

==> The manipulation of Risk kinda worked (p = .042), but the manipulation of Motivation did not, in the Firefighters condition.

For Firefighters, the effect sizes are:

paste0("Fire: Cohen's d for the effect of risk is:")
## [1] "Fire: Cohen's d for the effect of risk is:"
effectsize::cohens_d(Firef$Heroism ~ relevel(as.factor(Firef$Risk), ref = "R"))
## Cohen's d |       95% CI
## ------------------------
## 0.25      | [0.01, 0.49]
## 
## - Estimated using pooled SD.
paste0("Fire: Cohen's d for the effect of Motivation is:")
## [1] "Fire: Cohen's d for the effect of Motivation is:"
effectsize::cohens_d(Firef$Heroism ~ Firef$Help)
## Cohen's d |        95% CI
## -------------------------
## 5.93e-03  | [-0.23, 0.24]
## 
## - Estimated using pooled SD.

PSYCHIATRISTS

Psych <- subset(Set, Set$Job == "Ps")

#report(lm(Heroism ~ Risk_dummy * Help_dummy, data = Psych))

IN THE PSYCHIATRISTS CONDITION:

  • The effect of Risk dummy is statistically significant and positive (beta = 0.52, 95% CI [0.19, 0.86], t(266) = 3.05, p = 0.002; Std. beta = 0.18, 95% CI [0.06, 0.30])
  • The effect of Help dummy is statistically significant and positive (beta = 0.36, 95% CI [0.02, 0.70], t(266) = 2.11, p = 0.035; Std. beta = 0.13, 95% CI [7.95e-03, 0.24])
  • The effect of Risk dummy × Help dummy is statistically non-significant (beta = -0.29, 95% CI [-0.97, 0.38], t(266) = -0.86, p = 0.392; Std. beta = -0.05, 95% CI [-0.17, 0.07])

==> The manipulations of Risk and Motivation worked in the Psychiatrists condition. For Psychiatrists, the effect sizes are:

paste0("PSYCH: Cohen's d for the effect of risk is:")
## [1] "PSYCH: Cohen's d for the effect of risk is:"
effectsize::cohens_d(Psych$Heroism ~ relevel(as.factor(Psych$Risk), ref = "R"))
## Cohen's d |       95% CI
## ------------------------
## 0.37      | [0.13, 0.61]
## 
## - Estimated using pooled SD.
paste0("PSYCH: Cohen's d for the effect of Motivation is:")
## [1] "PSYCH: Cohen's d for the effect of Motivation is:"
effectsize::cohens_d(Psych$Heroism ~ Psych$Help)
## Cohen's d |       95% CI
## ------------------------
## 0.25      | [0.01, 0.49]
## 
## - Estimated using pooled SD.

WELDERS

Weld <- subset(Set, Set$Job == "W")

report(lm(Heroism ~ Risk_dummy * Help_dummy, data = Weld))
## We fitted a linear model (estimated using OLS) to predict Heroism with
## Risk_dummy and Help_dummy (formula: Heroism ~ Risk_dummy * Help_dummy). The
## model explains a statistically significant and moderate proportion of variance
## (R2 = 0.19, F(3, 270) = 20.65, p < .001, adj. R2 = 0.18). The model's
## intercept, corresponding to Risk_dummy = 0 and Help_dummy = 0, is at 4.68 (95%
## CI [4.53, 4.84], t(270) = 59.29, p < .001). Within this model:
## 
##   - The effect of Risk dummy is statistically significant and positive (beta =
## 1.18, 95% CI [0.87, 1.49], t(270) = 7.46, p < .001; Std. beta = 0.41, 95% CI
## [0.30, 0.52])
##   - The effect of Help dummy is statistically significant and positive (beta =
## 0.40, 95% CI [0.08, 0.71], t(270) = 2.50, p = 0.013; Std. beta = 0.14, 95% CI
## [0.03, 0.25])
##   - The effect of Risk dummy × Help dummy is statistically non-significant and
## positive (beta = 0.18, 95% CI [-0.44, 0.80], t(270) = 0.57, p = 0.568; Std.
## beta = 0.03, 95% CI [-0.08, 0.14])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.

IN THE WELDERS CONDITION:

  • The effect of Risk dummy is statistically significant and positive (beta = 1.18, 95% CI [0.87, 1.49], t(270) = 7.46, p < .001; Std. beta = 0.41, 95% CI [0.30, 0.52])
  • The effect of Help dummy is statistically significant and positive (beta = 0.40, 95% CI [0.08, 0.71], t(270) = 2.50, p = 0.013; Std. beta = 0.14, 95% CI [0.03, 0.25])
  • The effect of Risk dummy × Help dummy is statistically non-significant and positive (beta = 0.18, 95% CI [-0.44, 0.80], t(270) = 0.57, p = 0.568; Std. beta = 0.03, 95% CI [-0.08, 0.14])

==> The manipulations of Risk and Motivation worked in the Underwater Welders condition. For Welders, the effect sizes are:

paste0("WELDERS: Cohen's d for the effect of risk is:")
## [1] "WELDERS: Cohen's d for the effect of risk is:"
effectsize::cohens_d(Weld$Heroism ~ relevel(as.factor(Weld$Risk), ref = "R"))
## Cohen's d |       95% CI
## ------------------------
## 0.89      | [0.64, 1.14]
## 
## - Estimated using pooled SD.
paste0("WELDERS: Cohen's d for the effect of Motivation is:")
## [1] "WELDERS: Cohen's d for the effect of Motivation is:"
effectsize::cohens_d(Weld$Heroism ~ Weld$Help)
## Cohen's d |       95% CI
## ------------------------
## 0.27      | [0.03, 0.51]
## 
## - Estimated using pooled SD.

Additional robustness checks

Robust models

Robust models are less sensitive to deviations from assumptions (linearity, normality, homoscedasticity) and account for outliers by weighting residuals based on their distance to the bulk of the data. To re-weight residuals, we use lmrob's MM-estimator, which iteratively re-weights residuals (here with a bisquare weighting function) until convergence; the aim is an efficient analysis coupled with a high breakdown point.
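
As a numeric intuition for this weighting (an illustration, not part of the analyses), here is a minimal sketch of the bisquare weight function applied by lmrob's MM step to standardised residuals, using the default tuning constant c = 4.685 reported in the outputs below:

# Tukey bisquare weights: residuals near 0 keep a weight close to 1,
# residuals beyond |c| are given a weight of 0
bisquare_weight <- function(r, c = 4.685) {
  ifelse(abs(r) <= c, (1 - (r / c)^2)^2, 0)
}
bisquare_weight(c(0, 1, 2, 4.685, 6))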

summary(modrob<-lmrob(Set$Heroism ~ Set$Risk_dummy * Set$Help_dummy))
## 
## Call:
## lmrob(formula = Set$Heroism ~ Set$Risk_dummy * Set$Help_dummy)
##  \--> method = "MM"
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -4.40872 -0.96122  0.03878  1.03878  2.25711 
## 
## Coefficients:
##                               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                    5.10065    0.04178 122.071  < 2e-16 ***
## Set$Risk_dummy                 0.49720    0.08155   6.097 1.41e-09 ***
## Set$Help_dummy                 0.16863    0.08137   2.072   0.0384 *  
## Set$Risk_dummy:Set$Help_dummy -0.09939    0.16268  -0.611   0.5413    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 1.462 
## Multiple R-squared:  0.03149,    Adjusted R-squared:  0.02935 
## Convergence in 10 IRWLS iterations
## 
## Robustness weights: 
##  93 weights are ~= 1. The remaining 1267 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.3432  0.8792  0.9545  0.9088  0.9852  0.9972 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         7.353e-05         1.819e-12         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)
summary(modrob<-lmrob(Heroism ~ Selfless_scale * Brave_scale, data = Set))
## 
## Call:
## lmrob(formula = Heroism ~ Selfless_scale * Brave_scale, data = Set)
##  \--> method = "MM"
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.5042 -0.5042  0.1853  0.5754  4.6326 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 4.97647    0.05490  90.653  < 2e-16 ***
## Selfless_scale              0.72545    0.06639  10.927  < 2e-16 ***
## Brave_scale                 0.52584    0.06403   8.212 5.03e-16 ***
## Selfless_scale:Brave_scale  0.20698    0.09187   2.253   0.0244 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 0.8809 
## Multiple R-squared:  0.5328, Adjusted R-squared:  0.5318 
## Convergence in 41 IRWLS iterations
## 
## Robustness weights: 
##  8 observations c(289,471,487,512,626,798,1095,1150)
##   are outliers with |weight| = 0 ( < 7.4e-05); 
##  57 weights are ~= 1. The remaining 1295 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.01441 0.84180 0.96150 0.87600 0.97890 0.99890 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         7.353e-05         2.302e-11         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)
summary(modrob<-lmrob(Heroism ~ Danger_scale * Helpfulness_scale, data = Set))
## 
## Call:
## lmrob(formula = Heroism ~ Danger_scale * Helpfulness_scale, data = Set)
##  \--> method = "MM"
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -5.35359 -0.61788  0.01426  0.64641  3.96659 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     5.10176    0.03226 158.125  < 2e-16 ***
## Danger_scale                    0.39099    0.04168   9.380  < 2e-16 ***
## Helpfulness_scale               0.83700    0.03579  23.385  < 2e-16 ***
## Danger_scale:Helpfulness_scale  0.10235    0.02523   4.056 5.27e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 0.9275 
## Multiple R-squared:  0.5241, Adjusted R-squared:  0.523 
## Convergence in 11 IRWLS iterations
## 
## Robustness weights: 
##  14 observations c(128,306,363,379,471,626,689,798,936,1036,1150,1230,1305,1309)
##   are outliers with |weight| = 0 ( < 7.4e-05); 
##  113 weights are ~= 1. The remaining 1233 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.02519 0.84750 0.95620 0.88450 0.97740 0.99800 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         7.353e-05         2.841e-11         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)
summary(modrob<-lmrob(Psych$Heroism ~ Psych$Risk_dummy * Psych$Help_dummy))
## 
## Call:
## lmrob(formula = Psych$Heroism ~ Psych$Risk_dummy * Psych$Help_dummy)
##  \--> method = "MM"
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -3.90584 -0.90584  0.09416  0.97848  2.97848 
## 
## Coefficients:
##                                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                         4.5334     0.0884  51.284  < 2e-16 ***
## Psych$Risk_dummy                    0.5094     0.1736   2.934  0.00364 ** 
## Psych$Help_dummy                    0.3749     0.1734   2.163  0.03144 *  
## Psych$Risk_dummy:Psych$Help_dummy  -0.2787     0.3473  -0.803  0.42292    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 1.311 
## Multiple R-squared:  0.0522, Adjusted R-squared:  0.04151 
## Convergence in 9 IRWLS iterations
## 
## Robustness weights: 
##  40 weights are ~= 1. The remaining 230 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.3550  0.8168  0.9477  0.8882  0.9848  0.9942 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         3.704e-04         1.819e-12         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)
summary(modrob<-lmrob(Heroism ~ Selfless_scale * Brave_scale, data = Psych))
## 
## Call:
## lmrob(formula = Heroism ~ Selfless_scale * Brave_scale, data = Psych)
##  \--> method = "MM"
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.4974 -0.7255  0.2356  0.5895  5.0017 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 4.83271    0.08928  54.128  < 2e-16 ***
## Selfless_scale              0.69814    0.12356   5.650 4.12e-08 ***
## Brave_scale                 0.57529    0.10354   5.556 6.69e-08 ***
## Selfless_scale:Brave_scale  0.32230    0.14320   2.251   0.0252 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 0.8669 
## Multiple R-squared:  0.4666, Adjusted R-squared:  0.4606 
## Convergence in 30 IRWLS iterations
## 
## Robustness weights: 
##  2 observations c(59,217) are outliers with |weight| = 0 ( < 0.00037); 
##  13 weights are ~= 1. The remaining 255 ones are summarized as
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## 0.0008491 0.8256000 0.9393000 0.8608000 0.9900000 0.9984000 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         3.704e-04         2.302e-11         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)
summary(modrob<-lmrob(Heroism ~ Danger_scale * Helpfulness_scale, data = Psych))
## 
## Call:
## lmrob(formula = Heroism ~ Danger_scale * Helpfulness_scale, data = Psych)
##  \--> method = "MM"
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.6713 -0.7236  0.1489  0.6068  2.5455 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     4.64138    0.07844  59.168   <2e-16 ***
## Danger_scale                    0.21585    0.08465   2.550   0.0113 *  
## Helpfulness_scale               0.82881    0.08958   9.253   <2e-16 ***
## Danger_scale:Helpfulness_scale  0.05611    0.04256   1.318   0.1885    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 1.088 
## Multiple R-squared:  0.3782, Adjusted R-squared:  0.3712 
## Convergence in 10 IRWLS iterations
## 
## Robustness weights: 
##  18 weights are ~= 1. The remaining 252 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.02566 0.86870 0.96010 0.89720 0.98510 0.99880 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         3.704e-04         2.841e-11         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)
summary(modrob<-lmrob(Weld$Heroism ~ Weld$Risk_dummy * Weld$Help_dummy))
## 
## Call:
## lmrob(formula = Weld$Heroism ~ Weld$Risk_dummy * Weld$Help_dummy)
##  \--> method = "MM"
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -4.13058 -0.91152 -0.01771  0.86942  2.98229 
## 
## Coefficients:
##                                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                      4.74029    0.07967  59.500  < 2e-16 ***
## Weld$Risk_dummy                  1.24293    0.15786   7.874 8.41e-14 ***
## Weld$Help_dummy                  0.33229    0.15982   2.079   0.0386 *  
## Weld$Risk_dummy:Weld$Help_dummy  0.26014    0.31956   0.814   0.4163    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 1.232 
## Multiple R-squared:  0.2081, Adjusted R-squared:  0.1993 
## Convergence in 10 IRWLS iterations
## 
## Robustness weights: 
##  27 weights are ~= 1. The remaining 247 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2384  0.8536  0.9430  0.8970  0.9846  0.9990 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         3.650e-04         1.819e-12         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)
summary(modrob<-lmrob(Heroism ~ Selfless_scale * Brave_scale, data = Weld))
## 
## Call:
## lmrob(formula = Heroism ~ Selfless_scale * Brave_scale, data = Weld)
##  \--> method = "MM"
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.0740 -0.8428  0.1035  0.7110  5.4670 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 4.50143    0.08279  54.375  < 2e-16 ***
## Selfless_scale              0.71285    0.11941   5.970 7.45e-09 ***
## Brave_scale                 0.46554    0.12411   3.751 0.000216 ***
## Selfless_scale:Brave_scale  0.31653    0.05177   6.115 3.38e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 1.071 
## Multiple R-squared:  0.4064, Adjusted R-squared:  0.3998 
## Convergence in 13 IRWLS iterations
## 
## Robustness weights: 
##  2 observations c(92,97) are outliers with |weight| = 0 ( < 0.00036); 
##  25 weights are ~= 1. The remaining 247 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1438  0.8669  0.9420  0.8990  0.9893  0.9985 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         3.650e-04         2.302e-11         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)
summary(modrob<-lmrob(Heroism ~ Danger_scale * Helpfulness_scale, data = Weld))
## 
## Call:
## lmrob(formula = Heroism ~ Danger_scale * Helpfulness_scale, data = Weld)
##  \--> method = "MM"
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.0476 -0.7062 -0.0476  0.7441  3.2782 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     4.99155    0.08402  59.410  < 2e-16 ***
## Danger_scale                    0.45602    0.09793   4.657 5.04e-06 ***
## Helpfulness_scale               0.60925    0.07965   7.649 3.59e-13 ***
## Danger_scale:Helpfulness_scale  0.04838    0.09267   0.522    0.602    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Robust residual standard error: 1.019 
## Multiple R-squared:  0.4196, Adjusted R-squared:  0.4132 
## Convergence in 14 IRWLS iterations
## 
## Robustness weights: 
##  2 observations c(26,92) are outliers with |weight| = 0 ( < 0.00036); 
##  24 weights are ~= 1. The remaining 248 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.07906 0.87780 0.93280 0.88790 0.98100 0.99880 
## Algorithmic parameters: 
##        tuning.chi                bb        tuning.psi        refine.tol 
##         1.548e+00         5.000e-01         4.685e+00         1.000e-07 
##           rel.tol         scale.tol         solve.tol          zero.tol 
##         1.000e-07         1.000e-10         1.000e-07         1.000e-10 
##       eps.outlier             eps.x warn.limit.reject warn.limit.meanrw 
##         3.650e-04         1.334e-11         5.000e-01         5.000e-01 
##      nResample         max.it       best.r.s       k.fast.s          k.max 
##            500             50              2              1            200 
##    maxit.scale      trace.lev            mts     compute.rd fast.s.large.n 
##            200              0           1000              0           2000 
##                   psi           subsampling                   cov 
##            "bisquare"         "nonsingular"         ".vcov.avar1" 
## compute.outlier.stats 
##                  "SM" 
## seed : int(0)

CLM models

Cumulative Link Models (CLMs) estimate the association between the IVs and the DV across the thresholds of an ordinal DV. They are designed for non-numeric DVs, in our case an ordinal Likert item.

summary(clm(ordered(Heroism) ~ Risk_dummy * Help_dummy, data = Set, link = "logit"))
## formula: ordered(Heroism) ~ Risk_dummy * Help_dummy
## data:    Set
## 
##  link  threshold nobs logLik   AIC     niter max.grad cond.H 
##  logit flexible  1360 -2328.78 4675.56 5(0)  5.44e-08 3.8e+01
## 
## Coefficients:
##                       Estimate Std. Error z value Pr(>|z|)    
## Risk_dummy             0.59565    0.09725   6.125 9.07e-10 ***
## Help_dummy             0.20359    0.09628   2.115   0.0345 *  
## Risk_dummy:Help_dummy -0.10979    0.19233  -0.571   0.5681    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Threshold coefficients:
##     Estimate Std. Error z value
## 1|2 -3.83582    0.18501 -20.733
## 2|3 -2.66230    0.10855 -24.527
## 3|4 -1.84560    0.07873 -23.444
## 4|5 -0.71009    0.05811 -12.220
## 5|6  0.42414    0.05592   7.584
## 6|7  1.50898    0.07038  21.441
summary(clm(ordered(Heroism) ~ Brave_scale * Selfless_scale, data = Set, link = "logit"))
## formula: ordered(Heroism) ~ Brave_scale * Selfless_scale
## data:    Set
## 
##  link  threshold nobs logLik   AIC     niter max.grad cond.H 
##  logit flexible  1360 -1939.75 3897.50 6(0)  3.94e-11 5.4e+01
## 
## Coefficients:
##                            Estimate Std. Error z value Pr(>|z|)    
## Brave_scale                 0.98821    0.07960   12.41   <2e-16 ***
## Selfless_scale              1.12946    0.07918   14.26   <2e-16 ***
## Brave_scale:Selfless_scale  0.45021    0.04471   10.07   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Threshold coefficients:
##     Estimate Std. Error z value
## 1|2 -4.56671    0.19789  -23.08
## 2|3 -3.30968    0.12539  -26.39
## 3|4 -2.37036    0.09618  -24.64
## 4|5 -0.87703    0.07253  -12.09
## 5|6  0.88164    0.07410   11.90
## 6|7  2.55186    0.10281   24.82
summary(clm(ordered(Heroism) ~ Danger_scale * Helpfulness_scale, data = Set, link = "logit"))
## formula: ordered(Heroism) ~ Danger_scale * Helpfulness_scale
## data:    Set
## 
##  link  threshold nobs logLik   AIC     niter max.grad cond.H 
##  logit flexible  1360 -1974.77 3967.55 6(0)  1.56e-11 5.8e+01
## 
## Coefficients:
##                                Estimate Std. Error z value Pr(>|z|)    
## Danger_scale                    0.64401    0.05983  10.764  < 2e-16 ***
## Helpfulness_scale               1.27886    0.06532  19.580  < 2e-16 ***
## Danger_scale:Helpfulness_scale  0.18919    0.04359   4.341 1.42e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Threshold coefficients:
##     Estimate Std. Error z value
## 1|2 -5.00003    0.21767  -22.97
## 2|3 -3.60312    0.13650  -26.40
## 3|4 -2.55879    0.10059  -25.44
## 4|5 -0.98444    0.07063  -13.94
## 5|6  0.67737    0.06771   10.01
## 6|7  2.16274    0.08871   24.38
summary(clm(ordered(Heroism) ~ Risk_dummy * Help_dummy, data = Psych, link = "logit"))
## formula: ordered(Heroism) ~ Risk_dummy * Help_dummy
## data:    Psych
## 
##  link  threshold nobs logLik  AIC    niter max.grad cond.H 
##  logit flexible  270  -459.18 936.37 5(0)  5.42e-07 3.0e+01
## 
## Coefficients:
##                       Estimate Std. Error z value Pr(>|z|)   
## Risk_dummy              0.6572     0.2204   2.982  0.00286 **
## Help_dummy              0.4650     0.2188   2.126  0.03354 * 
## Risk_dummy:Help_dummy  -0.3336     0.4355  -0.766  0.44370   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Threshold coefficients:
##     Estimate Std. Error z value
## 1|2 -3.33007    0.32433 -10.268
## 2|3 -2.25915    0.20560 -10.988
## 3|4 -1.44064    0.15518  -9.284
## 4|5 -0.03374    0.12383  -0.272
## 5|6  1.26357    0.14750   8.566
## 6|7  2.48835    0.22454  11.082
summary(clm(ordered(Heroism) ~ Risk_dummy * Help_dummy, data = Weld, link = "logit"))
## formula: ordered(Heroism) ~ Risk_dummy * Help_dummy
## data:    Weld
## 
##  link  threshold nobs logLik  AIC    niter max.grad cond.H 
##  logit flexible  274  -444.69 907.38 6(0)  1.58e-14 2.7e+01
## 
## Coefficients:
##                       Estimate Std. Error z value Pr(>|z|)    
## Risk_dummy              1.6868     0.2377   7.096 1.28e-12 ***
## Help_dummy              0.4837     0.2177   2.222   0.0263 *  
## Risk_dummy:Help_dummy   0.3995     0.4337   0.921   0.3570    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Threshold coefficients:
##     Estimate Std. Error z value
## 1|2  -4.1270     0.4186  -9.859
## 2|3  -2.7397     0.2320 -11.808
## 3|4  -1.7832     0.1693 -10.530
## 4|5  -0.3359     0.1316  -2.552
## 5|6   1.1065     0.1468   7.536
## 6|7   2.2814     0.1991  11.458
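
The CLM coefficients above are on the cumulative log-odds scale. To make them easier to communicate, they can be exponentiated into cumulative odds ratios. A minimal sketch (not part of the pre-registered analyses), assuming the ordinal package is loaded (as it must be for the clm() calls above) and that Set is the pooled data frame used there:

# Refit the first CLM and express its regression coefficients as odds ratios
fit_clm <- clm(ordered(Heroism) ~ Risk_dummy * Help_dummy,
               data = Set, link = "logit")

# Cumulative odds ratios with profile-likelihood 95% CIs:
# e.g. exp(0.596) = 1.81 for Risk_dummy means the odds of a higher Heroism
# rating are roughly 1.8 times larger under the Risk framing.
cbind(OR = exp(fit_clm$beta), exp(confint(fit_clm)))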

Additional covariate analyses

Attitude

To account for a possible halo effect, it is important to include Attitude as a covariate.
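
Before adjusting for Attitude, we can quickly check whether the manipulations move Attitude itself: if they do, the covariate will absorb part of the manipulation effect. A minimal, non-pre-registered sketch using the same dummies and the Attitude_scale variable that appear in the models below:

# Do the Risk and Motivation manipulations shift Attitude?
summary(lm(Attitude_scale ~ Risk_dummy * Help_dummy, data = Set))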

summary(mod_cov2<-lm(Heroism ~ Risk_dummy * Help_dummy + Attitude_scale , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy + Attitude_scale, 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.1585 -0.4928 -0.0421  0.8415  3.6071 
## 
## Coefficients:
##                        Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            5.020511   0.029503 170.171  < 2e-16 ***
## Risk_dummy             0.276350   0.059317   4.659 3.49e-06 ***
## Help_dummy            -0.008258   0.059290  -0.139    0.889    
## Attitude_scale         0.989244   0.029823  33.171  < 2e-16 ***
## Risk_dummy:Help_dummy  0.012139   0.118064   0.103    0.918    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.088 on 1355 degrees of freedom
## Multiple R-squared:  0.4647, Adjusted R-squared:  0.4631 
## F-statistic: 294.1 on 4 and 1355 DF,  p-value: < 2.2e-16
summary(mod_cov2<-lm(Heroism ~ Brave_scale * Selfless_scale + Attitude_scale , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Brave_scale * Selfless_scale + Attitude_scale, 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.4204 -0.4442  0.1491  0.5796  2.8274 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 4.91561    0.03090 159.075  < 2e-16 ***
## Brave_scale                 0.34209    0.04221   8.104 1.18e-15 ***
## Selfless_scale              0.29629    0.04015   7.379 2.76e-13 ***
## Attitude_scale              0.67719    0.03679  18.408  < 2e-16 ***
## Brave_scale:Selfless_scale  0.15862    0.02058   7.708 2.46e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.021 on 1355 degrees of freedom
## Multiple R-squared:  0.5284, Adjusted R-squared:  0.527 
## F-statistic: 379.6 on 4 and 1355 DF,  p-value: < 2.2e-16
summary(mod_cov2<-lm(Heroism ~ Danger_scale * Helpfulness_scale + Attitude_scale , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Danger_scale * Helpfulness_scale + Attitude_scale, 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.3125 -0.4388  0.1466  0.6875  2.9213 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     4.98466    0.03007 165.753  < 2e-16 ***
## Danger_scale                    0.22601    0.03256   6.941 6.03e-12 ***
## Helpfulness_scale               0.34542    0.03916   8.820  < 2e-16 ***
## Attitude_scale                  0.70355    0.03910  17.993  < 2e-16 ***
## Danger_scale:Helpfulness_scale  0.08330    0.02389   3.487 0.000505 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.039 on 1355 degrees of freedom
## Multiple R-squared:  0.5118, Adjusted R-squared:  0.5104 
## F-statistic: 355.1 on 4 and 1355 DF,  p-value: < 2.2e-16
summary(mod_cov2<-lm(Heroism ~ Risk_dummy * Help_dummy + Attitude_scale , data = Psych))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy + Attitude_scale, 
##     data = Psych)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.0586 -0.6244  0.1249  0.7649  3.0070 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            4.70130    0.07066  66.531   <2e-16 ***
## Risk_dummy             0.33675    0.13712   2.456   0.0147 *  
## Help_dummy             0.09748    0.13795   0.707   0.4804    
## Attitude_scale         0.89116    0.07187  12.399   <2e-16 ***
## Risk_dummy:Help_dummy -0.04083    0.27335  -0.149   0.8814    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.12 on 265 degrees of freedom
## Multiple R-squared:  0.3996, Adjusted R-squared:  0.3905 
## F-statistic: 44.09 on 4 and 265 DF,  p-value: < 2.2e-16
summary(mod_cov2<-lm(Heroism ~ Risk_dummy * Help_dummy + Attitude_scale , data = Weld))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy + Attitude_scale, 
##     data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.6889 -0.5874  0.2206  0.8196  2.7121 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            4.65480    0.06998  66.513  < 2e-16 ***
## Risk_dummy             0.70622    0.14997   4.709 3.99e-06 ***
## Help_dummy             0.19772    0.14166   1.396    0.164    
## Attitude_scale         0.83309    0.09578   8.698 3.43e-16 ***
## Risk_dummy:Help_dummy  0.24516    0.27974   0.876    0.382    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.157 on 269 degrees of freedom
## Multiple R-squared:  0.3651, Adjusted R-squared:  0.3557 
## F-statistic: 38.68 on 4 and 269 DF,  p-value: < 2.2e-16

The Motivation manipulation is nullified when accounting for attitude: the manipulation may first and foremost influence attitude, and once that share of the variance is removed, no effect on heroism remains.

Had we designed a Motivation manipulation that did not influence attitude, we might not have found these effects…

Credibility

Our effects may be conditioned on the credibility of our manipulation. We can model them using credibility as a covariate.
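
As with Attitude, a quick (non-pre-registered) check of whether the manipulations shift perceived credibility, using the same variables as the models below:

# Do the Risk and Motivation manipulations shift perceived credibility?
summary(lm(scale(Credibility) ~ Risk_dummy * Help_dummy, data = Set))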

summary(mod_cov2<-lm(Heroism ~ Risk_dummy * Help_dummy + scale(Credibility) , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy + scale(Credibility), 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.6315 -0.8353  0.0726  0.9848  3.2553 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            5.02064    0.03831 131.059  < 2e-16 ***
## Risk_dummy             0.23374    0.08036   2.909  0.00369 ** 
## Help_dummy             0.02988    0.07813   0.382  0.70221    
## scale(Credibility)     0.41192    0.04093  10.064  < 2e-16 ***
## Risk_dummy:Help_dummy -0.14148    0.15327  -0.923  0.35613    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.413 on 1355 degrees of freedom
## Multiple R-squared:  0.0975, Adjusted R-squared:  0.09484 
## F-statistic:  36.6 on 4 and 1355 DF,  p-value: < 2.2e-16
summary(mod_cov2<-lm(Heroism ~ Brave_scale * Selfless_scale + scale(Credibility) , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Brave_scale * Selfless_scale + scale(Credibility), 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.3242 -0.4410  0.1551  0.6758  4.0543 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 4.89365    0.03419 143.117  < 2e-16 ***
## Brave_scale                 0.54646    0.04495  12.158  < 2e-16 ***
## Selfless_scale              0.54633    0.04150  13.165  < 2e-16 ***
## scale(Credibility)          0.17203    0.03224   5.335 1.12e-07 ***
## Brave_scale:Selfless_scale  0.19157    0.02277   8.412  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.13 on 1355 degrees of freedom
## Multiple R-squared:  0.4226, Adjusted R-squared:  0.4209 
## F-statistic:   248 on 4 and 1355 DF,  p-value: < 2.2e-16
summary(mod_cov2<-lm(Heroism ~ Danger_scale * Helpfulness_scale + scale(Credibility) , data = Set))
## 
## Call:
## lm(formula = Heroism ~ Danger_scale * Helpfulness_scale + scale(Credibility), 
##     data = Set)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.1271 -0.5484  0.1199  0.7531  3.5143 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     5.00642    0.03290 152.172  < 2e-16 ***
## Danger_scale                    0.30532    0.03553   8.592  < 2e-16 ***
## Helpfulness_scale               0.71268    0.03573  19.946  < 2e-16 ***
## scale(Credibility)              0.22028    0.03225   6.830 1.28e-11 ***
## Danger_scale:Helpfulness_scale  0.03386    0.02607   1.299    0.194    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.137 on 1355 degrees of freedom
## Multiple R-squared:  0.4153, Adjusted R-squared:  0.4136 
## F-statistic: 240.6 on 4 and 1355 DF,  p-value: < 2.2e-16
summary(mod_cov2<-lm(Heroism ~ Risk_dummy * Help_dummy + scale(Credibility) , data = Psych))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy + scale(Credibility), 
##     data = Psych)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.1603 -0.6325  0.1036  0.9199  2.8335 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)             4.4714     0.0825  54.199  < 2e-16 ***
## Risk_dummy              0.2938     0.1724   1.704   0.0896 .  
## Help_dummy              0.1723     0.1701   1.012   0.3123    
## scale(Credibility)      0.4045     0.0888   4.555 7.99e-06 ***
## Risk_dummy:Help_dummy  -0.2952     0.3300  -0.895   0.3718    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.355 on 265 degrees of freedom
## Multiple R-squared:  0.1202, Adjusted R-squared:  0.1069 
## F-statistic: 9.049 on 4 and 265 DF,  p-value: 7.262e-07
summary(mod_cov2<-lm(Heroism ~ Risk_dummy * Help_dummy + scale(Credibility) , data = Weld))
## 
## Call:
## lm(formula = Heroism ~ Risk_dummy * Help_dummy + scale(Credibility), 
##     data = Weld)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.1640 -0.5734  0.0314  0.9641  2.7653 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)            4.68262    0.07599  61.618  < 2e-16 ***
## Risk_dummy             0.96808    0.15826   6.117 3.35e-09 ***
## Help_dummy             0.37062    0.15208   2.437   0.0155 *  
## scale(Credibility)     0.37849    0.07949   4.761 3.15e-06 ***
## Risk_dummy:Help_dummy  0.07762    0.30474   0.255   0.7991    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.258 on 269 degrees of freedom
## Multiple R-squared:  0.2498, Adjusted R-squared:  0.2387 
## F-statistic:  22.4 on 4 and 269 DF,  p-value: 5.608e-16

The Motivation manipulation is likewise nullified when accounting for credibility: the manipulation may first and foremost influence credibility, and once that share of the variance is removed, no effect on heroism remains.

Had we designed a Motivation manipulation that did not influence credibility, we might not have found these effects…

Mediation analyses

We just observed that including credibility or attitude as covariates in the models estimating the effects of our manipulations resulted in null effects of the Motivation manipulation.

Given that our Motivation -> Heroism effect is causal (it stems from an experimental manipulation), this does not mean that attitude acts as a confounder (i.e., it is not a halo effect); rather, it means that our effect of motivation might be mediated by attitude or credibility.

With all the caution we need to exercise with this approach (see Julia Rohrer's multiple rants), we can evaluate mediation in our models. I use a SEM approach here.

# 1. Specify the mediation model in lavaan syntax
med.model <- '
  # a path
  Attitude     ~ a*Help_dummy
  
  # b and c paths
  Heroism      ~ b*Attitude + cp*Help_dummy
  
  # indirect and total effects
  indirect     := a*b
  total        := cp + (a*b)
'

# 2. Fit the model
fit <- sem(med.model, data = Set, se = "bootstrap", bootstrap = 1000)

# 3. Inspect results
summary(fit, standardized = TRUE, fit.measures = TRUE,
        ci = TRUE, rsquare = TRUE)
## lavaan 0.6-19 ended normally after 1 iteration
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                         5
## 
##   Number of observations                          1360
## 
## Model Test User Model:
##                                                       
##   Test statistic                                 0.000
##   Degrees of freedom                                 0
## 
## Model Test Baseline Model:
## 
##   Test statistic                               841.183
##   Degrees of freedom                                 3
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000
##   Tucker-Lewis Index (TLI)                       1.000
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4211.338
##   Loglikelihood unrestricted model (H1)      -4211.338
##                                                       
##   Akaike (AIC)                                8432.675
##   Bayesian (BIC)                              8458.751
##   Sample-size adjusted Bayesian (SABIC)       8442.868
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.000
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.000
##   P-value H_0: RMSEA <= 0.050                       NA
##   P-value H_0: RMSEA >= 0.080                       NA
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.000
## 
## Parameter Estimates:
## 
##   Standard errors                            Bootstrap
##   Number of requested bootstrap draws             1000
##   Number of successful bootstrap draws             957
## 
## Regressions:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##   Attitude ~                                                            
##     Help_dmmy  (a)    0.231    0.066    3.524    0.000    0.100    0.353
##   Heroism ~                                                             
##     Attitude   (b)    0.844    0.024   35.378    0.000    0.797    0.889
##     Help_dmmy (cp)   -0.011    0.059   -0.187    0.852   -0.125    0.109
##    Std.lv  Std.all
##                   
##     0.231    0.097
##                   
##     0.844    0.676
##    -0.011   -0.004
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##    .Attitude          1.400    0.065   21.503    0.000    1.277    1.534
##    .Heroism           1.198    0.065   18.512    0.000    1.077    1.322
##    Std.lv  Std.all
##     1.400    0.991
##     1.198    0.544
## 
## R-Square:
##                    Estimate
##     Attitude          0.009
##     Heroism           0.456
## 
## Defined Parameters:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##     indirect          0.195    0.056    3.506    0.000    0.084    0.298
##     total             0.184    0.080    2.311    0.021    0.032    0.351
##    Std.lv  Std.all
##     0.195    0.066
##     0.184    0.062

In a bootstrapped SEM (1,000 requested draws, 957 successful), the Motivation manipulation (Help_dummy) significantly predicted Attitude, b = 0.231, SE = 0.066, z = 3.52, p < .001, and Attitude predicted Heroism, b = 0.844, SE = 0.024, z = 35.38, p < .001. The direct effect of the Motivation manipulation on Heroism was not significant, b = -0.011, SE = 0.059, p = .852. The indirect effect (a × b) was 0.195, SE = 0.056, 95% CI [0.084, 0.298], p < .001, consistent with full mediation. R² for Heroism was 0.456.
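
The coefficients and bootstrap percentile CIs reported above can also be pulled directly from the fitted object. A small convenience sketch, assuming fit is still the Attitude mediation model fitted in the chunk above:

# Bootstrap percentile CIs for the labelled paths and defined parameters
pe <- parameterEstimates(fit, boot.ci.type = "perc")
pe[pe$label %in% c("a", "b", "cp", "indirect", "total"),
   c("label", "est", "se", "z", "pvalue", "ci.lower", "ci.upper")]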

# 1. Specify the mediation model in lavaan syntax
med.model <- '
  # a path
  Credibility     ~ a*Help_dummy
  
  # b and c paths
  Heroism      ~ b*Credibility + cp*Help_dummy
  
  # indirect and total effects
  indirect     := a*b
  total        := cp + (a*b)
'

# 2. Fit the model
fit <- sem(med.model, data = Set, se = "bootstrap", bootstrap = 1000)

# 3. Inspect results
summary(fit, standardized = TRUE, fit.measures = TRUE,
        ci = TRUE, rsquare = TRUE)
## lavaan 0.6-19 ended normally after 1 iteration
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                         5
## 
##   Number of observations                          1360
## 
## Model Test User Model:
##                                                       
##   Test statistic                                 0.000
##   Degrees of freedom                                 0
## 
## Model Test Baseline Model:
## 
##   Test statistic                               178.808
##   Degrees of freedom                                 3
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000
##   Tucker-Lewis Index (TLI)                       1.000
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4878.571
##   Loglikelihood unrestricted model (H1)      -4878.571
##                                                       
##   Akaike (AIC)                                9767.142
##   Bayesian (BIC)                              9793.218
##   Sample-size adjusted Bayesian (SABIC)       9777.335
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.000
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.000
##   P-value H_0: RMSEA <= 0.050                       NA
##   P-value H_0: RMSEA >= 0.080                       NA
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.000
## 
## Parameter Estimates:
## 
##   Standard errors                            Bootstrap
##   Number of requested bootstrap draws             1000
##   Number of successful bootstrap draws             896
## 
## Regressions:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##   Credibility ~                                                         
##     Help_dmmy  (a)    0.571    0.082    6.936    0.000    0.409    0.724
##   Heroism ~                                                             
##     Crediblty  (b)    0.294    0.028   10.608    0.000    0.239    0.346
##     Help_dmmy (cp)    0.016    0.078    0.210    0.834   -0.135    0.173
##    Std.lv  Std.all
##                   
##     0.571    0.187
##                   
##     0.294    0.301
##     0.016    0.005
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##    .Credibility       2.235    0.078   28.802    0.000    2.072    2.395
##    .Heroism           2.002    0.079   25.437    0.000    1.832    2.149
##    Std.lv  Std.all
##     2.235    0.965
##     2.002    0.909
## 
## R-Square:
##                    Estimate
##     Credibility       0.035
##     Heroism           0.091
## 
## Defined Parameters:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##     indirect          0.168    0.029    5.848    0.000    0.114    0.223
##     total             0.184    0.080    2.300    0.021    0.037    0.356
##    Std.lv  Std.all
##     0.168    0.056
##     0.184    0.062

In a bootstrapped SEM (1,000 requested draws, 896 successful), the Motivation manipulation (Help_dummy) significantly predicted Credibility, b = 0.571, SE = 0.082, z = 6.94, p < .001, and Credibility predicted Heroism, b = 0.294, SE = 0.028, z = 10.61, p < .001. The direct effect of the Motivation manipulation on Heroism was not significant, b = 0.016, SE = 0.078, p = .834. The indirect effect (a × b) was 0.168, SE = 0.029, 95% CI [0.114, 0.223], p < .001, consistent with full mediation. R² for Heroism was 0.091.

# 1. Specify the mediation model in lavaan syntax
med.model <- '
  # a path
  Heroism     ~ a*Help_dummy
  
  # b and c paths
  Attitude      ~ b*Heroism + cp*Help_dummy
  
  # indirect and total effects
  indirect     := a*b
  total        := cp + (a*b)
'

# 2. Fit the model
fit <- sem(med.model, data = Set, se = "bootstrap", bootstrap = 1000)

# 3. Inspect results
summary(fit, standardized = TRUE, fit.measures = TRUE,
        ci = TRUE, rsquare = TRUE)
## lavaan 0.6-19 ended normally after 1 iteration
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                         5
## 
##   Number of observations                          1360
## 
## Model Test User Model:
##                                                       
##   Test statistic                                 0.000
##   Degrees of freedom                                 0
## 
## Model Test Baseline Model:
## 
##   Test statistic                               841.183
##   Degrees of freedom                                 3
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000
##   Tucker-Lewis Index (TLI)                       1.000
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4211.338
##   Loglikelihood unrestricted model (H1)      -4211.338
##                                                       
##   Akaike (AIC)                                8432.675
##   Bayesian (BIC)                              8458.751
##   Sample-size adjusted Bayesian (SABIC)       8442.868
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.000
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.000
##   P-value H_0: RMSEA <= 0.050                       NA
##   P-value H_0: RMSEA >= 0.080                       NA
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.000
## 
## Parameter Estimates:
## 
##   Standard errors                            Bootstrap
##   Number of requested bootstrap draws             1000
##   Number of successful bootstrap draws             938
## 
## Regressions:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##   Heroism ~                                                             
##     Help_dmmy  (a)    0.184    0.080    2.306    0.021    0.033    0.351
##   Attitude ~                                                            
##     Heroism    (b)    0.538    0.021   25.661    0.000    0.493    0.578
##     Help_dmmy (cp)    0.132    0.047    2.778    0.005    0.034    0.221
##    Std.lv  Std.all
##                   
##     0.184    0.062
##                   
##     0.538    0.672
##     0.132    0.055
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##    .Heroism           2.195    0.079   27.686    0.000    2.033    2.341
##    .Attitude          0.764    0.038   19.896    0.000    0.692    0.841
##    Std.lv  Std.all
##     2.195    0.996
##     0.764    0.541
## 
## R-Square:
##                    Estimate
##     Heroism           0.004
##     Attitude          0.459
## 
## Defined Parameters:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##     indirect          0.099    0.043    2.289    0.022    0.017    0.188
##     total             0.231    0.065    3.558    0.000    0.098    0.352
##    Std.lv  Std.all
##     0.099    0.042
##     0.231    0.097
library(lavaan)

med.model.cov <- '
  # structural regressions  
  Attitude ~ a*Help + d*Credibility  
  Heroism  ~ b*Attitude + cp*Help + e*Credibility  

  # indirect & total effects  
  indirect := a * b  
  total    := cp + (a * b)  
'

fit.cov <- sem(  
  med.model.cov,  
  data      = Set,  
  se        = "bootstrap",  
  bootstrap = 1000  
)

summary(fit.cov, standardized=TRUE, fit.measures=TRUE, ci=TRUE)
## lavaan 0.6-19 ended normally after 1 iteration
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                         7
## 
##   Number of observations                          1360
## 
## Model Test User Model:
##                                                       
##   Test statistic                                 0.000
##   Degrees of freedom                                 0
## 
## Model Test Baseline Model:
## 
##   Test statistic                               976.331
##   Degrees of freedom                                 5
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000
##   Tucker-Lewis Index (TLI)                       1.000
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4143.763
##   Loglikelihood unrestricted model (H1)             NA
##                                                       
##   Akaike (AIC)                                8301.527
##   Bayesian (BIC)                              8338.034
##   Sample-size adjusted Bayesian (SABIC)       8315.797
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.000
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.000
##   P-value H_0: RMSEA <= 0.050                       NA
##   P-value H_0: RMSEA >= 0.080                       NA
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.000
## 
## Parameter Estimates:
## 
##   Standard errors                            Bootstrap
##   Number of requested bootstrap draws             1000
##   Number of successful bootstrap draws            1000
## 
## Regressions:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##   Attitude ~                                                            
##     Help       (a)   -0.113    0.064   -1.780    0.075   -0.234    0.015
##     Crediblty  (d)    0.206    0.022    9.305    0.000    0.162    0.251
##   Heroism ~                                                             
##     Attitude   (b)    0.801    0.026   31.364    0.000    0.751    0.848
##     Help      (cp)    0.074    0.059    1.266    0.206   -0.045    0.188
##     Crediblty  (e)    0.128    0.022    5.722    0.000    0.084    0.172
##    Std.lv  Std.all
##                   
##    -0.113   -0.048
##     0.206    0.264
##                   
##     0.801    0.642
##     0.074    0.025
##     0.128    0.132
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##    .Attitude          1.305    0.062   21.124    0.000    1.188    1.425
##    .Heroism           1.164    0.062   18.671    0.000    1.044    1.286
##    Std.lv  Std.all
##     1.305    0.923
##     1.164    0.528
## 
## Defined Parameters:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##     indirect         -0.091    0.051   -1.774    0.076   -0.189    0.011
##     total            -0.016    0.078   -0.210    0.834   -0.172    0.136
##    Std.lv  Std.all
##    -0.091   -0.031
##    -0.016   -0.005
AIC(fit.cov)    # lower = better
## [1] 8301.527
BIC(fit.cov)
## [1] 8338.034
library(lavaan)

med.model.cov <- '
  # structural regressions  
  Heroism ~ a*Help + d*Credibility  
  Attitude  ~ b*Heroism + cp*Help + e*Credibility  

  # indirect & total effects  
  indirect := a * b  
  total    := cp + (a * b)  
'

fit.reverse.cov <- sem(  
  med.model.cov,  
  data      = Set,  
  se        = "bootstrap",  
  bootstrap = 1000  
)

summary(fit.reverse.cov, standardized=TRUE, fit.measures=TRUE, ci=TRUE)
## lavaan 0.6-19 ended normally after 1 iteration
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                         7
## 
##   Number of observations                          1360
## 
## Model Test User Model:
##                                                       
##   Test statistic                                 0.000
##   Degrees of freedom                                 0
## 
## Model Test Baseline Model:
## 
##   Test statistic                               976.331
##   Degrees of freedom                                 5
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    1.000
##   Tucker-Lewis Index (TLI)                       1.000
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4143.763
##   Loglikelihood unrestricted model (H1)             NA
##                                                       
##   Akaike (AIC)                                8301.527
##   Bayesian (BIC)                              8338.034
##   Sample-size adjusted Bayesian (SABIC)       8315.797
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.000
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.000
##   P-value H_0: RMSEA <= 0.050                       NA
##   P-value H_0: RMSEA >= 0.080                       NA
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.000
## 
## Parameter Estimates:
## 
##   Standard errors                            Bootstrap
##   Number of requested bootstrap draws             1000
##   Number of successful bootstrap draws            1000
## 
## Regressions:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##   Heroism ~                                                             
##     Help       (a)   -0.016    0.078   -0.210    0.834   -0.172    0.136
##     Crediblty  (d)    0.294    0.028   10.492    0.000    0.237    0.348
##   Attitude ~                                                            
##     Heroism    (b)    0.522    0.022   23.954    0.000    0.478    0.564
##     Help      (cp)   -0.105    0.048   -2.182    0.029   -0.197   -0.009
##     Crediblty  (e)    0.053    0.017    3.119    0.002    0.021    0.086
##    Std.lv  Std.all
##                   
##    -0.016   -0.005
##     0.294    0.301
##                   
##     0.522    0.652
##    -0.105   -0.044
##     0.053    0.068
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##    .Heroism           2.002    0.079   25.491    0.000    1.836    2.150
##    .Attitude          0.759    0.038   19.853    0.000    0.684    0.833
##    Std.lv  Std.all
##     2.002    0.909
##     0.759    0.537
## 
## Defined Parameters:
##                    Estimate  Std.Err  z-value  P(>|z|) ci.lower ci.upper
##     indirect         -0.009    0.041   -0.210    0.834   -0.094    0.070
##     total            -0.113    0.064   -1.779    0.075   -0.234    0.015
##    Std.lv  Std.all
##    -0.009   -0.004
##    -0.113   -0.048
AIC(fit.reverse.cov)    # lower = better
## [1] 8301.527
BIC(fit.reverse.cov)
## [1] 8338.034
anova(fit.cov, fit.reverse.cov)  # if nested, gives Δχ² test
## Warning: lavaan->lavTestLRT():  
##    some models have the same degrees of freedom
## 
## Chi-Squared Difference Test
## 
##                 Df    AIC  BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
## fit.cov          0 8301.5 8338     0                                    
## fit.reverse.cov  0 8301.5 8338     0          0     0       0

Further halo effect analyses

We can assess how the Risk manipulation influences perceptions of helpfulness and bravery, and conversely how the Motivation manipulation influences perceptions of bravery and exposure to danger. Let's just look at Cohen's d.
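
For a single contrast, the core call inside the loop below is one cohens_d() (presumably from the effectsize package, given the Cohens_d / CI_low columns used). A minimal standalone sketch:

# Effect of the Risk manipulation (R vs B) on perceived Danger
cohens_d(Danger ~ Risk, data = Set, pooled_sd = TRUE, ci = 0.95)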

outcomes <- c("Danger", "Helpfulness", "Selfless", "Brave")
predictors <- c("Risk", "Help")
# Ensure correct factor level for Risk
Set$Risk <- factor(Set$Risk, levels = c("R", "B"))

# Initialize dataframe
effect_sizes <- data.frame()

# Loop through all predictor-outcome combinations
for (outcome in outcomes) {
  for (predictor in predictors) {
    if (length(unique(Set[[predictor]])) == 2) {
      Set[[predictor]] <- as.factor(Set[[predictor]])
      
      # Get levels
      levels_pred <- levels(Set[[predictor]])
      
      # Compute group means
      group_means <- Set %>%
        group_by(!!sym(predictor)) %>%
        summarise(mean_outcome = mean(!!sym(outcome), na.rm = TRUE)) %>%
        pivot_wider(names_from = !!sym(predictor), values_from = mean_outcome, names_prefix = "Mean_")
      
      # Compute Cohen's d (positive if level[2] > level[1])
      d <- cohens_d(as.formula(paste(outcome, "~", predictor)), data = Set, pooled_sd = TRUE, ci = 0.95)
      
      # Build mean label
      mean_label <- paste0(
        levels_pred[1], " = ", round(group_means[[paste0("Mean_", levels_pred[1])]], 2), ", ",
        levels_pred[2], " = ", round(group_means[[paste0("Mean_", levels_pred[2])]], 2)
      )
      
      # Store results
      effect_sizes <- bind_rows(effect_sizes, tibble(
        Effect = paste0(predictor, " → ", outcome),
        Cohen_d = d$Cohens_d,
        CI = paste0("[", round(d$CI_low, 2), ", ", round(d$CI_high, 2), "]"),
        Group = predictor,
        Group_Means = mean_label
      ))
    }
  }
}

# Format table
effect_sizes_gt <- effect_sizes %>%
  gt(groupname_col = "Group") %>%
  fmt_number(
    columns = c("Cohen_d"),
    decimals = 2
  ) %>%
  tab_header(
    title = "Effect Sizes (Cohen's d)"
  ) %>%
  cols_label(
    Effect = "Effect",
    Cohen_d = "Cohen's d",
    CI = "95% CI",
    Group_Means = "Group Means"
  )

effect_sizes_gt
Effect Sizes (Cohen's d)

Effect                 Cohen's d   95% CI          Group Means
Risk
  Risk → Danger           0.99     [0.87, 1.1]     R = 6.28, B = 5.11
  Risk → Helpfulness      0.25     [0.15, 0.36]    R = 6.1, B = 5.81
  Risk → Selfless         0.17     [0.07, 0.28]    R = 5.61, B = 5.38
  Risk → Brave            0.28     [0.18, 0.39]    R = 5.98, B = 5.62
Help
  Help → Danger           0.00     [-0.1, 0.11]    H = 5.7, S = 5.7
  Help → Helpfulness      0.27     [0.17, 0.38]    H = 6.11, S = 5.8
  Help → Selfless         0.26     [0.15, 0.37]    H = 5.67, S = 5.32
  Help → Brave            0.18     [0.07, 0.29]    H = 5.92, S = 5.69

Analysis of Heroism scores in the Non-Hero condition (Bored and self-centered)

What does heroism in each occupation look like when we attempt to frame them as bored and self-centered?

NonHeroes <- subset(Set, Set$Risk == "B" & Set$Help == "S")

# 1. Summarise mean and SD of Heroism per occupation
df_summary <- NonHeroes %>%
  group_by(Job) %>%
  summarize(
    mean_score = mean(Heroism, na.rm = TRUE),
    sd_score   = sd(Heroism, na.rm = TRUE),
    .groups = "drop"
  )

# 2. Plot histograms of Heroism for each occupation, annotated with mean and SD
ggplot(NonHeroes, aes(x = Heroism)) +
  geom_histogram(aes(fill = after_stat(count)),
                 binwidth = 1,
                 color = "black", show.legend = FALSE) +
  facet_grid( ~ Job, scales = "free") +
  scale_fill_gradientn(
    colours = brewer.pal(9, "YlOrBr"),
    name = "Count"
  ) +
  labs(
    title = "Histograms of Heroism by Occupation",
    x = "Heroism score",
    y = "Count"
  ) +
  # Annotate each facet with the mean and standard deviation
  geom_text(data = df_summary,
            aes(x = 7, y = Inf,
                label = paste0("Mean = ", round(mean_score, 2),
                               "\nSD = ", round(sd_score, 2))),
            vjust = 1.5, hjust = 1.1, size = 3) +
  theme_classic() +
  theme(panel.grid.major.y = element_line(linewidth = 0.5),
        panel.grid.minor.y = element_line(linewidth = 0.5))