Introduction

Recently, entire occupations have been granted the label of “Heroes”. For example, during the recent COVID-19 pandemic, as well as in previous health crises (such as the SARS and H1N1 outbreaks), healthcare workers were applauded – sometimes literally – as ‘heroes’. Military personnel in the US and the UK are also often called ‘heroes’, as are police officers, firefighters, and other first responders. But how malleable is this perception of heroism? Can public perception be shifted toward recognising “heroism” in occupations not typically perceived as heroic (e.g., psychiatrists)?

In this study, we tested the effect of heroism perception across six occupations on impunity. More specifically, we evaluated whether perceived heroism across occupations correlated with the acceptability of regulations violations. We hypothesised:

Perception of heroism is positively associated with the acceptability of regulations violations

This was tested with two types of measures of the acceptability of regulations violations: a context-independent measure (e.g., ‘Journalists can do their job better when they’re not forced to follow standard procedures’), and a context-dependent measure (specific dilemmas describing workers who broke the rules).

In addition to this main hypothesis, we tested:

Perception of Villains is negatively associated with the acceptability of regulations violations

That is, we expect people to be more severe toward ‘villains’ who break the rules.

and

Context-dependent and context-independent measures of the acceptability of regulations violations should correlate, r > .3, within each occupation

This latter hypothesis was intended to assess whether future studies should include the context-dependent measure, which is very specific to the target occupation and, as such, might be harder to generalise from.
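
For concreteness, here is a minimal sketch of how H3 could be checked once the data are reshaped into the long format used later in this report (the data frame long_df, with one row per participant, occupation, and measure type, is built in the data-cleaning section below):

# Sketch for H3, assuming long_df (built in the cleaning section below) with
# columns Part, Context, Measure ("Mean_Spec"/"Mean_Gen") and Mean.
library(dplyr)
library(tidyr)

long_df %>%
  pivot_wider(id_cols = c(Part, Context), names_from = Measure, values_from = Mean) %>%
  group_by(Context) %>%
  summarise(r = cor(Mean_Spec, Mean_Gen, use = "complete.obs"),
            meets_criterion = r > .3)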

This study was registered: https://osf.io/2xtn8

Loading Packages

if(!require("dplyr")) install.packages("dplyr")
if(!require("tidyr")) install.packages("tidyr")
if(!require("stringr")) install.packages("stringr")
if(!require("ggplot2")) install.packages("ggplot2")
if(!require("lmerTest")) install.packages("lmerTest")
if(!require("emmeans")) install.packages("emmeans")
if(!require("data.table")) install.packages("data.table")
if(!require("PerformanceAnalytics")) install.packages("PerformanceAnalytics")
if(!require("interactions")) install.packages("interactions")
if(!require("robustlmm")) install.packages("robustlmm")
if(!require("car")) install.packages("car")
if(!require("effectsize")) install.packages("effectsize")
if(!require("RColorBrewer")) install.packages("RColorBrewer")
if(!require("effectsize")) install.packages("effectsize")
if(!require("report")) install.packages("report")
if(!require("sjPlot")) install.packages("sjPlot")

if(!require("purrr")) install.packages("purrr")
if(!require("gt")) install.packages("gt")
if(!require("broom")) install.packages("broom")

My environment

sessionInfo()
## R version 4.2.2 (2022-10-31)
## Platform: aarch64-apple-darwin20 (64-bit)
## Running under: macOS 14.7.1
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] broom_1.0.8                gt_1.0.0                  
##  [3] purrr_1.0.4                sjPlot_2.8.17             
##  [5] report_0.6.1               RColorBrewer_1.1-3        
##  [7] effectsize_1.0.0           car_3.1-3                 
##  [9] carData_3.0-5              robustlmm_3.3-1           
## [11] interactions_1.2.0         PerformanceAnalytics_2.0.8
## [13] xts_0.14.1                 zoo_1.8-14                
## [15] data.table_1.17.0          emmeans_1.11.1            
## [17] lmerTest_3.1-3             lme4_1.1-35.1             
## [19] Matrix_1.6-5               ggplot2_3.5.2             
## [21] stringr_1.5.1              tidyr_1.3.1               
## [23] dplyr_1.1.4               
## 
## loaded via a namespace (and not attached):
##  [1] nlme_3.1-164        insight_1.2.0       numDeriv_2016.8-1.1
##  [4] tools_4.2.2         backports_1.5.0     bslib_0.9.0        
##  [7] R6_2.6.1            sjlabelled_1.2.0    withr_3.0.2        
## [10] tidyselect_1.2.1    compiler_4.2.2      performance_0.13.0 
## [13] cli_3.6.5           xml2_1.3.8          sandwich_3.1-1     
## [16] bayestestR_0.15.3   sass_0.4.10         scales_1.4.0       
## [19] DEoptimR_1.1-3-1    mvtnorm_1.2-4       robustbase_0.99-2  
## [22] quadprog_1.5-8      digest_0.6.37       minqa_1.2.6        
## [25] rmarkdown_2.29      pkgconfig_2.0.3     htmltools_0.5.8.1  
## [28] parallelly_1.44.0   fastmap_1.2.0       rlang_1.1.6        
## [31] rstudioapi_0.17.1   jquerylib_0.1.4     farver_2.1.2       
## [34] generics_0.1.4      jsonlite_2.0.0      broom.mixed_0.2.9.6
## [37] magrittr_2.0.3      Formula_1.2-5       parameters_0.25.0  
## [40] Rcpp_1.0.14         abind_1.4-8         lifecycle_1.0.4    
## [43] fastGHQuad_1.0.1    furrr_0.3.1         stringi_1.8.7      
## [46] multcomp_1.4-28     yaml_2.3.10         MASS_7.3-60.0.1    
## [49] grid_4.2.2          parallel_4.2.2      listenv_0.9.1      
## [52] sjmisc_2.8.10       forcats_1.0.0       lattice_0.22-7     
## [55] ggeffects_2.2.1     splines_4.2.2       jtools_2.3.0       
## [58] pander_0.6.6        sjstats_0.19.0      knitr_1.50         
## [61] pillar_1.10.2       boot_1.3-31         estimability_1.5.1 
## [64] codetools_0.2-20    glue_1.8.0          evaluate_1.0.4     
## [67] vctrs_0.6.5         nloptr_2.0.3        gtable_0.3.6       
## [70] future_1.40.0       datawizard_1.0.2    cachem_1.1.0       
## [73] xfun_0.53           xtable_1.8-4        coda_0.19-4.1      
## [76] survival_3.8-3      tibble_3.2.1        globals_0.18.0     
## [79] TH.data_1.1-3

Loading data

Please adjust the path if you are running this script on your local machine.

df<-read.csv("~/Downloads/Data_Dilemma_Test_Sept2025.csv", comment.char="#")

Data-cleaning process – recoding responses and reshaping from wide to long format. Click ‘Show’ to see the details of the cleaning process.

df<-subset(df, df$commit != "No, I won't")
df<- subset(df, df$Attentive != "Failed")
df<-df[-c(1:2),]



likert_map <- c(
  "Strongly disagree"           = 1,
  "Moderately disagree"         = 2,
  "Slightly disagree"           = 3,
  "Neither disagree, nor agree" = 4,
  "Slightly agree"              = 5,
  "Moderately agree"            = 6,
  "Strongly agree"              = 7
)

df <- df %>%
  mutate(across(
    # only transform columns that actually contain Likert text
    where(~ any(. %in% names(likert_map))),
    ~ as.numeric(likert_map[as.character(.)])
  ))


AttentionCheck <- c("Spec_Villains_1", "GenVillWeld_3", "SpecVillSold_1")

df <- df[ , !(colnames(df) %in% AttentionCheck)]

df <- df %>%
  mutate(
    # ---- Specific group means ----
    Mean_Spec_F = rowMeans(select(., starts_with("SpecVillFire")), na.rm = TRUE),
    Mean_Gen_F  = rowMeans(select(., starts_with("GenVillFire")), na.rm = TRUE),
    
    Mean_Spec_S = rowMeans(select(., starts_with("SpecVillSold")), na.rm = TRUE),
    Mean_Gen_S  = rowMeans(select(., starts_with("GenVillSold")), na.rm = TRUE),
    
    Mean_Spec_P = rowMeans(select(., starts_with("SpecVillPsych")), na.rm = TRUE),
    Mean_Gen_P  = rowMeans(select(., starts_with("GenVillPsych")), na.rm = TRUE),
    
    Mean_Spec_J = rowMeans(select(., starts_with("Spec_Villains")), na.rm = TRUE),
    Mean_Gen_J  = rowMeans(select(., starts_with("General.Villains")), na.rm = TRUE),
    
    Mean_Spec_W = rowMeans(select(., starts_with("SpecVillWeld")), na.rm = TRUE),
    Mean_Gen_W  = rowMeans(select(., starts_with("GenVillWeld")), na.rm = TRUE),
    
    Mean_Spec_N = rowMeans(select(., starts_with("SpecVillNurses")), na.rm = TRUE),
    Mean_Gen_N  = rowMeans(select(., starts_with("GenVillNurses")), na.rm = TRUE),
    
    # ---- Overall category means ----
    Mean_General  = rowMeans(select(., starts_with("Gen", ignore.case = FALSE)), na.rm = TRUE),
    Mean_Specific = rowMeans(select(., starts_with("Spe")), na.rm = TRUE)
  )


colnames(df)[colnames(df) == "General.Villains_1"] <- "GenVillJournalist_1"
colnames(df)[colnames(df) == "General.Villains_2"] <- "GenVillJournalist_2"
colnames(df)[colnames(df) == "General.Villains_3"] <- "GenVillJournalist_3"
colnames(df)[colnames(df) == "Spec_Villains_2"] <- "SpecVillJournalist_2"
colnames(df)[colnames(df) == "Spec_Villains_3"] <- "SpecVillJournalist_3"
colnames(df)[colnames(df) == "Spec_Villains_4"] <- "SpecVillJournalist_4"


colnames(df)[colnames(df) == "Q244_1"] <- "Hero_F"
colnames(df)[colnames(df) == "Q244_2"] <- "Hero_N"
colnames(df)[colnames(df) == "Q244_3"] <- "Hero_S"
colnames(df)[colnames(df) == "Q244_4"] <- "Hero_P"
colnames(df)[colnames(df) == "Q244_5"] <- "Hero_W"
colnames(df)[colnames(df) == "Q244_6"] <- "Hero_J"

colnames(df)[colnames(df) == "Q245_1"] <- "Danger_F"
colnames(df)[colnames(df) == "Q245_2"] <- "Danger_N"
colnames(df)[colnames(df) == "Q245_3"] <- "Danger_S"
colnames(df)[colnames(df) == "Q245_4"] <- "Danger_P"
colnames(df)[colnames(df) == "Q245_5"] <- "Danger_W"
colnames(df)[colnames(df) == "Q245_6"] <- "Danger_J"

colnames(df)[colnames(df) == "Q246_1"] <- "Helpf_F"
colnames(df)[colnames(df) == "Q246_2"] <- "Helpf_N"
colnames(df)[colnames(df) == "Q246_3"] <- "Helpf_S"
colnames(df)[colnames(df) == "Q246_4"] <- "Helpf_P"
colnames(df)[colnames(df) == "Q246_5"] <- "Helpf_W"
colnames(df)[colnames(df) == "Q246_6"] <- "Helpf_J"

colnames(df)[colnames(df) == "Q247_1"] <- "Att_F"
colnames(df)[colnames(df) == "Q247_2"] <- "Att_N"
colnames(df)[colnames(df) == "Q247_3"] <- "Att_S"
colnames(df)[colnames(df) == "Q247_4"] <- "Att_P"
colnames(df)[colnames(df) == "Q247_5"] <- "Att_W"
colnames(df)[colnames(df) == "Q247_6"] <- "Att_J"

colnames(df)[colnames(df) == "Q248_1"] <- "Vill_F"
colnames(df)[colnames(df) == "Q248_2"] <- "Vill_N"
colnames(df)[colnames(df) == "Q248_3"] <- "Vill_S"
colnames(df)[colnames(df) == "Q248_4"] <- "Vill_P"
colnames(df)[colnames(df) == "Q248_5"] <- "Vill_W"
colnames(df)[colnames(df) == "Q248_6"] <- "Vill_J"



hero_map <- c(
  "Not at all heroes" = 1,
  "Very little"       = 2,
  "A little"          = 3,
  "Somewhat"          = 4,
  "Quite a bit"       = 5,
  "A lot"             = 6,
  "Very much heroes"  = 7
)

df <- df %>%
  mutate(across(
    65:70,
    ~ as.numeric(hero_map[as.character(.)])
  ))

vill_map <- c(
  "Not at all villains" = 1,
  "Very little"         = 2,
  "A little"            = 3,
  "Somewhat"            = 4,
  "Quite a bit"         = 5,
  "A lot"               = 6,
  "Very much villains"  = 7
)

df <- df %>%
  mutate(across(
    71:76,
    ~ as.numeric(vill_map[as.character(.)])
  ))

help_map <- c(
  "Not at all helpful" = 1,
  "Very little"        = 2,
  "A little"           = 3,
  "Somewhat"           = 4,
  "Quite a bit"        = 5,
  "A lot"              = 6,
  "Very much helpful"  = 7
)

df <- df %>%
  mutate(across(
    83:88,
    ~ as.numeric(help_map[as.character(.)])
  ))


danger_map <- c(
  "Not at all exposed to danger" = 1,
  "Very little"                  = 2,
  "A little"                     = 3,
  "Somewhat"                     = 4,
  "Quite a bit"                  = 5,
  "A lot"                        = 6,
  "Very much exposed to danger"  = 7
)

df <- df %>%
  mutate(across(
    77:82,
    ~ as.numeric(danger_map[as.character(.)])
  ))





Att_map <- c(
  "Very negative"     = 1,
  "Quite negative"    = 2,
  "Slightly negative" = 3,
  "Neutral"           = 4,
  "Slightly positive" = 5,
  "Quite positive"    = 6,
  "Very positive"     = 7
)


df <- df %>%
  mutate(across(
    89:94,
    ~ as.numeric(Att_map[as.character(.)])
  ))




library(dplyr)
library(tidyr)
library(stringr)
library(readr)   # for parse_number if needed
## Warning: package 'readr' was built under R version 4.2.3
# 1) Add participant ID
df$Part <- df$prolID

# 2) Long table for the main measures (Mean_Spec_* and Mean_Gen_*)
means_long <- df %>%
  pivot_longer(
    cols = matches("^Mean_(Spec|Gen)_[FSPJWN]$"),
    names_to   = c("Measure", "Context"),
    names_pattern = "^Mean_(Spec|Gen)_([FSPJWN])$",
    values_to  = "Mean"
  )

# 3) Long → wide for the contextual attributes:
#    Hero_*, Vill_*, Danger_*, Helpf_*, Att_*  (each has suffix _F/_S/_P/_J/_W/_N)
attrs_wide <- df %>%
  pivot_longer(
    cols = matches("^(Hero|Vill|Danger|Helpf|Att)_[FSPJWN]$"),
    names_to   = c("Attribute", "Context"),
    names_pattern = "^(Hero|Vill|Danger|Helpf|Att)_([FSPJWN])$",
    values_to  = "value"
  ) %>%
  pivot_wider(
    names_from = Attribute,
    values_from = value
  )

# 4) Join them by participant and context
long_df <- means_long %>%
  select(Part, Measure, Context, Mean) %>%
  left_join(
    attrs_wide %>% select(Part, Context, Hero, Vill, Danger, Helpf, Att),
    by = c("Part", "Context")
  ) %>%
  # (optional) keep your preferred ordering
  mutate(
    Measure = factor(Measure, levels = c("Spec", "Gen"), labels = c("Mean_Spec", "Mean_Gen")),
    Context = factor(Context, levels = c("F","S","P","J","W","N"))
  ) %>%
  arrange(Part, Context, Measure)

Confirmatory analyses (as registered)

Descriptives

We describe our sample: participants’ age and gender distribution.

paste0("Mean age in the sample is ", mean(as.numeric(as.character(df$age))), ", SD = ", sd(as.numeric(as.character(df$age))))
## [1] "Mean age in the sample is 46.2666666666667, SD = 15.0882206892479"
## Gender

df %>% group_by(gender) %>% summarise(N=n()) %>%
  ggplot(aes(x=gender,y=N,fill=gender))+
  geom_bar(stat = 'identity',color='black')+
  scale_y_continuous(labels = scales::comma_format(accuracy = 2))+
  geom_text(aes(label=N),vjust=-0.25,fontface='bold')+
  theme_bw()+
  theme(axis.text = element_text(color='black',face='bold'),
        axis.title = element_text(color='black',face='bold'),
        legend.text = element_text(color='black',face='bold'),
        legend.title = element_text(color='black',face='bold')) +
  ggtitle("Gender distribution")

Note that 2 participants failed attention checks and are excluded from the present report.

Main variables plot

How each occupation was evaluated in our sample (N = 300):

vars <- c("Hero", "Vill", "Danger", "Helpf", "Att")

df_long <- long_df %>%
  pivot_longer(all_of(vars), names_to = "Role", values_to = "Score") %>%
  mutate(
    Job   = Context,                     # to mirror your facet code
    Role  = factor(Role, levels = vars), # row order
    Score = as.numeric(Score)
  )

# If you want just one scale (e.g., Mean_Spec), uncomment:
# df_long <- df_long %>% filter(Measure == "Mean_Spec")

# ---- 2) Summary stats per facet (for the overlay text) ----
df_summary <- df_long %>%
  group_by(Role, Job) %>%
  summarise(
    mean_score = mean(Score, na.rm = TRUE),
    sd_score   = sd(Score, na.rm = TRUE),
    .groups = "drop"
  )

Job_labels <- c(
  "N" = "Nurses",
  "F" = "Firefighters",
  "W" = "Underwater welders",
  "S" = "Soldiers",
  "J" = "Journalists",
  "P" = "Psychiatrists"
)


# ---- 3) Plot: rows = Role, cols = Job (Context) ----
ggplot(df_long, aes(x = Score)) +
  geom_histogram(
    aes(fill = after_stat(count)),
    binwidth = 1,
    boundary = 0.5, closed = "left",
    color = "black", show.legend = FALSE,
    na.rm = TRUE
  ) +
  facet_grid(Role ~ Job, scales = "free_y", labeller = labeller(Job = Job_labels)) +
  scale_x_continuous(breaks = 1:7, limits = c(0.5, 7.5)) +
  scale_fill_gradientn(colours = brewer.pal(9, "YlOrBr"), name = "Count") +
  labs(
    title = "Histograms of Heroes, Villains, Danger, Helpfulness, Attitudes by Context",
    x = "Score",
    y = "Count"
  ) +
  geom_text(
    data = df_summary,
    aes(x = 7, y = Inf,
        label = paste0("Mean = ", round(mean_score, 2),
                       "\nSD = ", round(sd_score, 2))),
    vjust = 1.2, hjust = 1.05, size = 3
  ) +
  theme_classic() +
  theme(
    panel.grid.major.y = element_line(linewidth = 0.5),
    panel.grid.minor.y = element_line(linewidth = 0.5)
  )

Firefighters, as in our previous studies, epitomise the heroic occupation. They are seen as not at all villainous, as highly exposed to danger, and as highly helpful. In addition, attitudes toward them are overwhelmingly positive.

In contrast, journalists are not popular in the sample. Perceived as not very helpful and relatively safe, they are not heroised, and even appear to be closer to villains. Attitudes toward them are very mixed.

Soldiers are heroised, seen as exposed to danger, and as quite helpful. They are appreciated, and only very moderately vilified.

Nurses are heroised, seen as helpful, and as moderately exposed to danger. They are not vilified.

Psychiatrists and welders receive heterogeneous heroism evaluations. Welders are more strongly associated with danger. Both are seen as helpful. Attitudes are generally positive; welders in particular collect many neutral responses but very few negative ones.

Principal analyses

Let’s test our hypotheses:

  • H1a: Heroism perception is positively associated with context-independent acceptability of regulations violations

  • H1b: Heroism perception is positively associated with context-dependent acceptability of regulations violations

  • H2a: Villain perception is negatively associated with context-independent acceptability of regulations violations

  • H2b: Villain perception is negatively associated with context-dependent acceptability of regulations violations

  • H3: Context-dependent and context-independent measures are correlated within each occupation, r > .3

H1a: Positive association between perceived heroism and context-independent ARV


Because of the repeated-measures structure of the design, we need to account for the dependence of observations within participants; we therefore fit a mixed model with Participant as a random intercept.

From registration:

For H1a, H1b, H2a, H2b, the acceptability measure (either context dependent or independent)  will be used as the dependent variable in a mixed model with the target perception (Heroism or Villain) as predictor, and the occupation (categorical code) as a covariate. We will further allow a random intercept at the participant level to account for the repeated-measure design.

For H1a:
DV: Acceptability of regulations violations, context-independent
IV: Heroism perception
Covariate: Occupation
Random intercept: Participant

Model: lmer(ARV_general ~ Heroism + Occupation + (1 | Participant))

H1 Main registered model

We standardise each variable and fit our mixed model, with additional diagnostics: a QQ-plot for normality of residuals and a fitted-vs-residuals plot for homoscedasticity.

Gen_df <- subset(long_df, long_df$Measure == "Mean_Gen")

Gen_df$Mean_scaled <- scale(Gen_df$Mean)
Gen_df$Hero_s <- scale(Gen_df$Hero)
Gen_df$Villain_scaled <- scale(Gen_df$Vill)
Gen_df$Danger_scaled <- scale(Gen_df$Danger)
Gen_df$Helpf_scaled <- scale(Gen_df$Helpf)
Gen_df$Att_scaled <- scale(Gen_df$Att)

#summary(lmer(Gen_df$Mean ~ Gen_df$Context + (1 | Gen_df$Part)))

summary(m<-lmer(Gen_df$Mean ~ Gen_df$Hero_s + Gen_df$Context + (1 | Gen_df$Part)))
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Gen_df$Mean ~ Gen_df$Hero_s + Gen_df$Context + (1 | Gen_df$Part)
## 
## REML criterion at convergence: 6233.4
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.6592 -0.6369 -0.0426  0.6484  3.8120 
## 
## Random effects:
##  Groups      Name        Variance Std.Dev.
##  Gen_df$Part (Intercept) 0.9995   0.9997  
##  Residual                1.4030   1.1845  
## Number of obs: 1800, groups:  Gen_df$Part, 300
## 
## Fixed effects:
##                   Estimate Std. Error         df t value Pr(>|t|)    
## (Intercept)        3.88375    0.09807 1130.17089  39.603  < 2e-16 ***
## Gen_df$Hero_s      0.51911    0.04643 1788.35342  11.180  < 2e-16 ***
## Gen_df$ContextS   -1.01672    0.09909 1515.00703 -10.261  < 2e-16 ***
## Gen_df$ContextP   -0.35949    0.11534 1617.41888  -3.117  0.00186 ** 
## Gen_df$ContextJ   -0.11973    0.12713 1662.44272  -0.942  0.34645    
## Gen_df$ContextW   -0.31423    0.11347 1608.43707  -2.769  0.00568 ** 
## Gen_df$ContextN   -0.63345    0.09778 1503.80921  -6.478 1.25e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) Gn_$H_ Gn_$CS Gn_$CP Gn_$CJ Gn_$CW
## Gen_df$Hr_s -0.409                                   
## Gn_df$CntxS -0.570  0.218                            
## Gn_df$CntxP -0.636  0.545  0.528                     
## Gn_df$CntxJ -0.641  0.649  0.512  0.673              
## Gn_df$CntxW -0.634  0.523  0.530  0.642  0.664       
## Gn_df$CntxN -0.548  0.147  0.515  0.495  0.472  0.499
m <- lmer(Mean ~ Hero_s + Context + (1 | Part), data = Gen_df)
report(m)
## We fitted a linear mixed model (estimated using REML and nloptwrap optimizer)
## to predict Mean with Hero_s and Context (formula: Mean ~ Hero_s + Context). The
## model included Part as random effect (formula: ~1 | Part). The model's total
## explanatory power is substantial (conditional R2 = 0.49) and the part related
## to the fixed effects alone (marginal R2) is of 0.12. The model's intercept,
## corresponding to Hero_s = 0 and Context = F, is at 3.88 (95% CI [3.69, 4.08],
## t(1791) = 39.60, p < .001). Within this model:
## 
##   - The effect of Hero s is statistically significant and positive (beta = 0.52,
## 95% CI [0.43, 0.61], t(1791) = 11.18, p < .001; Std. beta = 0.31, 95% CI [0.26,
## 0.37])
##   - The effect of Context [S] is statistically significant and negative (beta =
## -1.02, 95% CI [-1.21, -0.82], t(1791) = -10.26, p < .001; Std. beta = -0.62,
## 95% CI [-0.73, -0.50])
##   - The effect of Context [P] is statistically significant and negative (beta =
## -0.36, 95% CI [-0.59, -0.13], t(1791) = -3.12, p = 0.002; Std. beta = -0.22,
## 95% CI [-0.36, -0.08])
##   - The effect of Context [J] is statistically non-significant and negative (beta
## = -0.12, 95% CI [-0.37, 0.13], t(1791) = -0.94, p = 0.346; Std. beta = -0.07,
## 95% CI [-0.22, 0.08])
##   - The effect of Context [W] is statistically significant and negative (beta =
## -0.31, 95% CI [-0.54, -0.09], t(1791) = -2.77, p = 0.006; Std. beta = -0.19,
## 95% CI [-0.33, -0.06])
##   - The effect of Context [N] is statistically significant and negative (beta =
## -0.63, 95% CI [-0.83, -0.44], t(1791) = -6.48, p < .001; Std. beta = -0.38, 95%
## CI [-0.50, -0.27])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
confint(m)
## Computing profile confidence intervals ...
##                  2.5 %     97.5 %
## .sig01       0.9041304  1.1026180
## .sigma       1.1411067  1.2257789
## (Intercept)  3.6917484  4.0757500
## Hero_s       0.4282314  0.6099824
## ContextS    -1.2106718 -0.8227681
## ContextP    -0.5852549 -0.1337279
## ContextJ    -0.3685538  0.1290977
## ContextW    -0.5363302 -0.0921290
## ContextN    -0.8248473 -0.4420633
qqnorm(resid(m))
qqline(resid(m))

plot(fitted(m),residuals(m))
abline(h = 0)

anova(m)
## Type III Analysis of Variance Table with Satterthwaite's method
##         Sum Sq Mean Sq NumDF  DenDF F value    Pr(>F)    
## Hero_s  175.37 175.370     1 1788.3 124.994 < 2.2e-16 ***
## Context 197.06  39.413     5 1536.4  28.091 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Model Linearity

# Fitted values from your model
fitted_vals <- fitted(m)

# Plot observed values against fitted values
plot(fitted_vals, Gen_df$Mean,
     xlab = "Fitted Values",
     ylab = "Observed ARV",
     main = "Observed vs Fitted Values")
abline(0, 1, col = "blue", lty = 2)

As we can see from the outputs above, the relationship is approximately linear, and there is no evidence of non-normal residuals or heteroscedasticity. The model provides evidence that heroism positively predicts the Acceptability of Regulations Violations (ARV) across occupations. The slope is substantial: a 1 SD increase in perceived heroism corresponds to a 0.52-point increase in ARV on the 1-to-7 scale (std. β = 0.31, i.e. 0.31 SD of ARV per SD of heroism).

==> Strong positive association between perceived heroism and context-independent ARV
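
A quick sanity check on that standardised slope (a sketch: since Hero_s is already z-scored, the standardised beta is approximately the raw slope divided by the SD of the outcome):

# Rough check: with Hero_s already standardised, std. beta ≈ raw beta / SD(Mean).
fixef(m)["Hero_s"] / sd(Gen_df$Mean, na.rm = TRUE)
# should land close to the Std. beta (~0.31) reported above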


H1 Outliers analyses

Note on outlier management: as part of a sensitivity analysis, we compare our model output with a robust version of the model, fitted with robustlmm::rlmer. This chunk is excluded from evaluation when knitting the .Rmd because it takes a long time to run, but the output is pasted below. You can re-include it by setting eval = TRUE in the chunk options of the .Rmd document.

summary(robust_m<-rlmer(Mean ~ Hero_s + Context + (1 | Part), data = Gen_df))
compare(m, robust_m)
Robust linear mixed model fit by DAStau 
Formula: Mean ~ Hero_s + Context + (1 | Part) 
   Data: Gen_df 

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-2.9162 -0.5957 -0.0147  0.6427  4.2918 

Random effects:
 Groups   Name        Variance Std.Dev.
 Part     (Intercept) 1.035    1.018   
 Residual             1.350    1.162   
Number of obs: 1800, groups: Part, 300

Fixed effects:
            Estimate Std. Error t value
(Intercept)  3.83830    0.10004   38.37
Hero_s       0.52510    0.04692   11.19
ContextS    -1.03904    0.09971  -10.42
ContextP    -0.36254    0.11620   -3.12
ContextJ    -0.14467    0.12814   -1.13
ContextW    -0.30869    0.11430   -2.70
ContextN    -0.62961    0.09839   -6.40

Correlation of Fixed Effects:
         (Intr) Hero_s CntxtS CntxtP CntxtJ CntxtW
Hero_s   -0.405                                   
ContextS -0.563  0.218                            
ContextP -0.629  0.547  0.528                     
ContextJ -0.633  0.651  0.513  0.674              
ContextW -0.627  0.525  0.530  0.643  0.665       
ContextN -0.541  0.148  0.515  0.495  0.472  0.499

Robustness weights for the residuals: 
 1511 weights are ~= 1. The remaining 289 ones are summarized as
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  0.313   0.695   0.825   0.795   0.927   0.999 

Robustness weights for the random effects: 
 249 weights are ~= 1. The remaining 51 ones are summarized as
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  0.378   0.665   0.814   0.789   0.926   0.998 

Rho functions used for fitting:
  Residuals:
    eff: smoothed Huber (k = 1.345, s = 10) 
    sig: smoothed Huber, Proposal 2 (k = 1.345, s = 10) 
  Random Effects, variance component 1 (Part):
    eff: smoothed Huber (k = 1.345, s = 10) 
    vcp: smoothed Huber, Proposal 2 (k = 1.345, s = 10) 
                   m               robust_m                                      
Coef                                                                             
(Intercept)         3.884 (0.0981)  3.838 (0.1000)                               
Hero_s              0.519 (0.0464)  0.525 (0.0469)                               
ContextS           -1.017 (0.0991) -1.039 (0.0997)                               
ContextP           -0.359 (0.1153) -0.363 (0.1162)                               
ContextJ           -0.120 (0.1271) -0.145 (0.1281)                               
ContextW           -0.314 (0.1135) -0.309 (0.1143)                               
ContextN           -0.633 (0.0978) -0.630 (0.0984)                               
                                                                                 
VarComp                                                                          
(Intercept) | Part 1.00            1.02                                          
                                                                                 
sigma              1.18            1.16                                          
                                                                                 
REML               6233                                                          
rho.e                              smoothed Huber (k = 1.345, s = 10)            
rho.sigma.e                        smoothed Huber, Proposal 2 (k = 1.345, s = 10)
rho.b_1                            smoothed Huber (k = 1.345, s = 10)            
rho.sigma.b_1                      smoothed Huber, Proposal 2 (k = 1.345, s = 10)

No large discrepancy. This is encouraging: handling outliers with Huber-weighted estimation does not change any inference.
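
For a compact side-by-side view of the two sets of fixed effects, a sketch (it assumes the robust chunk above has been run so that robust_m exists, and that fixef() is available for the rlmer fit):

# Side-by-side fixed effects from the classical and robust fits (sketch).
round(cbind(classical = fixef(m), robust = fixef(robust_m)), 3)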


H1 Job Decomposition

We can explore how our variables are associated within each occupation. Are some occupations particularly sensitive to this heroism-ARV link?

paste0("Firefighter analysis")


FireRole<- subset(Gen_df, Gen_df$Context == "F")

#mod <-lmer(Heroes ~ Risk_dummy * Motiv_dummy + (1|ResponseId), data = df_Roles)
FireRole$Hero_s <- scale(FireRole$Hero_s)
summary(FireMod<-lm(Mean ~ Hero_s, data = FireRole))
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE, alternative = "two.sided")


paste0("Nurses analysis")

HCrole<- subset(Gen_df, Gen_df$Context == "N")
HCrole$Hero_s <- scale(HCrole$Hero_s)

summary(HCMod<-lm(Mean ~ Hero_s, data = HCrole))
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE, alternative = "two.sided")



paste0("Soldier analysis")

MilitaryRole<- subset(Gen_df, Gen_df$Context == "S")
MilitaryRole$Hero_s <- scale(MilitaryRole$Hero_s)

summary(MilitaryMod<-lm(Mean ~ Hero_s, data = MilitaryRole))
HCMil_typeIII <- car::Anova(MilitaryMod, type = "III")
eta_squared(HCMil_typeIII, partial = TRUE, alternative = "two.sided")

paste0("Journalists analysis")

JourRole<- subset(Gen_df, Gen_df$Context == "J")
JourRole$Hero_s <- scale(JourRole$Hero_s)

summary(JourMod<-lm(Mean ~ Hero_s, data = JourRole))
JourtypeIII <- car::Anova(JourMod, type = "III")
eta_squared(JourtypeIII, partial = TRUE, alternative = "two.sided")


paste0("Psy analysis")

PsyRole<- subset(Gen_df, Gen_df$Context == "P")
PsyRole$Hero_s <- scale(PsyRole$Hero_s)

summary(PsyMod<-lm(Mean ~ Hero_s, data = PsyRole))
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE, alternative = "two.sided")

paste0("Welders analysis")

WeldRole<- subset(Gen_df, Gen_df$Context == "W")
WeldRole$Hero_s <- scale(WeldRole$Hero_s)

summary(WeldMod<-lm(Mean ~ Hero_s, data = WeldRole))
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE, alternative = "two.sided")
# Make sure your data frame is named Gen_df and has columns: Context, Hero_s, Mean
# Contexts to analyze
contexts <- c("F","N","P","S","J","W")

# Helper: nice p-value formatting
fmt_p <- function(p) ifelse(is.na(p), NA_character_,
                            ifelse(p < .001, "< .001", sprintf("%.3f", p)))

# Loop over contexts, fit models, collect stats
results_tbl <- map_dfr(contexts, function(ctx) {
  df_ctx <- Gen_df %>%
    filter(Context == ctx) %>%
    select(Mean, Hero_s) %>%
    tidyr::drop_na(Mean, Hero_s) %>%
    mutate(Hero_s_z = as.numeric(scale(Hero_s)))

  # Linear model
  mod <- lm(Mean ~ Hero_s_z, data = df_ctx)

  # Coef table (grab the slope for Hero_s_z)
  coef_row <- tidy(mod) %>%
    filter(term == "Hero_s_z") %>%
    transmute(
      Context = ctx,
      Beta = estimate,
      t = statistic,
      p = p.value
    )

  # Type III ANOVA + partial eta^2
  a3 <- car::Anova(mod, type = "III")
  et <- effectsize::eta_squared(a3, partial = TRUE, alternative = "two.sided") %>%
    as.data.frame()

  # Pull partial eta^2 for the Hero_s_z effect
  pe2 <- et %>%
    dplyr::filter(Parameter == "Hero_s_z") %>%
    dplyr::pull(Eta2)

  coef_row %>%
    mutate(partial_eta2 = pe2)
})
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
# Present a clean HTML table
a_tab<-results_tbl %>%
  mutate(
    Beta = round(Beta, 3),
    t = round(t, 3),
    p = fmt_p(p),
    partial_eta2 = round(partial_eta2, 3)
  ) %>%
  gt() %>%
  tab_header(
    title = md("**Effect of heroism on General ARV within each occupation**")
  ) %>%
  cols_label(
    Context = "Context",
    Beta = "β (slope)",
    t = "t",
    p = "p",
    partial_eta2 = "partial η² (Type III)"
  ) %>%
  fmt_missing(everything(), missing_text = "—") %>%
  tab_options(table.font.size = px(14))
## Warning: Since gt v0.6.0 `fmt_missing()` is deprecated and will soon be removed.
## ℹ Use `sub_missing()` instead.
## This warning is displayed once every 8 hours.
a_tab
Effect of heroism on General ARV within each occupation
Context β (slope) t p partial η² (Type III)
F 0.315 3.460 < .001 0.039
N 0.185 1.995 0.047 0.013
P 0.268 3.072 0.002 0.031
S 0.673 7.552 < .001 0.161
J 0.710 8.015 < .001 0.177
W 0.240 2.831 0.005 0.026

As we can see, the effects are largest for journalists and soldiers, the most vilified occupations. These large effect sizes might be explained by these occupations varying along a wider villain-to-hero continuum than the others. All occupations show non-trivial effect sizes (roughly 3%), with the exception of nurses, which show a somewhat smaller effect (partial η² ≈ 1%).
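
To put these partial η² values on a more familiar scale: with a single predictor, η² equals R², so its square root recovers the absolute correlation (a sketch using the results_tbl built above):

# With one predictor, (partial) eta^2 = R^2, so sqrt() gives |r| (sketch).
results_tbl %>%
  mutate(approx_r = round(sqrt(partial_eta2), 2)) %>%
  select(Context, partial_eta2, approx_r)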


H1 Conclusion

Hypothesis 1a receives support. The effect is evident across and within all occupations; soldiers and journalists in particular show very large effect sizes. We can ask whether these effect sizes would remain substantial if we partialled out the share of variance explained by attitudes: the more positively we feel about a group, the more likely we should be to grant them freedom in their work. These analyses will be reported in the Appendix of the report.
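
As a preview of that appendix analysis, a minimal sketch adding the standardised attitude rating (Att_scaled, computed above) as a covariate to the H1a model; this is exploratory and not part of the registered model:

# Exploratory sketch (not registered): does the heroism slope survive
# adjusting for general attitudes toward the occupation?
m_att <- lmer(Mean ~ Hero_s + Att_scaled + Context + (1 | Part), data = Gen_df)
summary(m_att)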


H1b: Positive association between perceived heroism and context-dependent ARV


We repeat the analyses, again using the attribute ratings (manipulation check #1) as predictors, this time with the context-dependent ARV measure (dilemma responses) as the dependent variable:

H1b Main registered model

We standardise each variable and fit our mixed model, with additional diagnostics: a QQ-plot for normality of residuals and a fitted-vs-residuals plot for homoscedasticity.

Spec_df <- subset(long_df, long_df$Measure == "Mean_Spec")

Spec_df$Mean_scaled <- scale(Spec_df$Mean)
Spec_df$Hero_s <- scale(Spec_df$Hero)
Spec_df$Villain_scaled <- scale(Spec_df$Vill)
Spec_df$Danger_scaled <- scale(Spec_df$Danger)
Spec_df$Helpf_scaled <- scale(Spec_df$Helpf)
Spec_df$Att_scaled <- scale(Spec_df$Att)

#summary(lmer(Spec_df$Mean ~ Spec_df$Context + (1 | Spec_df$Part)))

summary(m<-lmer(Spec_df$Mean ~ Spec_df$Hero_s + Spec_df$Context + (1 | Spec_df$Part)))
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Spec_df$Mean ~ Spec_df$Hero_s + Spec_df$Context + (1 | Spec_df$Part)
## 
## REML criterion at convergence: 6732.7
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -3.11648 -0.66795  0.03268  0.68668  2.79295 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  Spec_df$Part (Intercept) 0.7561   0.8695  
##  Residual                 2.0085   1.4172  
## Number of obs: 1800, groups:  Spec_df$Part, 300
## 
## Fixed effects:
##                    Estimate Std. Error         df t value Pr(>|t|)    
## (Intercept)         3.71335    0.10642 1407.04552  34.892  < 2e-16 ***
## Spec_df$Hero_s      0.43855    0.05318 1758.29635   8.247 3.15e-16 ***
## Spec_df$ContextS    1.23254    0.11832 1522.55700  10.417  < 2e-16 ***
## Spec_df$ContextP    0.82145    0.13628 1659.96121   6.028 2.05e-09 ***
## Spec_df$ContextJ   -0.91733    0.14940 1715.90368  -6.140 1.02e-09 ***
## Spec_df$ContextW    1.32168    0.13420 1648.27844   9.848  < 2e-16 ***
## Spec_df$ContextN    0.04712    0.11688 1507.31781   0.403    0.687    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) Sp_$H_ Sp_$CS Sp_$CP Sp_$CJ Sp_$CW
## Spc_df$Hr_s -0.432                                   
## Spc_df$CntS -0.622  0.209                            
## Spc_df$CntP -0.690  0.528  0.525                     
## Spc_df$CntJ -0.694  0.633  0.511  0.663              
## Spc_df$CntW -0.687  0.507  0.527  0.634  0.654       
## Spc_df$CntN -0.599  0.141  0.514  0.495  0.473  0.498
m <- lmer(Mean ~ Hero_s + Context + (1 | Part), data = Spec_df)
report(m)
## We fitted a linear mixed model (estimated using REML and nloptwrap optimizer)
## to predict Mean with Hero_s and Context (formula: Mean ~ Hero_s + Context). The
## model included Part as random effect (formula: ~1 | Part). The model's total
## explanatory power is substantial (conditional R2 = 0.45) and the part related
## to the fixed effects alone (marginal R2) is of 0.24. The model's intercept,
## corresponding to Hero_s = 0 and Context = F, is at 3.71 (95% CI [3.50, 3.92],
## t(1791) = 34.89, p < .001). Within this model:
## 
##   - The effect of Hero s is statistically significant and positive (beta = 0.44,
## 95% CI [0.33, 0.54], t(1791) = 8.25, p < .001; Std. beta = 0.23, 95% CI [0.18,
## 0.28])
##   - The effect of Context [S] is statistically significant and positive (beta =
## 1.23, 95% CI [1.00, 1.46], t(1791) = 10.42, p < .001; Std. beta = 0.65, 95% CI
## [0.52, 0.77])
##   - The effect of Context [P] is statistically significant and positive (beta =
## 0.82, 95% CI [0.55, 1.09], t(1791) = 6.03, p < .001; Std. beta = 0.43, 95% CI
## [0.29, 0.57])
##   - The effect of Context [J] is statistically significant and negative (beta =
## -0.92, 95% CI [-1.21, -0.62], t(1791) = -6.14, p < .001; Std. beta = -0.48, 95%
## CI [-0.63, -0.33])
##   - The effect of Context [W] is statistically significant and positive (beta =
## 1.32, 95% CI [1.06, 1.58], t(1791) = 9.85, p < .001; Std. beta = 0.69, 95% CI
## [0.56, 0.83])
##   - The effect of Context [N] is statistically non-significant and positive (beta
## = 0.05, 95% CI [-0.18, 0.28], t(1791) = 0.40, p = 0.687; Std. beta = 0.02, 95%
## CI [-0.10, 0.14])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
confint(m)
## Computing profile confidence intervals ...
##                  2.5 %     97.5 %
## .sig01       0.7707927  0.9738769
## .sigma       1.3653354  1.4666472
## (Intercept)  3.5050457  3.9216894
## Hero_s       0.3344528  0.5426317
## ContextS     1.0009432  1.4641496
## ContextP     0.5547228  1.0882193
## ContextJ    -1.2097404 -0.6248889
## ContextW     1.0590187  1.5843857
## ContextN    -0.1816700  0.2759157
qqnorm(resid(m))
qqline(resid(m))

plot(fitted(m),residuals(m))
abline(h = 0)

anova(m)
## Type III Analysis of Variance Table with Satterthwaite's method
##          Sum Sq Mean Sq NumDF  DenDF F value    Pr(>F)    
## Hero_s   136.59  136.59     1 1758.3  68.008 3.151e-16 ***
## Context 1111.22  222.25     5 1551.4 110.652 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Model Linearity

# Fitted values from your model
fitted_vals <- fitted(m)

# Plot observed values against fitted values
plot(fitted_vals, Spec_df$Mean,
     xlab = "Fitted Values",
     ylab = "Observed ARV",
     main = "Observed vs Fitted Values")
abline(0, 1, col = "blue", lty = 2)

==> Strong positive association between perceived heroism and context-dependent ARV. Responses to our dilemmas were predicted by heroism scores.


H1b Outliers analyses

Note on outlier management: as part of a sensitivity analysis, we compare our model output with a robust version of the model, fitted with robustlmm::rlmer.

summary(robust_m<-rlmer(Mean ~ Hero_s + Context + (1 | Part), data = Spec_df))
## Robust linear mixed model fit by DAStau 
## Formula: Mean ~ Hero_s + Context + (1 | Part) 
##    Data: Spec_df 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.3079 -0.6412  0.0223  0.6308  2.9480 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  Part     (Intercept) 0.7562   0.8696  
##  Residual             2.0792   1.4419  
## Number of obs: 1800, groups: Part, 300
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  3.73901    0.11058   33.81
## Hero_s       0.45912    0.05535    8.30
## ContextS     1.27068    0.12346   10.29
## ContextP     0.84287    0.14211    5.93
## ContextJ    -0.96909    0.15574   -6.22
## ContextW     1.33001    0.13995    9.50
## ContextN     0.04686    0.12197    0.38
## 
## Correlation of Fixed Effects:
##          (Intr) Hero_s CntxtS CntxtP CntxtJ CntxtW
## Hero_s   -0.432                                   
## ContextS -0.624  0.208                            
## ContextP -0.692  0.527  0.525                     
## ContextJ -0.696  0.632  0.511  0.662              
## ContextW -0.690  0.506  0.527  0.633  0.654       
## ContextN -0.601  0.141  0.513  0.495  0.473  0.498
## 
## Robustness weights for the residuals: 
##  1476 weights are ~= 1. The remaining 324 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   0.407   0.728   0.878   0.834   0.960   0.999 
## 
## Robustness weights for the random effects: 
##  262 weights are ~= 1. The remaining 38 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   0.409   0.709   0.833   0.792   0.931   0.999 
## 
## Rho functions used for fitting:
##   Residuals:
##     eff: smoothed Huber (k = 1.345, s = 10) 
##     sig: smoothed Huber, Proposal 2 (k = 1.345, s = 10) 
##   Random Effects, variance component 1 (Part):
##     eff: smoothed Huber (k = 1.345, s = 10) 
##     vcp: smoothed Huber, Proposal 2 (k = 1.345, s = 10)
compare(m, robust_m)
##                    m               
## Coef                               
## (Intercept)         3.7133 (0.1064)
## Hero_s              0.4385 (0.0532)
## ContextS            1.2325 (0.1183)
## ContextP            0.8215 (0.1363)
## ContextJ           -0.9173 (0.1494)
## ContextW            1.3217 (0.1342)
## ContextN            0.0471 (0.1169)
##                                    
## VarComp                            
## (Intercept) | Part 0.87            
##                                    
## sigma              1.42            
##                                    
## REML               6733            
## rho.e                              
## rho.sigma.e                        
## rho.b_1                            
## rho.sigma.b_1                      
##                    robust_m                                      
## Coef                                                             
## (Intercept)         3.7390 (0.1106)                              
## Hero_s              0.4591 (0.0553)                              
## ContextS            1.2707 (0.1235)                              
## ContextP            0.8429 (0.1421)                              
## ContextJ           -0.9691 (0.1557)                              
## ContextW            1.3300 (0.1400)                              
## ContextN            0.0469 (0.1220)                              
##                                                                  
## VarComp                                                          
## (Intercept) | Part 0.87                                          
##                                                                  
## sigma              1.44                                          
##                                                                  
## REML                                                             
## rho.e              smoothed Huber (k = 1.345, s = 10)            
## rho.sigma.e        smoothed Huber, Proposal 2 (k = 1.345, s = 10)
## rho.b_1            smoothed Huber (k = 1.345, s = 10)            
## rho.sigma.b_1      smoothed Huber, Proposal 2 (k = 1.345, s = 10)

Again, the betas are very similar between the outlier-sensitive and outlier-robust models.


H1b Decomposition of the effects within job

We can explore how our variables are associated within each occupation.

paste0("Firefighter analysis")


FireRole<- subset(Spec_df, Spec_df$Context == "F")

#mod <-lmer(Heroes ~ Risk_dummy * Motiv_dummy + (1|ResponseId), data = df_Roles)
FireRole$Hero_s <- scale(FireRole$Hero_s)
summary(FireMod<-lm(Mean ~ Hero_s, data = FireRole))
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE)


paste0("Nurses analysis")

HCrole<- subset(Spec_df, Spec_df$Context == "N")
HCrole$Hero_s <- scale(HCrole$Hero_s)

summary(HCMod<-lm(Mean ~ Hero_s, data = HCrole))
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE)



paste0("Soldier analysis")

MilitaryRole<- subset(Spec_df, Spec_df$Context == "S")
MilitaryRole$Hero_s <- scale(MilitaryRole$Hero_s)

summary(MilitaryMod<-lm(Mean ~ Hero_s, data = MilitaryRole))
HCMil_typeIII <- car::Anova(MilitaryMod, type = "III")
eta_squared(HCMil_typeIII, partial = TRUE)

paste0("Journalists analysis")

JourRole<- subset(Spec_df, Spec_df$Context == "J")
JourRole$Hero_s <- scale(JourRole$Hero_s)

summary(JourMod<-lm(Mean ~ Hero_s, data = JourRole))
JourtypeIII <- car::Anova(JourMod, type = "III")
eta_squared(JourtypeIII, partial = TRUE)


paste0("Psy analysis")

PsyRole<- subset(Spec_df, Spec_df$Context == "P")
PsyRole$Hero_s <- scale(PsyRole$Hero_s)

summary(PsyMod<-lm(Mean ~ Hero_s, data = PsyRole))
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE)

paste0("Welders analysis")

WeldRole<- subset(Spec_df, Spec_df$Context == "W")
WeldRole$Hero_s <- scale(WeldRole$Hero_s)

summary(WeldMod<-lm(Mean ~ Hero_s, data = WeldRole))
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE)
# Loop over contexts, fit models, collect stats
results_tbl <- map_dfr(contexts, function(ctx) {
  df_ctx <- Spec_df %>%
    filter(Context == ctx) %>%
    select(Mean, Hero_s) %>%
    tidyr::drop_na(Mean, Hero_s) %>%
    mutate(Hero_s_z = as.numeric(scale(Hero_s)))

  # Linear model
  mod <- lm(Mean ~ Hero_s_z, data = df_ctx)

  # Coef table (grab the slope for Hero_s_z)
  coef_row <- tidy(mod) %>%
    filter(term == "Hero_s_z") %>%
    transmute(
      Context = ctx,
      Beta = estimate,
      t = statistic,
      p = p.value
    )

  # Type III ANOVA + partial eta^2
  a3 <- car::Anova(mod, type = "III")
  et <- effectsize::eta_squared(a3, partial = TRUE) %>%
    as.data.frame()

  # Pull partial eta^2 for the Hero_s_z effect
  pe2 <- et %>%
    dplyr::filter(Parameter == "Hero_s_z") %>%
    dplyr::pull(Eta2)

  coef_row %>%
    mutate(partial_eta2 = pe2)
})
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
# Present a clean HTML table
b_tab<-results_tbl %>%
  mutate(
    Beta = round(Beta, 3),
    t = round(t, 3),
    p = fmt_p(p),
    partial_eta2 = round(partial_eta2, 3)
  ) %>%
  gt() %>%
  tab_header(
    title = md("**Effect of heroism on Context dependent ARV (dilemmas) within each occupation**")
  ) %>%
  cols_label(
    Context = "Context",
    Beta = "β (slope)",
    t = "t",
    p = "p",
    partial_eta2 = "partial η² (Type III)"
  ) %>%
  fmt_missing(everything(), missing_text = "—") %>%
  tab_options(table.font.size = px(14))

b_tab
Effect of heroism on Context dependent ARV (dilemmas) within each occupation
Context β (slope) t p partial η² (Type III)
F 0.209 2.116 0.035 0.015
N 0.185 1.678 0.094 0.009
P 0.288 2.840 0.005 0.026
S 0.540 6.042 < .001 0.109
J 0.473 5.792 < .001 0.101
W 0.350 3.793 < .001 0.046

This time, nurses show no significant effect. Firefighters show a small effect of heroism on ARV. Psychiatrists and welders show small-to-medium effect sizes, and journalists and soldiers again show large associations.


H1b Conclusion

Hypothesis 1b receives support.


H2a: Villains vs context independent ARV


We now assess how ARV is associated with perceptions of occupations as villains. Note that in our sample, only soldiers and journalists show appreciable variability on this rating; for the other occupations, most participants agree that the group does not qualify as villainous (see Descriptives).
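
As a quick check of that variability claim, a sketch summarising the spread of villain ratings per occupation (using the Gen_df data built above):

# Spread of villain ratings by occupation (sketch).
Gen_df %>%
  group_by(Context) %>%
  summarise(mean_vill = mean(Vill, na.rm = TRUE),
            sd_vill   = sd(Vill, na.rm = TRUE))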

H2a Main registered model

We standardise each variable and fit our mixed model, with additional diagnostics: a QQ-plot for normality of residuals and a fitted-vs-residuals plot for homoscedasticity.

Gen_df <- subset(long_df, long_df$Measure == "Mean_Gen")

Gen_df$Mean_scaled <- scale(Gen_df$Mean)
Gen_df$Villain_scaled <- scale(Gen_df$Vill)
Gen_df$Danger_scaled <- scale(Gen_df$Danger)
Gen_df$Helpf_scaled <- scale(Gen_df$Helpf)
Gen_df$Att_scaled <- scale(Gen_df$Att)

#summary(lmer(Gen_df$Mean ~ Gen_df$Context + (1 | Gen_df$Part)))

summary(m<-lmer(Gen_df$Mean ~ Gen_df$Villain_scaled + Gen_df$Context + (1 | Gen_df$Part)))
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: 
## Gen_df$Mean ~ Gen_df$Villain_scaled + Gen_df$Context + (1 | Gen_df$Part)
## 
## REML criterion at convergence: 6271.9
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.6887 -0.6619 -0.0477  0.6148  3.6153 
## 
## Random effects:
##  Groups      Name        Variance Std.Dev.
##  Gen_df$Part (Intercept) 1.141    1.068   
##  Residual                1.408    1.187   
## Number of obs: 1800, groups:  Gen_df$Part, 300
## 
## Fixed effects:
##                         Estimate Std. Error         df t value Pr(>|t|)    
## (Intercept)              4.14665    0.09432  940.74832  43.962  < 2e-16 ***
## Gen_df$Villain_scaled   -0.37836    0.04077 1721.22945  -9.281  < 2e-16 ***
## Gen_df$ContextS         -1.00164    0.10075 1511.63770  -9.942  < 2e-16 ***
## Gen_df$ContextP         -0.90453    0.09838 1499.37006  -9.195  < 2e-16 ***
## Gen_df$ContextJ         -0.41908    0.11789 1576.91728  -3.555 0.000389 ***
## Gen_df$ContextW         -0.94624    0.09696 1491.55157  -9.759  < 2e-16 ***
## Gen_df$ContextN         -0.74952    0.09702 1491.89692  -7.726 2.03e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) Gn_$V_ Gn_$CS Gn_$CP Gn_$CJ Gn_$CW
## Gn_df$Vlln_  0.212                                   
## Gn_df$CntxS -0.552 -0.274                            
## Gn_df$CntxP -0.543 -0.173  0.521                     
## Gn_df$CntxJ -0.543 -0.570  0.551  0.503              
## Gn_df$CntxW -0.521 -0.035  0.490  0.498  0.431       
## Gn_df$CntxN -0.524 -0.050  0.494  0.500  0.439  0.501
m <- lmer(Mean ~ Villain_scaled + Context + (1 | Part), data = Gen_df)
report(m)
## We fitted a linear mixed model (estimated using REML and nloptwrap optimizer)
## to predict Mean with Villain_scaled and Context (formula: Mean ~ Villain_scaled
## + Context). The model included Part as random effect (formula: ~1 | Part). The
## model's total explanatory power is substantial (conditional R2 = 0.50) and the
## part related to the fixed effects alone (marginal R2) is of 0.09. The model's
## intercept, corresponding to Villain_scaled = 0 and Context = F, is at 4.15 (95%
## CI [3.96, 4.33], t(1791) = 43.96, p < .001). Within this model:
## 
##   - The effect of Villain scaled is statistically significant and negative (beta
## = -0.38, 95% CI [-0.46, -0.30], t(1791) = -9.28, p < .001; Std. beta = -0.23,
## 95% CI [-0.28, -0.18])
##   - The effect of Context [S] is statistically significant and negative (beta =
## -1.00, 95% CI [-1.20, -0.80], t(1791) = -9.94, p < .001; Std. beta = -0.61, 95%
## CI [-0.73, -0.49])
##   - The effect of Context [P] is statistically significant and negative (beta =
## -0.90, 95% CI [-1.10, -0.71], t(1791) = -9.19, p < .001; Std. beta = -0.55, 95%
## CI [-0.67, -0.43])
##   - The effect of Context [J] is statistically significant and negative (beta =
## -0.42, 95% CI [-0.65, -0.19], t(1791) = -3.55, p < .001; Std. beta = -0.25, 95%
## CI [-0.39, -0.11])
##   - The effect of Context [W] is statistically significant and negative (beta =
## -0.95, 95% CI [-1.14, -0.76], t(1791) = -9.76, p < .001; Std. beta = -0.57, 95%
## CI [-0.69, -0.46])
##   - The effect of Context [N] is statistically significant and negative (beta =
## -0.75, 95% CI [-0.94, -0.56], t(1791) = -7.73, p < .001; Std. beta = -0.45, 95%
## CI [-0.57, -0.34])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
confint(m)
## Computing profile confidence intervals ...
##                     2.5 %     97.5 %
## .sig01          0.9679921  1.1761600
## .sigma          1.1432219  1.2281383
## (Intercept)     3.9617522  4.3312662
## Villain_scaled -0.4585748 -0.2980488
## ContextS       -1.1990059 -0.8044199
## ContextP       -1.0971627 -0.7119805
## ContextJ       -0.6504152 -0.1880477
## ContextW       -1.1360294 -0.7564662
## ContextN       -0.9394409 -0.5596353
qqnorm(resid(m))
qqline(resid(m))

plot(fitted(m),residuals(m))
abline(h = 0)

anova(m)
## Type III Analysis of Variance Table with Satterthwaite's method
##                Sum Sq Mean Sq NumDF  DenDF F value    Pr(>F)    
## Villain_scaled 121.31 121.315     1 1721.2  86.138 < 2.2e-16 ***
## Context        227.62  45.525     5 1512.2  32.324 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Model Linearity

# Fitted values from your model
fitted_vals <- fitted(m)

# Plot observed values against fitted values
plot(fitted_vals, Gen_df$Mean,
     xlab = "Fitted Values",
     ylab = "Observed ARV",
     main = "Observed vs Fitted Values")
abline(0, 1, col = "blue", lty = 2)

==> Negative association between perceived villainy and context-independent ARV.


H2a Outliers analyses

Note on outlier management: as part of a sensitivity analysis, we compare our model output with a robust version of the model, fitted with robustlmm::rlmer.

summary(robust_m<-rlmer(Mean ~ Villain_scaled + Context + (1 | Part), data = Gen_df))
## Robust linear mixed model fit by DAStau 
## Formula: Mean ~ Villain_scaled + Context + (1 | Part) 
##    Data: Gen_df 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.9997 -0.6140 -0.0316  0.6195  4.3162 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  Part     (Intercept) 1.288    1.135   
##  Residual             1.350    1.162   
## Number of obs: 1800, groups: Part, 300
## 
## Fixed effects:
##                Estimate Std. Error t value
## (Intercept)     4.09657    0.09827   41.68
## Villain_scaled -0.37335    0.04118   -9.07
## ContextS       -1.01587    0.10122  -10.04
## ContextP       -0.90503    0.09881   -9.16
## ContextJ       -0.46744    0.11861   -3.94
## ContextW       -0.94098    0.09736   -9.66
## ContextN       -0.73807    0.09743   -7.58
## 
## Correlation of Fixed Effects:
##             (Intr) Vlln_s CntxtS CntxtP CntxtJ CntxtW
## Villan_scld  0.206                                   
## ContextS    -0.533 -0.275                            
## ContextP    -0.523 -0.174  0.521                     
## ContextJ    -0.524 -0.572  0.552  0.503              
## ContextW    -0.502 -0.035  0.490  0.498  0.430       
## ContextN    -0.505 -0.050  0.494  0.500  0.438  0.501
## 
## Robustness weights for the residuals: 
##  1504 weights are ~= 1. The remaining 296 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   0.312   0.691   0.844   0.802   0.935   0.999 
## 
## Robustness weights for the random effects: 
##  250 weights are ~= 1. The remaining 50 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   0.410   0.709   0.815   0.812   0.947   0.995 
## 
## Rho functions used for fitting:
##   Residuals:
##     eff: smoothed Huber (k = 1.345, s = 10) 
##     sig: smoothed Huber, Proposal 2 (k = 1.345, s = 10) 
##   Random Effects, variance component 1 (Part):
##     eff: smoothed Huber (k = 1.345, s = 10) 
##     vcp: smoothed Huber, Proposal 2 (k = 1.345, s = 10)
compare(m, robust_m)
##                    m              
## Coef                              
## (Intercept)         4.147 (0.0943)
## Villain_scaled     -0.378 (0.0408)
## ContextS           -1.002 (0.1008)
## ContextP           -0.905 (0.0984)
## ContextJ           -0.419 (0.1179)
## ContextW           -0.946 (0.0970)
## ContextN           -0.750 (0.0970)
##                                   
## VarComp                           
## (Intercept) | Part 1.07           
##                                   
## sigma              1.19           
##                                   
## REML               6272           
## rho.e                             
## rho.sigma.e                       
## rho.b_1                           
## rho.sigma.b_1                     
##                    robust_m                                      
## Coef                                                             
## (Intercept)         4.097 (0.0983)                               
## Villain_scaled     -0.373 (0.0412)                               
## ContextS           -1.016 (0.1012)                               
## ContextP           -0.905 (0.0988)                               
## ContextJ           -0.467 (0.1186)                               
## ContextW           -0.941 (0.0974)                               
## ContextN           -0.738 (0.0974)                               
##                                                                  
## VarComp                                                          
## (Intercept) | Part 1.13                                          
##                                                                  
## sigma              1.16                                          
##                                                                  
## REML                                                             
## rho.e              smoothed Huber (k = 1.345, s = 10)            
## rho.sigma.e        smoothed Huber, Proposal 2 (k = 1.345, s = 10)
## rho.b_1            smoothed Huber (k = 1.345, s = 10)            
## rho.sigma.b_1      smoothed Huber, Proposal 2 (k = 1.345, s = 10)

No large discrepancy between robust and non-robust models.


H2a Job Decomposition

We can explore how our variables are associated within each occupation.

paste0("Firefighter analysis")


FireRole <- subset(Gen_df, Gen_df$Context == "F")

# Re-standardise the (already scaled) villain rating within this occupation
FireRole$Villain_scaled <- scale(FireRole$Villain_scaled)
summary(FireMod <- lm(Mean ~ Villain_scaled, data = FireRole))
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE)


paste0("Nurses analysis")

HCrole<- subset(Gen_df, Gen_df$Context == "N")
HCrole$Villain_scaled <- scale(HCrole$Villain_scaled)

summary(HCMod<-lm(Mean ~ Villain_scaled, data = HCrole))
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE)



paste0("Soldier analysis")

MilitaryRole<- subset(Gen_df, Gen_df$Context == "S")
MilitaryRole$Villain_scaled <- scale(MilitaryRole$Villain_scaled)

summary(MilitaryMod<-lm(Mean ~ Villain_scaled, data = MilitaryRole))
HCMil_typeIII <- car::Anova(MilitaryMod, type = "III")
eta_squared(HCMil_typeIII, partial = TRUE)

paste0("Journalists analysis")

JourRole<- subset(Gen_df, Gen_df$Context == "J")

summary(JourMod<-lm(Mean ~ Villain_scaled, data = JourRole))
JourtypeIII <- car::Anova(JourMod, type = "III")
eta_squared(JourtypeIII, partial = TRUE)


paste0("Psy analysis")

PsyRole<- subset(Gen_df, Gen_df$Context == "P")

summary(PsyMod<-lm(Mean ~ Villain_scaled, data = PsyRole))
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE)

paste0("Welders analysis")

WeldRole<- subset(Gen_df, Gen_df$Context == "W")

summary(WeldMod<-lm(Mean ~ Villain_scaled, data = WeldRole))
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE)
# Loop over contexts, fit models, collect stats
results_tbl <- map_dfr(contexts, function(ctx) {
  df_ctx <- Gen_df %>%
    filter(Context == ctx) %>%
    select(Mean, Villain_scaled) %>%
    tidyr::drop_na(Mean, Villain_scaled) %>%
    mutate(Villain_scaled = as.numeric(scale(Villain_scaled)))

  # Linear model
  mod <- lm(Mean ~ Villain_scaled, data = df_ctx)

  # Coef table (grab the slope for Villain_scaled)
  coef_row <- tidy(mod) %>%
    filter(term == "Villain_scaled") %>%
    transmute(
      Context = ctx,
      Beta = estimate,
      t = statistic,
      p = p.value
    )

  # Type III ANOVA + partial eta^2
  a3 <- car::Anova(mod, type = "III")
  et <- effectsize::eta_squared(a3, partial = TRUE) %>%
    as.data.frame()

  # Pull partial eta^2 for the Villain_scaled effect
  pe2 <- et %>%
    dplyr::filter(Parameter == "Villain_scaled") %>%
    dplyr::pull(Eta2)

  coef_row %>%
    mutate(partial_eta2 = pe2)
})
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
# Present a clean HTML table
c_tab<-results_tbl %>%
  mutate(
    Beta = round(Beta, 3),
    t = round(t, 3),
    p = fmt_p(p),
    partial_eta2 = round(partial_eta2, 3)
  ) %>%
  gt() %>%
  tab_header(
    title = md("**Effect of Villain perception on Context independent ARV within each occupation**")
  ) %>%
  cols_label(
    Context = "Context",
    Beta = "β (slope)",
    t = "t",
    p = "p",
    partial_eta2 = "partial η² (Type III)"
  ) %>%
  fmt_missing(everything(), missing_text = "—") %>%
  tab_options(table.font.size = px(14))
c_tab
Effect of Villain perception on Context independent ARV within each occupation
Context β (slope) t p partial η² (Type III)
F 0.019 0.206 0.837 0.000
N 0.126 1.357 0.176 0.006
P -0.051 -0.570 0.569 0.001
S -0.348 -3.663 < .001 0.043
J -0.424 -4.481 < .001 0.063
W 0.071 0.830 0.407 0.002

Only soldiers and journalists (two somewhat vilified groups) show an effect of villain perception on context-independent ARV.
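
This within-occupation pattern suggests that the villain slope varies across occupations. A formal (non-registered) way to probe this would be to add a Villain_scaled × Context interaction to the registered mixed model; a minimal sketch, reusing Gen_df and the interactions package loaded above:

# Exploratory: does the villain slope differ across occupations?
m_int <- lmer(Mean ~ Villain_scaled * Context + (1 | Part), data = Gen_df)
anova(m_int)   # Type III test of the Villain_scaled:Context interaction

# Visualise the simple slopes per occupation
interactions::interact_plot(m_int, pred = Villain_scaled, modx = Context)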


H2a Conclusion

Hypothesis 2a receives support. Villain perception is associated with decreased acceptability of regulation violations: we are less in favour of granting any liberty to groups we hold to be villains. However, the within-occupation analyses suggest that only journalists and soldiers show this villain effect. This might be explained by the other occupations being quasi-unanimously perceived as “not at all villains”.
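
The “not at all villains” explanation can be checked descriptively; a minimal sketch summarising the raw villain ratings (the Vill column used to build Villain_scaled) per occupation:

# Descriptive check: are villain ratings at the floor for most occupations?
floor_val <- min(Gen_df$Vill, na.rm = TRUE)   # lowest observed villain rating
Gen_df %>%
  group_by(Context) %>%
  summarise(
    mean_vill  = mean(Vill, na.rm = TRUE),
    sd_vill    = sd(Vill, na.rm = TRUE),
    prop_floor = mean(Vill == floor_val, na.rm = TRUE)   # share of ratings at the floor
  )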


H2b: Negative association between perceived villainy and context-dependent ARV

This tab repeats the preceding analyses, using the attribute ratings (Manipulation check #1) as predictors of the context-dependent (dilemma) measure:

H2b Main registered model

We standardise each variable and compute our mixed model (with additional diagnostics: a QQ-plot for normality of residuals and a fitted-vs-residuals plot for homoscedasticity).

Spec_df <- subset(long_df, long_df$Measure == "Mean_Spec")

Spec_df$Mean_scaled <- scale(Spec_df$Mean)
Spec_df$Villain_scaled <- scale(Spec_df$Vill)
Spec_df$Danger_scaled <- scale(Spec_df$Danger)
Spec_df$Helpf_scaled <- scale(Spec_df$Helpf)
Spec_df$Att_scaled <- scale(Spec_df$Att)

#summary(lmer(Spec_df$Mean ~ Spec_df$Context + (1 | Spec_df$Part)))

summary(m<-lmer(Spec_df$Mean ~ Spec_df$Villain_scaled + Spec_df$Context + (1 | Spec_df$Part)))
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Spec_df$Mean ~ Spec_df$Villain_scaled + Spec_df$Context + (1 |  
##     Spec_df$Part)
## 
## REML criterion at convergence: 6774.3
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.2155 -0.6549  0.0306  0.6729  2.8623 
## 
## Random effects:
##  Groups       Name        Variance Std.Dev.
##  Spec_df$Part (Intercept) 0.8246   0.9081  
##  Residual                 2.0381   1.4276  
## Number of obs: 1800, groups:  Spec_df$Part, 300
## 
## Fixed effects:
##                          Estimate Std. Error         df t value Pr(>|t|)    
## (Intercept)               3.97408    0.10043 1309.95858  39.572  < 2e-16 ***
## Spec_df$Villain_scaled   -0.24088    0.04753 1787.58451  -5.068 4.44e-07 ***
## Spec_df$ContextS          1.19196    0.12092 1524.07677   9.857  < 2e-16 ***
## Spec_df$ContextP          0.32817    0.11823 1505.66778   2.776  0.00558 ** 
## Spec_df$ContextJ         -1.29995    0.14041 1620.96743  -9.258  < 2e-16 ***
## Spec_df$ContextW          0.78119    0.11663 1493.93746   6.698 2.98e-11 ***
## Spec_df$ContextN         -0.06029    0.11670 1494.45546  -0.517  0.60549    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) Sp_$V_ Sp_$CS Sp_$CP Sp_$CJ Sp_$CW
## Spc_df$Vll_  0.232                                   
## Spc_df$CntS -0.621 -0.266                            
## Spc_df$CntP -0.611 -0.168  0.520                     
## Spc_df$CntJ -0.611 -0.558  0.548  0.503              
## Spc_df$CntW -0.588 -0.034  0.491  0.498  0.434       
## Spc_df$CntN -0.591 -0.048  0.494  0.500  0.442  0.501
m <- lmer(Mean ~ Villain_scaled + Context + (1 | Part), data = Spec_df)
report(m)
## We fitted a linear mixed model (estimated using REML and nloptwrap optimizer)
## to predict Mean with Villain_scaled and Context (formula: Mean ~ Villain_scaled
## + Context). The model included Part as random effect (formula: ~1 | Part). The
## model's total explanatory power is substantial (conditional R2 = 0.44) and the
## part related to the fixed effects alone (marginal R2) is of 0.22. The model's
## intercept, corresponding to Villain_scaled = 0 and Context = F, is at 3.97 (95%
## CI [3.78, 4.17], t(1791) = 39.57, p < .001). Within this model:
## 
##   - The effect of Villain scaled is statistically significant and negative (beta
## = -0.24, 95% CI [-0.33, -0.15], t(1791) = -5.07, p < .001; Std. beta = -0.13,
## 95% CI [-0.18, -0.08])
##   - The effect of Context [S] is statistically significant and positive (beta =
## 1.19, 95% CI [0.95, 1.43], t(1791) = 9.86, p < .001; Std. beta = 0.63, 95% CI
## [0.50, 0.75])
##   - The effect of Context [P] is statistically significant and positive (beta =
## 0.33, 95% CI [0.10, 0.56], t(1791) = 2.78, p = 0.006; Std. beta = 0.17, 95% CI
## [0.05, 0.29])
##   - The effect of Context [J] is statistically significant and negative (beta =
## -1.30, 95% CI [-1.58, -1.02], t(1791) = -9.26, p < .001; Std. beta = -0.68, 95%
## CI [-0.83, -0.54])
##   - The effect of Context [W] is statistically significant and positive (beta =
## 0.78, 95% CI [0.55, 1.01], t(1791) = 6.70, p < .001; Std. beta = 0.41, 95% CI
## [0.29, 0.53])
##   - The effect of Context [N] is statistically non-significant and negative (beta
## = -0.06, 95% CI [-0.29, 0.17], t(1791) = -0.52, p = 0.605; Std. beta = -0.03,
## 95% CI [-0.15, 0.09])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
confint(m)
## Computing profile confidence intervals ...
##                      2.5 %     97.5 %
## .sig01          0.80735540  1.0150256
## .sigma          1.37530000  1.4773795
## (Intercept)     3.77741328  4.1706389
## Villain_scaled -0.33404996 -0.1476887
## ContextS        0.95518846  1.4286255
## ContextP        0.09670069  0.5595836
## ContextJ       -1.57500851 -1.0250779
## ContextW        0.55289147  1.0094762
## ContextN       -0.28872866  0.1681278
qqnorm(resid(m))
qqline(resid(m))

plot(fitted(m),residuals(m))
abline(h = 0)

anova(m)
## Type III Analysis of Variance Table with Satterthwaite's method
##                Sum Sq Mean Sq NumDF  DenDF F value    Pr(>F)    
## Villain_scaled  52.35  52.346     1 1787.6  25.684 4.437e-07 ***
## Context        949.42 189.884     5 1524.7  93.169 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Model Linearity

# Fitted values from your model
fitted_vals <- fitted(m)

# Plot observed values against fitted values
plot(fitted_vals, Spec_df$Mean,
     xlab = "Fitted Values",
     ylab = "Observed ARV",
     main = "Observed vs Fitted Values")
abline(0, 1, col = "blue", lty = 2)

==> Negative association between perceived villainy and context-dependent ARV. There is, again, an effect of villain perception on the acceptability of violating regulations in the context of our ethical dilemmas: the more a group is perceived to be a villain, the less we tolerate that the group broke regulations.


H2b Outlier analyses

Note on outlier management: as part of a sensitivity analysis, we compare our model output with a robust version of the model, fitted with rlmer() from the robustlmm package.

summary(robust_m<-rlmer(Mean ~ Villain_scaled + Context + (1 | Part), data = Spec_df))
## Robust linear mixed model fit by DAStau 
## Formula: Mean ~ Villain_scaled + Context + (1 | Part) 
##    Data: Spec_df 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.4585 -0.6360  0.0451  0.6483  3.0419 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  Part     (Intercept) 0.8866   0.9416  
##  Residual             2.0964   1.4479  
## Number of obs: 1800, groups: Part, 300
## 
## Fixed effects:
##                Estimate Std. Error t value
## (Intercept)     4.01553    0.10512   38.20
## Villain_scaled -0.25004    0.04955   -5.05
## ContextS        1.23019    0.12580    9.78
## ContextP        0.33171    0.12300    2.70
## ContextJ       -1.37362    0.14616   -9.40
## ContextW        0.77478    0.12132    6.39
## ContextN       -0.07787    0.12139   -0.64
## 
## Correlation of Fixed Effects:
##             (Intr) Vlln_s CntxtS CntxtP CntxtJ CntxtW
## Villan_scld  0.231                                   
## ContextS    -0.617 -0.267                            
## ContextP    -0.607 -0.168  0.520                     
## ContextJ    -0.608 -0.558  0.549  0.503              
## ContextW    -0.584 -0.034  0.491  0.498  0.434       
## ContextN    -0.587 -0.048  0.494  0.500  0.441  0.501
## 
## Robustness weights for the residuals: 
##  1494 weights are ~= 1. The remaining 306 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   0.389   0.725   0.862   0.825   0.944   0.999 
## 
## Robustness weights for the random effects: 
##  259 weights are ~= 1. The remaining 41 ones are summarized as
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   0.476   0.732   0.825   0.809   0.927   0.998 
## 
## Rho functions used for fitting:
##   Residuals:
##     eff: smoothed Huber (k = 1.345, s = 10) 
##     sig: smoothed Huber, Proposal 2 (k = 1.345, s = 10) 
##   Random Effects, variance component 1 (Part):
##     eff: smoothed Huber (k = 1.345, s = 10) 
##     vcp: smoothed Huber, Proposal 2 (k = 1.345, s = 10)
compare(m, robust_m)
##                    m               
## Coef                               
## (Intercept)         3.9741 (0.1004)
## Villain_scaled     -0.2409 (0.0475)
## ContextS            1.1920 (0.1209)
## ContextP            0.3282 (0.1182)
## ContextJ           -1.3000 (0.1404)
## ContextW            0.7812 (0.1166)
## ContextN           -0.0603 (0.1167)
##                                    
## VarComp                            
## (Intercept) | Part 0.908           
##                                    
## sigma              1.43            
##                                    
## REML               6774            
## rho.e                              
## rho.sigma.e                        
## rho.b_1                            
## rho.sigma.b_1                      
##                    robust_m                                      
## Coef                                                             
## (Intercept)         4.0155 (0.1051)                              
## Villain_scaled     -0.2500 (0.0496)                              
## ContextS            1.2302 (0.1258)                              
## ContextP            0.3317 (0.1230)                              
## ContextJ           -1.3736 (0.1462)                              
## ContextW            0.7748 (0.1213)                              
## ContextN           -0.0779 (0.1214)                              
##                                                                  
## VarComp                                                          
## (Intercept) | Part 0.942                                         
##                                                                  
## sigma              1.45                                          
##                                                                  
## REML                                                             
## rho.e              smoothed Huber (k = 1.345, s = 10)            
## rho.sigma.e        smoothed Huber, Proposal 2 (k = 1.345, s = 10)
## rho.b_1            smoothed Huber (k = 1.345, s = 10)            
## rho.sigma.b_1      smoothed Huber, Proposal 2 (k = 1.345, s = 10)

No large discrepancy. Overall, across all our analyses, robust alternatives to our registered mixed models do not provide evidence of influential cases.
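
Beyond the robust refits, participant-level influence diagnostics could be computed on the registered (non-robust) model; a minimal sketch, assuming the influence.ME package (not loaded above) is installed. Note that influence() refits the model once per participant, so this is slow:

# Optional: case-deletion diagnostics at the participant level (assumes influence.ME is installed)
library(influence.ME)
infl <- influence(m, group = "Part")        # refits the model leaving out one participant at a time
cd <- cooks.distance(infl)                  # one Cook's distance per participant
head(cd[order(cd, decreasing = TRUE), , drop = FALSE])   # most influential participants
plot(infl, which = "cook", sort = TRUE)     # Cook's distances, sorted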


H2b Decomposition of the effects within jobs

We can explore how our variables are associated within each occupation.

paste0("Firefighter analysis")


FireRole <- subset(Spec_df, Spec_df$Context == "F")

# Re-standardise the (already scaled) villain rating within this occupation
FireRole$Villain_scaled <- scale(FireRole$Villain_scaled)
summary(FireMod <- lm(Mean ~ Villain_scaled, data = FireRole))
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE)


paste0("Nurses analysis")

HCrole<- subset(Spec_df, Spec_df$Context == "N")
HCrole$Villain_scaled <- scale(HCrole$Villain_scaled)

summary(HCMod<-lm(Mean ~ Villain_scaled, data = HCrole))
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE)



paste0("Soldier analysis")

MilitaryRole<- subset(Spec_df, Spec_df$Context == "S")
MilitaryRole$Villain_scaled <- scale(MilitaryRole$Villain_scaled)

summary(MilitaryMod<-lm(Mean ~ Villain_scaled, data = MilitaryRole))
HCMil_typeIII <- car::Anova(MilitaryMod, type = "III")
eta_squared(HCMil_typeIII, partial = TRUE)

paste0("Journalists analysis")

JourRole<- subset(Spec_df, Spec_df$Context == "J")

summary(JourMod<-lm(Mean ~ Villain_scaled, data = JourRole))
JourtypeIII <- car::Anova(JourMod, type = "III")
eta_squared(JourtypeIII, partial = TRUE)


paste0("Psy analysis")

PsyRole<- subset(Spec_df, Spec_df$Context == "P")

summary(PsyMod<-lm(Mean ~ Villain_scaled, data = PsyRole))
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE)

paste0("Welders analysis")

WeldRole<- subset(Spec_df, Spec_df$Context == "W")

summary(WeldMod<-lm(Mean ~ Villain_scaled, data = WeldRole))
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE)

The loop below repeats these within-occupation analyses and collects the statistics in a single table.

# Loop over contexts, fit models, collect stats
results_tbl <- map_dfr(contexts, function(ctx) {
  df_ctx <- Spec_df %>%
    filter(Context == ctx) %>%
    select(Mean, Villain_scaled) %>%
    tidyr::drop_na(Mean, Villain_scaled) %>%
    mutate(Villain_scaled = as.numeric(scale(Villain_scaled)))

  # Linear model
  mod <- lm(Mean ~ Villain_scaled, data = df_ctx)

  # Coef table (grab the slope for Villain_scaled)
  coef_row <- tidy(mod) %>%
    filter(term == "Villain_scaled") %>%
    transmute(
      Context = ctx,
      Beta = estimate,
      t = statistic,
      p = p.value
    )

  # Type III ANOVA + partial eta^2
  a3 <- car::Anova(mod, type = "III")
  et <- effectsize::eta_squared(a3, partial = TRUE) %>%
    as.data.frame()

  # Pull partial eta^2 for the Villain_scaled effect
  pe2 <- et %>%
    dplyr::filter(Parameter == "Villain_scaled") %>%
    dplyr::pull(Eta2)

  coef_row %>%
    mutate(partial_eta2 = pe2)
})
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
## For one-way between subjects designs, partial eta squared is equivalent
##   to eta squared. Returning eta squared.
# Present a clean HTML table
d_tab<-results_tbl %>%
  mutate(
    Beta = round(Beta, 3),
    t = round(t, 3),
    p = fmt_p(p),
    partial_eta2 = round(partial_eta2, 3)
  ) %>%
  gt() %>%
  tab_header(
    title = md("**Effect of Villain perception on Context dependent ARV (dilemmas) within each occupation**")
  ) %>%
  cols_label(
    Context = "Context",
    Beta = "β (slope)",
    t = "t",
    p = "p",
    partial_eta2 = "partial η² (Type III)"
  ) %>%
  fmt_missing(everything(), missing_text = "—") %>%
  tab_options(table.font.size = px(14))

d_tab
Effect of Villain perception on Context dependent ARV (dilemmas) within each occupation
Context β (slope) t p partial η² (Type III)
F -0.048 -0.487 0.627 0.001
N 0.108 0.979 0.328 0.003
P -0.193 -1.883 0.061 0.012
S -0.439 -4.818 < .001 0.072
J -0.145 -1.690 0.092 0.009
W -0.017 -0.179 0.858 0.000

Looking within occupations, only soldiers show an association between villain perception and decreased tolerance for violating regulations.

H2b Conclusion

Hypothesis 2b receives support: villain perception is also associated with decreased acceptability of regulation violations on the context-dependent (dilemma) measure.


H3: Association between Context-dependent and Context-independent measures

Main minimal correlations

We want to ensure that, at a minimum, the context-dependent and context-independent measures share some variance: we registered a minimal correlation of .3 (i.e., 9% shared variance).

df_wide <- long_df %>%
  unite("Measure_Context", Measure, Context, sep = "_") %>%  # combine Measure + Context
  pivot_wider(
    id_cols = Part,
    names_from = Measure_Context,
    values_from = Mean,
    names_prefix = "Mean_"
  )

PerformanceAnalytics::chart.Correlation(df_wide[2:13])

# Packages used for the correlation/CI helper below
library(dplyr)
library(tidyr)
library(stringr)
library(purrr)
library(tibble)

# Compute correlations + CIs for a set of columns
corr_ci_table <- function(df, cols, method = c("pearson","spearman"),
                          use = "pairwise.complete.obs",
                          conf.level = 0.95) {
  method <- match.arg(method)
  X <- as.data.frame(df[, cols, drop = FALSE])
  stopifnot(all(map_lgl(X, is.numeric)))

  # correlation matrix
  R <- cor(X, use = use, method = method)

  # pairwise N for each (i, j)
  get_n <- function(i, j) sum(complete.cases(X[, c(i, j)]))
  idx <- expand.grid(i = seq_len(ncol(X)), j = seq_len(ncol(X)))
  Nvec <- map2_int(idx$i, idx$j, get_n)
  N <- matrix(Nvec, ncol = ncol(X), byrow = FALSE,
              dimnames = list(colnames(X), colnames(X)))

  # Fisher z CI
  z <- atanh(R)
  se <- 1 / sqrt(pmax(N - 3, 1))            # avoid division by zero if N<=3
  zcrit <- qnorm(1 - (1 - conf.level) / 2)
  lo <- tanh(z - zcrit * se)
  hi <- tanh(z + zcrit * se)

  # tidy, upper triangle without diagonal
  upper <- which(upper.tri(R), arr.ind = TRUE)
  tibble(
    var1 = colnames(R)[upper[, 1]],
    var2 = colnames(R)[upper[, 2]],
    n    = N[upper],
    r    = R[upper],
    conf.low  = lo[upper],
    conf.high = hi[upper],
    conf.level = conf.level,
    method = method
  ) %>%
    arrange(var1, var2)
}

ci_tab <- corr_ci_table(df_wide, cols = 2:13, method = "pearson", conf.level = 0.95)

# Keep only within-context pairs: variables whose names end in the same context letter
# (e.g., Mean_Spec_F with Mean_Gen_F), i.e., the Spec-Gen correlation for each occupation
same_final_letter <- function(tab) {
  tab %>%
    mutate(
      last1 = str_extract(var1, "[A-Za-z]$"),
      last2 = str_extract(var2, "[A-Za-z]$")
    ) %>%
    filter(!is.na(last1), !is.na(last2), toupper(last1) == toupper(last2)) %>%
    select(-last1, -last2)
}

ci_tab_same <- corr_ci_table(df_wide, cols = 2:13, method = "pearson", conf.level = 0.95) %>%
  same_final_letter()

print(ci_tab_same, n = Inf)
## # A tibble: 6 × 8
##   var1             var2             n     r conf.low conf.high conf.level method
##   <chr>            <chr>        <int> <dbl>    <dbl>     <dbl>      <dbl> <chr> 
## 1 Mean_Mean_Spec_F Mean_Mean_G…   300 0.424    0.327     0.513       0.95 pears…
## 2 Mean_Mean_Spec_J Mean_Mean_G…   300 0.413    0.315     0.503       0.95 pears…
## 3 Mean_Mean_Spec_N Mean_Mean_G…   300 0.335    0.231     0.432       0.95 pears…
## 4 Mean_Mean_Spec_P Mean_Mean_G…   300 0.323    0.218     0.421       0.95 pears…
## 5 Mean_Mean_Spec_S Mean_Mean_G…   300 0.334    0.229     0.431       0.95 pears…
## 6 Mean_Mean_Spec_W Mean_Mean_G…   300 0.336    0.231     0.432       0.95 pears…
gt(ci_tab_same)
var1 var2 n r conf.low conf.high conf.level method
Mean_Mean_Spec_F Mean_Mean_Gen_F 300 0.4243633 0.3268283 0.5129540 0.95 pearson
Mean_Mean_Spec_J Mean_Mean_Gen_J 300 0.4134842 0.3149923 0.5031652 0.95 pearson
Mean_Mean_Spec_N Mean_Mean_Gen_N 300 0.3350929 0.2306025 0.4319431 0.95 pearson
Mean_Mean_Spec_P Mean_Mean_Gen_P 300 0.3233234 0.2180667 0.4211447 0.95 pearson
Mean_Mean_Spec_S Mean_Mean_Gen_S 300 0.3338214 0.2292466 0.4307779 0.95 pearson
Mean_Mean_Spec_W Mean_Mean_Gen_W 300 0.3356857 0.2312349 0.4324863 0.95 pearson

The smallest correlation is .32 (psychiatrists); most are around .33–.34, and the highest are observed for journalists (.41) and firefighters (.42). Note that only firefighters and journalists exclude r = .3 from their CIs, but this is anecdotal, as that criterion was not specifically registered.
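
Since the registered criterion was r > .3, the CIs above could be complemented with an explicit one-sided Fisher-z test of H0: r ≤ .3 for each occupation; a minimal, non-registered sketch reusing the ci_tab_same table computed above:

# One-sided Fisher-z test of H0: rho <= .3, per within-occupation Spec-Gen correlation
ci_tab_same %>%
  mutate(
    z_stat      = (atanh(r) - atanh(0.3)) * sqrt(n - 3),
    p_one_sided = pnorm(z_stat, lower.tail = FALSE)
  ) %>%
  select(var1, var2, n, r, z_stat, p_one_sided)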

We also added an exploration of the invariance of this correlation across occupations. Let's do that now.

library(lavaan)
## This is lavaan 0.6-19
## lavaan is FREE software! Please report any bugs.
df_wide <- df_long %>%
  distinct(Part, Measure, Context, Mean) %>%     # collapse possible duplicates
  unite("Measure_Context", Measure, Context, sep = "_") %>%
  pivot_wider(
    id_cols = Part,
    names_from = Measure_Context,
    values_from = Mean
  )

# optional: keep only Parts with at least one pair present
# df_wide <- df_wide %>% filter(rowSums(!is.na(select(., matches("^Mean_(Spec|Gen)_")))) > 0)

# 2) Model generator: within-context correlation between Mean_Spec_c and Mean_Gen_c
make_corr_model <- function(contexts, equal_r = FALSE) {
  L <- character()
  for (c in contexts) {
    rlab <- if (equal_r) "r" else paste0("r_", c)
    # this line estimates the covariance between the two observed variables in context c,
    # with a shared label to impose equality if equal_r = TRUE.
    L <- c(L, sprintf("Mean_Spec_%s ~~ %s*Mean_Gen_%s", c, rlab, c))
    # (variances and means of observed variables are freely estimated by default)
  }
  paste(L, collapse = "\n")
}

contexts <- levels(df_long$Context)

model_free   <- make_corr_model(contexts, equal_r = FALSE)  # different r for each context
model_equal  <- make_corr_model(contexts, equal_r = TRUE)   # one common r across contexts

# 3) Fit (use sem; listwise deletion by default; use missing='fiml' if you want FIML)
fit_free  <- lavaan::sem(model_free,  data = df_wide, meanstructure = FALSE, missing = "fiml")
## Warning: lavaan->lav_options_set():  
##    missing argument ml forces meanstructure = TRUE
fit_equal <- lavaan::sem(model_equal, data = df_wide, meanstructure = FALSE, missing = "fiml")
## Warning: lavaan->lav_options_set():  
##    missing argument ml forces meanstructure = TRUE
summary(fit_free, fit.measures = TRUE, standardized = TRUE)
## lavaan 0.6-19 ended normally after 26 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        30
## 
##   Number of observations                           300
##   Number of missing patterns                         1
## 
## Model Test User Model:
##                                                       
##   Test statistic                               779.502
##   Degrees of freedom                                60
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              1035.486
##   Degrees of freedom                                66
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.258
##   Tucker-Lewis Index (TLI)                       0.184
##                                                       
##   Robust Comparative Fit Index (CFI)             0.258
##   Robust Tucker-Lewis Index (TLI)                0.184
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -6758.028
##   Loglikelihood unrestricted model (H1)      -6368.277
##                                                       
##   Akaike (AIC)                               13576.057
##   Bayesian (BIC)                             13687.170
##   Sample-size adjusted Bayesian (SABIC)      13592.028
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.200
##   90 Percent confidence interval - lower         0.188
##   90 Percent confidence interval - upper         0.213
##   P-value H_0: RMSEA <= 0.050                    0.000
##   P-value H_0: RMSEA >= 0.080                    1.000
##                                                       
##   Robust RMSEA                                   0.200
##   90 Percent confidence interval - lower         0.188
##   90 Percent confidence interval - upper         0.213
##   P-value H_0: Robust RMSEA <= 0.050             0.000
##   P-value H_0: Robust RMSEA >= 0.080             1.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.257
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Observed
##   Observed information based on                Hessian
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   Mean_Spec_F ~~                                                        
##     Men_Gn_F (r_F)    1.162    0.172    6.766    0.000    1.162    0.424
##   Mean_Spec_S ~~                                                        
##     Men_Gn_S (r_S)    0.913    0.166    5.484    0.000    0.913    0.334
##   Mean_Spec_P ~~                                                        
##     Men_Gn_P (r_P)    0.876    0.164    5.329    0.000    0.876    0.323
##   Mean_Spec_J ~~                                                        
##     Men_Gn_J (r_J)    1.032    0.156    6.618    0.000    1.032    0.413
##   Mean_Spec_W ~~                                                        
##     Men_Gn_W (r_W)    0.809    0.147    5.512    0.000    0.809    0.336
##   Mean_Spec_N ~~                                                        
##     Men_Gn_N (r_N)    1.029    0.187    5.503    0.000    1.029    0.335
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     Mean_Spec_F       4.092    0.099   41.390    0.000    4.092    2.390
##     Mean_Spec_S       5.121    0.094   54.350    0.000    5.121    3.138
##     Mean_Spec_P       4.320    0.102   42.217    0.000    4.320    2.437
##     Mean_Spec_J       2.396    0.086   27.970    0.000    2.396    1.615
##     Mean_Spec_W       4.853    0.094   51.636    0.000    4.853    2.981
##     Mean_Spec_N       4.003    0.110   36.298    0.000    4.003    2.096
##     Mean_Gen_F        4.332    0.092   46.915    0.000    4.332    2.709
##     Mean_Gen_S        3.074    0.097   31.781    0.000    3.074    1.835
##     Mean_Gen_P        3.270    0.088   37.043    0.000    3.270    2.139
##     Mean_Gen_J        3.290    0.097   33.858    0.000    3.290    1.955
##     Mean_Gen_W        3.354    0.085   39.265    0.000    3.354    2.267
##     Mean_Gen_N        3.538    0.093   38.135    0.000    3.538    2.202
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     Mean_Spec_F       2.933    0.239   12.247    0.000    2.933    1.000
##     Mean_Spec_S       2.663    0.217   12.247    0.000    2.663    1.000
##     Mean_Spec_P       3.141    0.256   12.247    0.000    3.141    1.000
##     Mean_Spec_J       2.201    0.180   12.247    0.000    2.201    1.000
##     Mean_Spec_W       2.650    0.216   12.247    0.000    2.650    1.000
##     Mean_Spec_N       3.649    0.298   12.247    0.000    3.649    1.000
##     Mean_Gen_F        2.558    0.209   12.247    0.000    2.558    1.000
##     Mean_Gen_S        2.807    0.229   12.247    0.000    2.807    1.000
##     Mean_Gen_P        2.338    0.191   12.247    0.000    2.338    1.000
##     Mean_Gen_J        2.833    0.231   12.247    0.000    2.833    1.000
##     Mean_Gen_W        2.190    0.179   12.247    0.000    2.190    1.000
##     Mean_Gen_N        2.582    0.211   12.247    0.000    2.582    1.000
summary(fit_equal, fit.measures = TRUE, standardized = TRUE)
## lavaan 0.6-19 ended normally after 20 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        30
##   Number of equality constraints                     5
## 
##   Number of observations                           300
##   Number of missing patterns                         1
## 
## Model Test User Model:
##                                                       
##   Test statistic                               782.683
##   Degrees of freedom                                65
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              1035.486
##   Degrees of freedom                                66
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.260
##   Tucker-Lewis Index (TLI)                       0.248
##                                                       
##   Robust Comparative Fit Index (CFI)             0.260
##   Robust Tucker-Lewis Index (TLI)                0.248
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -6759.619
##   Loglikelihood unrestricted model (H1)      -6368.277
##                                                       
##   Akaike (AIC)                               13569.238
##   Bayesian (BIC)                             13661.832
##   Sample-size adjusted Bayesian (SABIC)      13582.547
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.192
##   90 Percent confidence interval - lower         0.180
##   90 Percent confidence interval - upper         0.204
##   P-value H_0: RMSEA <= 0.050                    0.000
##   P-value H_0: RMSEA >= 0.080                    1.000
##                                                       
##   Robust RMSEA                                   0.192
##   90 Percent confidence interval - lower         0.180
##   90 Percent confidence interval - upper         0.204
##   P-value H_0: Robust RMSEA <= 0.050             0.000
##   P-value H_0: Robust RMSEA >= 0.080             1.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.258
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Observed
##   Observed information based on                Hessian
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   Mean_Spec_F ~~                                                        
##     Mean_Gen_F (r)    0.970    0.067   14.426    0.000    0.970    0.371
##   Mean_Spec_S ~~                                                        
##     Mean_Gen_S (r)    0.970    0.067   14.426    0.000    0.970    0.350
##   Mean_Spec_P ~~                                                        
##     Mean_Gen_P (r)    0.970    0.067   14.426    0.000    0.970    0.350
##   Mean_Spec_J ~~                                                        
##     Mean_Gen_J (r)    0.970    0.067   14.426    0.000    0.970    0.395
##   Mean_Spec_W ~~                                                        
##     Mean_Gen_W (r)    0.970    0.067   14.426    0.000    0.970    0.385
##   Mean_Spec_N ~~                                                        
##     Mean_Gen_N (r)    0.970    0.067   14.426    0.000    0.970    0.319
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     Mean_Spec_F       4.092    0.097   42.373    0.000    4.092    2.446
##     Mean_Spec_S       5.121    0.095   54.001    0.000    5.121    3.118
##     Mean_Spec_P       4.320    0.103   41.770    0.000    4.320    2.412
##     Mean_Spec_J       2.396    0.085   28.213    0.000    2.396    1.629
##     Mean_Spec_W       4.853    0.096   50.515    0.000    4.853    2.917
##     Mean_Spec_N       4.003    0.110   36.501    0.000    4.003    2.107
##     Mean_Gen_F        4.332    0.090   48.029    0.000    4.332    2.773
##     Mean_Gen_S        3.074    0.097   31.577    0.000    3.074    1.823
##     Mean_Gen_P        3.270    0.089   36.650    0.000    3.270    2.116
##     Mean_Gen_J        3.290    0.096   34.153    0.000    3.290    1.972
##     Mean_Gen_W        3.354    0.087   38.413    0.000    3.354    2.218
##     Mean_Gen_N        3.538    0.092   38.349    0.000    3.538    2.214
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     Mean_Spec_F       2.798    0.202   13.838    0.000    2.798    1.000
##     Mean_Spec_S       2.698    0.200   13.519    0.000    2.698    1.000
##     Mean_Spec_P       3.209    0.238   13.493    0.000    3.209    1.000
##     Mean_Spec_J       2.163    0.156   13.907    0.000    2.163    1.000
##     Mean_Spec_W       2.769    0.203   13.626    0.000    2.769    1.000
##     Mean_Spec_N       3.609    0.269   13.395    0.000    3.609    1.000
##     Mean_Gen_F        2.441    0.176   13.838    0.000    2.441    1.000
##     Mean_Gen_S        2.844    0.210   13.519    0.000    2.844    1.000
##     Mean_Gen_P        2.388    0.177   13.493    0.000    2.388    1.000
##     Mean_Gen_J        2.784    0.200   13.907    0.000    2.784    1.000
##     Mean_Gen_W        2.288    0.168   13.626    0.000    2.288    1.000
##     Mean_Gen_N        2.553    0.191   13.395    0.000    2.553    1.000
# 4) Likelihood-ratio test: does constraining all r's to be equal hurt fit?
anova(fit_equal, fit_free)   # free model should fit >= as well; significant LR test => r's differ
## 
## Chi-Squared Difference Test
## 
##           Df   AIC   BIC  Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
## fit_free  60 13576 13687 779.50                                    
## fit_equal 65 13569 13662 782.68      3.181     0       5     0.6721
# 5) Extract per-context correlations (standardized covariances) from the free model
# Note: parameterEstimates gives unstandardized covariances; get correlations via 'standardized'
pe_free <- standardizedSolution(fit_free) %>%
  filter(op == "~~",
         grepl("^Mean_Spec_", lhs),
         grepl("^Mean_Gen_", rhs)) %>%
  transmute(
    context = str_remove(lhs, "^Mean_Spec_"),
    r_est   = est.std
  )


# 6) Optional: CIs for those correlations
# lavaan reports CIs for unstandardized parameters by default; for correlations (standardized),
# a common approach is Fisher z CIs using the model's effective N (lavaan uses listwise/FIML).
# Here we use the sample size reported by lavaan and compute z-based CIs:

N_eff <- lavInspect(fit_free, "nobs")  # effective N used by the model
zcrit <- qnorm(0.975)

ci_free <- pe_free %>%
  mutate(
    z   = atanh(pmin(pmax(r_est, -0.999999), 0.999999)),
    se  = 1 / sqrt(pmax(N_eff - 3, 1)),
    lo  = tanh(z - zcrit * se),
    hi  = tanh(z + zcrit * se)
  )

#ci_free





library(dplyr)
library(stringr)
library(lavaan)

## contexts in the order you want to index as r1, r2, ..., rk
contexts <- levels(df_long$Context)
K <- length(contexts)
r_labels <- paste0("r", seq_len(K))

# ---------- 1) Model with labels r1..rK on each within-context covariance ----------
make_corr_model <- function(contexts) {
  L <- character()
  for (k in seq_along(contexts)) {
    c <- contexts[k]
    L <- c(L, sprintf("Mean_Spec_%s ~~ r%d*Mean_Gen_%s", c, k, c))
  }
  paste(L, collapse = "\n")
}

model_free <- make_corr_model(contexts)

# FIML implies a meanstructure; let lavaan handle that quietly
fit_free <- sem(model_free, data = df_wide, missing = "fiml")

# ---------- 2) Chained Wald test: r1==r2==...==rK ----------
constraints_chain <- paste(sprintf("r%d == r%d", 1:(K-1), 2:K), collapse = "\n")
lavTestWald(fit_free, constraints = constraints_chain)
## $stat
## [1] 3.141226
## 
## $df
## [1] 5
## 
## $p.value
## [1] 0.6782234
## 
## $se
## [1] "standard"
# ---------- 3) Extract per-context correlations + 95% CIs ----------
# Note: parameterEstimates(..., ci=TRUE) gives CIs for *unstandardized* covariances.
# For correlations, use standardizedSolution() + Fisher z CIs.
pe_std <- standardizedSolution(fit_free)

rho <- pe_std %>%
  filter(op == "~~",
         grepl("^Mean_Spec_", lhs),
         grepl("^Mean_Gen_", rhs)) %>%
  mutate(
    context = str_remove(lhs, "^Mean_Spec_"),
    label   = r_labels[match(context, contexts)],
    r       = est.std
  ) %>%
  select(context, label, r) %>%
  arrange(match(context, contexts))

# Fisher z CIs using the model’s effective N (FIML)
N_eff <- lavInspect(fit_free, "nobs")
zcrit <- qnorm(0.975)

rho <- rho %>%
  mutate(
    z   = atanh(pmin(pmax(r, -0.999999), 0.999999)),
    se  = 1 / sqrt(pmax(N_eff - 3, 1)),
    ci.low  = tanh(z - zcrit * se),
    ci.high = tanh(z + zcrit * se)
  ) %>%
  select(context, label, r, ci.low, ci.high)

print(rho, row.names = FALSE)
##  context label     r ci.low ci.high
##        F    r1 0.424  0.327   0.513
##        S    r2 0.334  0.229   0.431
##        P    r3 0.323  0.218   0.421
##        J    r4 0.413  0.315   0.503
##        W    r5 0.336  0.231   0.432
##        N    r6 0.335  0.231   0.432
# ---------- 4) Post-hoc pairwise Wald tests with Holm adjustment ----------
pairs <- combn(r_labels, 2, simplify = FALSE)

posthoc <- lapply(pairs, function(pr) {
  tt <- lavTestWald(fit_free, constraints = sprintf("%s == %s", pr[1], pr[2]))
  data.frame(pair = paste(pr, collapse = "-"),
             chisq = tt$stat, df = tt$df, p = tt$p.value)
}) %>% bind_rows()

posthoc <- posthoc %>%
  mutate(p_adj = p.adjust(p, method = "holm")) %>%
  arrange(p_adj)

print(posthoc, row.names = FALSE)
##   pair        chisq df         p p_adj
##  r1-r2 1.0879476507  1 0.2969264     1
##  r1-r3 1.4478781038  1 0.2288687     1
##  r1-r4 0.3138145603  1 0.5753488     1
##  r1-r5 2.4510465346  1 0.1174465     1
##  r1-r6 0.2775927451  1 0.5982834     1
##  r2-r3 0.0245376850  1 0.8755246     1
##  r2-r4 0.2744641475  1 0.6003528     1
##  r2-r5 0.2205091083  1 0.6386523     1
##  r2-r6 0.2138616653  1 0.6437574     1
##  r3-r4 0.4746870077  1 0.4908388     1
##  r3-r5 0.0939255062  1 0.7592448     1
##  r3-r6 0.3747244640  1 0.5404402     1
##  r4-r5 1.0912340822  1 0.2961979     1
##  r4-r6 0.0002392426  1 0.9876592     1
##  r5-r6 0.8567080110  1 0.3546616     1
gt(posthoc)
pair chisq df p p_adj
r1-r2 1.0879476507 1 0.2969264 1
r1-r3 1.4478781038 1 0.2288687 1
r1-r4 0.3138145603 1 0.5753488 1
r1-r5 2.4510465346 1 0.1174465 1
r1-r6 0.2775927451 1 0.5982834 1
r2-r3 0.0245376850 1 0.8755246 1
r2-r4 0.2744641475 1 0.6003528 1
r2-r5 0.2205091083 1 0.6386523 1
r2-r6 0.2138616653 1 0.6437574 1
r3-r4 0.4746870077 1 0.4908388 1
r3-r5 0.0939255062 1 0.7592448 1
r3-r6 0.3747244640 1 0.5404402 1
r4-r5 1.0912340822 1 0.2961979 1
r4-r6 0.0002392426 1 0.9876592 1
r5-r6 0.8567080110 1 0.3546616 1

Conclusion

Overall, we observed support for all our registered hypotheses. Heroes and villains are associated with different levels of tolerance for regulation violations. In particular, we suggest that heroic status might shield workers against criticism and make them less targetable when they violate regulations. In extreme cases, heroism might decrease the accountability of heroes who do not follow regulations, granting them a form of impunity (see Jorgensen, 2009): there is a shield effect of being perceived as a hero. On the other end of the spectrum, villain perception is associated with decreased acceptability of regulation violations, which might mean that we are more severe with ‘villains’ breaking the rules than with heroes. Finally, the context-dependent and context-independent measures correlated above our registered minimum of r = .3 in every occupation, and these correlations did not differ reliably across occupations. Taken together, these findings open up many questions about the accountability and impunity of workers. In particular, the extent to which we find workers guilty of misconduct upon breaking regulations might be directly related to our perception of those workers as exemplifying heroism or villainy. Perceived moral character would colour the degree to which we hold workers accountable for regulation violations.

However, while this association was observed quite consistently for each job (at least for the heroism-ARV link), it is noteworthy that some occupations showed strong shield effects (soldiers and journalists), while others showed weak shield effects:

a_tab
Effect of heroism on General ARV within each occupation
Context β (slope) t p partial η² (Type III)
F 0.315 3.460 < .001 0.039
N 0.185 1.995 0.047 0.013
P 0.268 3.072 0.002 0.031
S 0.673 7.552 < .001 0.161
J 0.710 8.015 < .001 0.177
W 0.240 2.831 0.005 0.026
b_tab
Effect of heroism on Context dependent ARV (dilemmas) within each occupation
Context β (slope) t p partial η² (Type III)
F 0.209 2.116 0.035 0.015
N 0.185 1.678 0.094 0.009
P 0.288 2.840 0.005 0.026
S 0.540 6.042 < .001 0.109
J 0.473 5.792 < .001 0.101
W 0.350 3.793 < .001 0.046
c_tab
Effect of Villain perception on Context independent ARV within each occupation
Context β (slope) t p partial η² (Type III)
F 0.019 0.206 0.837 0.000
N 0.126 1.357 0.176 0.006
P -0.051 -0.570 0.569 0.001
S -0.348 -3.663 < .001 0.043
J -0.424 -4.481 < .001 0.063
W 0.071 0.830 0.407 0.002
d_tab
Effect of Villain perception on Context dependent ARV (dilemmas) within each occupation
Context β (slope) t p partial η² (Type III)
F -0.048 -0.487 0.627 0.001
N 0.108 0.979 0.328 0.003
P -0.193 -1.883 0.061 0.012
S -0.439 -4.818 < .001 0.072
J -0.145 -1.690 0.092 0.009
W -0.017 -0.179 0.858 0.000

In addition, as indicated in the exploratory analyses below, our overall effects held when controlling for attitude (with the exception of the association between villain perception and context-dependent ARV). The within-occupation analyses were likewise robust to controlling for attitude, with the notable exception of nurses. Nurses aside, it would appear that treating heroes as ‘above regulations’ (as indicated by support for deregulation and by shielding heroes from prosecution when they break regulations) is independent of attitude. There is something more at work here than wanting to protect workers we appreciate: heroism uniquely shields workers.


Exploratory analyses

Using attitude as a covariate (deviation)

Gen_df <- subset(long_df, long_df$Measure == "Mean_Gen")

Gen_df$Mean_scaled <- scale(Gen_df$Mean)
Gen_df$Hero_s <- scale(Gen_df$Hero)
Gen_df$Villain_scaled <- scale(Gen_df$Vill)
Gen_df$Danger_scaled <- scale(Gen_df$Danger)
Gen_df$Helpf_scaled <- scale(Gen_df$Helpf)
Gen_df$Att_scaled <- scale(Gen_df$Att)

#summary(lmer(Gen_df$Mean ~ Gen_df$Context + (1 | Gen_df$Part)))

summary(m<-lmer(Mean ~ Hero_s + Context + Att_scaled + (1 | Part), data = Gen_df))
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Mean ~ Hero_s + Context + Att_scaled + (1 | Part)
##    Data: Gen_df
## 
## REML criterion at convergence: 6201
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.7999 -0.6643 -0.0491  0.6305  4.1214 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  Part     (Intercept) 1.018    1.009   
##  Residual             1.367    1.169   
## Number of obs: 1800, groups:  Part, 300
## 
## Fixed effects:
##               Estimate Std. Error         df t value Pr(>|t|)    
## (Intercept)    3.83029    0.09800 1124.93797  39.083  < 2e-16 ***
## Hero_s         0.28557    0.05992 1791.87909   4.766 2.03e-06 ***
## ContextS      -0.86771    0.10086 1515.92946  -8.603  < 2e-16 ***
## ContextP      -0.35046    0.11397 1611.40045  -3.075  0.00214 ** 
## ContextJ       0.07404    0.12963 1629.64115   0.571  0.56795    
## ContextW      -0.33115    0.11214 1607.38134  -2.953  0.00319 ** 
## ContextN      -0.64755    0.09656 1503.13723  -6.706 2.82e-11 ***
## Att_scaled     0.32049    0.05276 1738.36772   6.075 1.52e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##            (Intr) Hero_s CntxtS CntxtP CntxtJ CntxtW CntxtN
## Hero_s     -0.253                                          
## ContextS   -0.569  0.006                                   
## ContextP   -0.630  0.410  0.515                            
## ContextJ   -0.636  0.325  0.542  0.656                     
## ContextW   -0.625  0.418  0.508  0.642  0.638              
## ContextN   -0.539  0.129  0.493  0.494  0.451  0.499       
## Att_scaled -0.090 -0.642  0.243  0.013  0.246 -0.025 -0.024
report(m)
## We fitted a linear mixed model (estimated using REML and nloptwrap optimizer)
## to predict Mean with Hero_s, Context and Att_scaled (formula: Mean ~ Hero_s +
## Context + Att_scaled). The model included Part as random effect (formula: ~1 |
## Part). The model's total explanatory power is substantial (conditional R2 =
## 0.50) and the part related to the fixed effects alone (marginal R2) is of 0.13.
## The model's intercept, corresponding to Hero_s = 0, Context = F and Att_scaled
## = 0, is at 3.83 (95% CI [3.64, 4.02], t(1790) = 39.08, p < .001). Within this
## model:
## 
##   - The effect of Hero s is statistically significant and positive (beta = 0.29,
## 95% CI [0.17, 0.40], t(1790) = 4.77, p < .001; Std. beta = 0.17, 95% CI [0.10,
## 0.24])
##   - The effect of Context [S] is statistically significant and negative (beta =
## -0.87, 95% CI [-1.07, -0.67], t(1790) = -8.60, p < .001; Std. beta = -0.53, 95%
## CI [-0.65, -0.41])
##   - The effect of Context [P] is statistically significant and negative (beta =
## -0.35, 95% CI [-0.57, -0.13], t(1790) = -3.08, p = 0.002; Std. beta = -0.21,
## 95% CI [-0.35, -0.08])
##   - The effect of Context [J] is statistically non-significant and positive (beta
## = 0.07, 95% CI [-0.18, 0.33], t(1790) = 0.57, p = 0.568; Std. beta = 0.04, 95%
## CI [-0.11, 0.20])
##   - The effect of Context [W] is statistically significant and negative (beta =
## -0.33, 95% CI [-0.55, -0.11], t(1790) = -2.95, p = 0.003; Std. beta = -0.20,
## 95% CI [-0.33, -0.07])
##   - The effect of Context [N] is statistically significant and negative (beta =
## -0.65, 95% CI [-0.84, -0.46], t(1790) = -6.71, p < .001; Std. beta = -0.39, 95%
## CI [-0.51, -0.28])
##   - The effect of Att scaled is statistically significant and positive (beta =
## 0.32, 95% CI [0.22, 0.42], t(1790) = 6.08, p < .001; Std. beta = 0.19, 95% CI
## [0.13, 0.26])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
summary(m<-lmer(Mean ~ Villain_scaled + Context + Att_scaled + (1 | Part), data = Gen_df))
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Mean ~ Villain_scaled + Context + Att_scaled + (1 | Part)
##    Data: Gen_df
## 
## REML criterion at convergence: 6212.8
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.8179 -0.6687 -0.0454  0.6066  4.0969 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  Part     (Intercept) 1.083    1.041   
##  Residual             1.364    1.168   
## Number of obs: 1800, groups:  Part, 300
## 
## Fixed effects:
##                  Estimate Std. Error         df t value Pr(>|t|)    
## (Intercept)       3.94109    0.09593 1038.55744  41.084  < 2e-16 ***
## Villain_scaled   -0.16254    0.04821 1766.32762  -3.372 0.000763 ***
## ContextS         -0.83360    0.10134 1513.72382  -8.226 4.13e-16 ***
## ContextP         -0.59771    0.10409 1527.92624  -5.742 1.12e-08 ***
## ContextJ         -0.03171    0.12567 1573.71849  -0.252 0.800800    
## ContextW         -0.62064    0.10367 1534.46191  -5.987 2.66e-09 ***
## ContextN         -0.70397    0.09564 1491.99933  -7.360 3.01e-13 ***
## Att_scaled        0.39104    0.04869 1737.11083   8.031 1.76e-15 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) Vlln_s CntxtS CntxtP CntxtJ CntxtW CntxtN
## Villan_scld  0.022                                          
## ContextS    -0.578 -0.108                                   
## ContextP    -0.586  0.071  0.550                            
## ContextJ    -0.588 -0.223  0.577  0.573                     
## ContextW    -0.568  0.190  0.522  0.570  0.516              
## ContextN    -0.522 -0.008  0.495  0.486  0.427  0.483       
## Att_scaled  -0.267  0.555  0.207  0.367  0.385  0.391  0.059
report(m)
## We fitted a linear mixed model (estimated using REML and nloptwrap optimizer)
## to predict Mean with Villain_scaled, Context and Att_scaled (formula: Mean ~
## Villain_scaled + Context + Att_scaled). The model included Part as random
## effect (formula: ~1 | Part). The model's total explanatory power is substantial
## (conditional R2 = 0.51) and the part related to the fixed effects alone
## (marginal R2) is of 0.12. The model's intercept, corresponding to
## Villain_scaled = 0, Context = F and Att_scaled = 0, is at 3.94 (95% CI [3.75,
## 4.13], t(1790) = 41.08, p < .001). Within this model:
## 
##   - The effect of Villain scaled is statistically significant and negative (beta
## = -0.16, 95% CI [-0.26, -0.07], t(1790) = -3.37, p < .001; Std. beta = -0.10,
## 95% CI [-0.16, -0.04])
##   - The effect of Context [S] is statistically significant and negative (beta =
## -0.83, 95% CI [-1.03, -0.63], t(1790) = -8.23, p < .001; Std. beta = -0.51, 95%
## CI [-0.63, -0.39])
##   - The effect of Context [P] is statistically significant and negative (beta =
## -0.60, 95% CI [-0.80, -0.39], t(1790) = -5.74, p < .001; Std. beta = -0.36, 95%
## CI [-0.49, -0.24])
##   - The effect of Context [J] is statistically non-significant and negative (beta
## = -0.03, 95% CI [-0.28, 0.21], t(1790) = -0.25, p = 0.801; Std. beta = -0.02,
## 95% CI [-0.17, 0.13])
##   - The effect of Context [W] is statistically significant and negative (beta =
## -0.62, 95% CI [-0.82, -0.42], t(1790) = -5.99, p < .001; Std. beta = -0.38, 95%
## CI [-0.50, -0.25])
##   - The effect of Context [N] is statistically significant and negative (beta =
## -0.70, 95% CI [-0.89, -0.52], t(1790) = -7.36, p < .001; Std. beta = -0.43, 95%
## CI [-0.54, -0.31])
##   - The effect of Att scaled is statistically significant and positive (beta =
## 0.39, 95% CI [0.30, 0.49], t(1790) = 8.03, p < .001; Std. beta = 0.24, 95% CI
## [0.18, 0.30])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
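Since emmeans was loaded at the top of the script, the occupation means implied by the model above (adjusted for villain perception and attitude) can be extracted directly. This is only a convenience sketch, not part of the registered analysis.

# Model-adjusted mean ARV per occupation, and pairwise contrasts (Tukey-adjusted by default)
emmeans::emmeans(m, ~ Context)
pairs(emmeans::emmeans(m, ~ Context))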
Spec_df <- subset(long_df, long_df$Measure == "Mean_Spec")

Spec_df$Mean_scaled <- scale(Spec_df$Mean)
Spec_df$Hero_s <- scale(Spec_df$Hero)
Spec_df$Villain_scaled <- scale(Spec_df$Vill)
Spec_df$Danger_scaled <- scale(Spec_df$Danger)
Spec_df$Helpf_scaled <- scale(Spec_df$Helpf)
Spec_df$Att_scaled <- scale(Spec_df$Att)
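A small technical note: scale() returns a one-column matrix rather than a plain numeric vector. The models below accept this, but wrapping the call in as.numeric() (as done in the per-occupation tables further down) keeps the scaled columns as ordinary vectors and avoids surprises with some downstream functions. For example:

# Equivalent to the line above, but stored as a plain numeric vector
Spec_df$Att_scaled <- as.numeric(scale(Spec_df$Att))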

#summary(lmer(Gen_df$Mean ~ Gen_df$Context + (1 | Gen_df$Part)))

summary(m<-lmer(Mean ~ Hero_s + Context + Att_scaled + (1 | Part), data = Spec_df))
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Mean ~ Hero_s + Context + Att_scaled + (1 | Part)
##    Data: Spec_df
## 
## REML criterion at convergence: 6719.9
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -3.03336 -0.67289  0.03611  0.68797  2.80827 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  Part     (Intercept) 0.7536   0.8681  
##  Residual             1.9897   1.4106  
## Number of obs: 1800, groups:  Part, 300
## 
## Fixed effects:
##               Estimate Std. Error         df t value Pr(>|t|)    
## (Intercept)    3.66631    0.10663 1429.39720  34.383  < 2e-16 ***
## Hero_s         0.26128    0.06849 1698.01598   3.815 0.000141 ***
## ContextS       1.35216    0.12136 1528.31489  11.141  < 2e-16 ***
## ContextP       0.83654    0.13571 1654.47055   6.164 8.88e-10 ***
## ContextJ      -0.75484    0.15397 1683.56535  -4.902 1.04e-06 ***
## ContextW       1.31597    0.13360 1648.51630   9.850  < 2e-16 ***
## ContextN       0.03789    0.11636 1507.44939   0.326 0.744726    
## Att_scaled     0.25138    0.06160 1790.24019   4.081 4.69e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##            (Intr) Hero_s CntxtS CntxtP CntxtJ CntxtW CntxtN
## Hero_s     -0.263                                          
## ContextS   -0.626  0.003                                   
## ContextP   -0.688  0.391  0.516                            
## ContextJ   -0.694  0.308  0.541  0.647                     
## ContextW   -0.682  0.398  0.509  0.633  0.629              
## ContextN   -0.593  0.121  0.494  0.494  0.451  0.498       
## Att_scaled -0.108 -0.634  0.242  0.027  0.259 -0.010 -0.019
report(m)
## We fitted a linear mixed model (estimated using REML and nloptwrap optimizer)
## to predict Mean with Hero_s, Context and Att_scaled (formula: Mean ~ Hero_s +
## Context + Att_scaled). The model included Part as random effect (formula: ~1 |
## Part). The model's total explanatory power is substantial (conditional R2 =
## 0.45) and the part related to the fixed effects alone (marginal R2) is of 0.24.
## The model's intercept, corresponding to Hero_s = 0, Context = F and Att_scaled
## = 0, is at 3.67 (95% CI [3.46, 3.88], t(1790) = 34.38, p < .001). Within this
## model:
## 
##   - The effect of Hero s is statistically significant and positive (beta = 0.26,
## 95% CI [0.13, 0.40], t(1790) = 3.81, p < .001; Std. beta = 0.14, 95% CI [0.07,
## 0.21])
##   - The effect of Context [S] is statistically significant and positive (beta =
## 1.35, 95% CI [1.11, 1.59], t(1790) = 11.14, p < .001; Std. beta = 0.71, 95% CI
## [0.58, 0.83])
##   - The effect of Context [P] is statistically significant and positive (beta =
## 0.84, 95% CI [0.57, 1.10], t(1790) = 6.16, p < .001; Std. beta = 0.44, 95% CI
## [0.30, 0.58])
##   - The effect of Context [J] is statistically significant and negative (beta =
## -0.75, 95% CI [-1.06, -0.45], t(1790) = -4.90, p < .001; Std. beta = -0.40, 95%
## CI [-0.55, -0.24])
##   - The effect of Context [W] is statistically significant and positive (beta =
## 1.32, 95% CI [1.05, 1.58], t(1790) = 9.85, p < .001; Std. beta = 0.69, 95% CI
## [0.55, 0.83])
##   - The effect of Context [N] is statistically non-significant and positive (beta
## = 0.04, 95% CI [-0.19, 0.27], t(1790) = 0.33, p = 0.745; Std. beta = 0.02, 95%
## CI [-0.10, 0.14])
##   - The effect of Att scaled is statistically significant and positive (beta =
## 0.25, 95% CI [0.13, 0.37], t(1790) = 4.08, p < .001; Std. beta = 0.13, 95% CI
## [0.07, 0.20])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.
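As a sensitivity check (not registered), the same model can be refitted with a robust estimator via robustlmm, which is already loaded; a minimal sketch:

# Robust refit of the dilemma hero model; coefficients can be compared with the REML fit above
robust_spec <- robustlmm::rlmer(Mean ~ Hero_s + Context + Att_scaled + (1 | Part), data = Spec_df)
summary(robust_spec)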
summary(m<-lmer(Mean ~ Villain_scaled + Context + Att_scaled + (1 | Part), data = Spec_df))
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: Mean ~ Villain_scaled + Context + Att_scaled + (1 | Part)
##    Data: Spec_df
## 
## REML criterion at convergence: 6734.3
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -3.05479 -0.67228  0.03718  0.67850  2.76245 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  Part     (Intercept) 0.7784   0.8823  
##  Residual             1.9989   1.4138  
## Number of obs: 1800, groups:  Part, 300
## 
## Fixed effects:
##                  Estimate Std. Error         df t value Pr(>|t|)    
## (Intercept)       3.77160    0.10351 1407.30539  36.437  < 2e-16 ***
## Villain_scaled   -0.03766    0.05593 1782.94506  -0.673    0.501    
## ContextS          1.35918    0.12235 1528.60118  11.109  < 2e-16 ***
## ContextP          0.62844    0.12547 1549.45609   5.009 6.10e-07 ***
## ContextJ         -0.91393    0.15062 1621.04335  -6.068 1.61e-09 ***
## ContextW          1.09762    0.12488 1557.63257   8.790  < 2e-16 ***
## ContextN         -0.01536    0.11577 1494.63972  -0.133    0.894    
## Att_scaled        0.37941    0.05690 1791.51302   6.668 3.45e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) Vlln_s CntxtS CntxtP CntxtJ CntxtW CntxtN
## Villan_scld  0.027                                          
## ContextS    -0.644 -0.107                                   
## ContextP    -0.653  0.064  0.548                            
## ContextJ    -0.655 -0.222  0.574  0.571                     
## ContextW    -0.634  0.180  0.522  0.567  0.517              
## ContextN    -0.584 -0.009  0.495  0.487  0.429  0.485       
## Att_scaled  -0.294  0.543  0.206  0.359  0.386  0.380  0.058
report(m)
## We fitted a linear mixed model (estimated using REML and nloptwrap optimizer)
## to predict Mean with Villain_scaled, Context and Att_scaled (formula: Mean ~
## Villain_scaled + Context + Att_scaled). The model included Part as random
## effect (formula: ~1 | Part). The model's total explanatory power is substantial
## (conditional R2 = 0.45) and the part related to the fixed effects alone
## (marginal R2) is of 0.24. The model's intercept, corresponding to
## Villain_scaled = 0, Context = F and Att_scaled = 0, is at 3.77 (95% CI [3.57,
## 3.97], t(1790) = 36.44, p < .001). Within this model:
## 
##   - The effect of Villain scaled is statistically non-significant and negative
## (beta = -0.04, 95% CI [-0.15, 0.07], t(1790) = -0.67, p = 0.501; Std. beta =
## -0.02, 95% CI [-0.08, 0.04])
##   - The effect of Context [S] is statistically significant and positive (beta =
## 1.36, 95% CI [1.12, 1.60], t(1790) = 11.11, p < .001; Std. beta = 0.71, 95% CI
## [0.59, 0.84])
##   - The effect of Context [P] is statistically significant and positive (beta =
## 0.63, 95% CI [0.38, 0.87], t(1790) = 5.01, p < .001; Std. beta = 0.33, 95% CI
## [0.20, 0.46])
##   - The effect of Context [J] is statistically significant and negative (beta =
## -0.91, 95% CI [-1.21, -0.62], t(1790) = -6.07, p < .001; Std. beta = -0.48, 95%
## CI [-0.63, -0.32])
##   - The effect of Context [W] is statistically significant and positive (beta =
## 1.10, 95% CI [0.85, 1.34], t(1790) = 8.79, p < .001; Std. beta = 0.58, 95% CI
## [0.45, 0.70])
##   - The effect of Context [N] is statistically non-significant and negative (beta
## = -0.02, 95% CI [-0.24, 0.21], t(1790) = -0.13, p = 0.894; Std. beta =
## -8.06e-03, 95% CI [-0.13, 0.11])
##   - The effect of Att scaled is statistically significant and positive (beta =
## 0.38, 95% CI [0.27, 0.49], t(1790) = 6.67, p < .001; Std. beta = 0.20, 95% CI
## [0.14, 0.26])
## 
## Standardized parameters were obtained by fitting the model on a standardized
## version of the dataset. 95% Confidence Intervals (CIs) and p-values were
## computed using a Wald t-distribution approximation.

Controlling for attitude attenuated all of the main effects, but only the association between villain perception and the context-dependent (dilemma) ARV was reduced to non-significance.
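One way to quantify how much the attitude covariate absorbs is to compare the dilemma villain model with and without Att_scaled. This is a non-registered sketch; the models are refitted with ML so that the likelihood-ratio test is valid.

# Does adding attitude improve fit (and absorb the villain effect) for the dilemma measure?
m_vill_noatt <- lmer(Mean ~ Villain_scaled + Context + (1 | Part), data = Spec_df, REML = FALSE)
m_vill_att   <- lmer(Mean ~ Villain_scaled + Context + Att_scaled + (1 | Part), data = Spec_df, REML = FALSE)
anova(m_vill_noatt, m_vill_att)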

Correlation matrix among all our measures (registered)

DF<-Gen_df
DF$Dilemma_ARV <- Spec_df$Mean
DF$Gen_ARV <- as.numeric(DF$Mean)
PerformanceAnalytics::chart.Correlation(DF[, c(5:9, 16,17)])
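The chart above pools all occupations. The correlation between the two ARV measures can also be inspected within each occupation with a grouped summary; this sketch relies on the same row alignment assumed when Dilemma_ARV was added to DF above.

# Correlation between context-independent and dilemma ARV, per occupation
DF %>%
  dplyr::group_by(Context) %>%
  dplyr::summarise(r = cor(Gen_ARV, Dilemma_ARV, use = "pairwise.complete.obs"))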

Decomposition by occupation, with attitude as a covariate

paste0("CONTEXT INDEPENDENT analysis")

paste0("Hero analysis")


paste0("Firefighter analysis")


FireRole<- subset(Gen_df, Gen_df$Context == "F")

FireRole$Hero_s <- scale(FireRole$Hero_s)
summary(FireMod<-lm(Mean ~ Hero_s + Att_scaled, data = FireRole))
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE)


paste0("Nurses analysis")

HCrole<- subset(Gen_df, Gen_df$Context == "N")
HCrole$Hero_s <- scale(HCrole$Hero_s)

summary(HCMod<-lm(Mean ~ Hero_s + Att_scaled, data = HCrole))
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE)



paste0("Soldier analysis")

MilitaryRole<- subset(Gen_df, Gen_df$Context == "S")
MilitaryRole$Hero_s <- scale(MilitaryRole$Hero_s)

summary(MilitaryMod<-lm(Mean ~ Hero_s + Att_scaled, data = MilitaryRole))
HCMil_typeIII <- car::Anova(MilitaryMod, type = "III")
eta_squared(HCMil_typeIII, partial = TRUE)

paste0("Journalists analysis")

JourRole <- subset(Gen_df, Gen_df$Context == "J")
JourRole$Hero_s <- scale(JourRole$Hero_s)

summary(JourMod<-lm(Mean ~ Hero_s + Att_scaled, data = JourRole))
JourtypeIII <- car::Anova(JourMod, type = "III")
eta_squared(JourtypeIII, partial = TRUE)


paste0("Psy analysis")

PsyRole <- subset(Gen_df, Gen_df$Context == "P")
PsyRole$Hero_s <- scale(PsyRole$Hero_s)

summary(PsyMod<-lm(Mean ~ Hero_s + Att_scaled, data = PsyRole))
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE)

paste0("Welders analysis")

WeldRole <- subset(Gen_df, Gen_df$Context == "W")
WeldRole$Hero_s <- scale(WeldRole$Hero_s)

summary(WeldMod<-lm(Mean ~ Hero_s + Att_scaled, data = WeldRole))
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE)
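The same three steps (fit the linear model, run a Type III ANOVA, extract partial η²) are repeated for every occupation and predictor in this section. A small helper such as the hypothetical fit_role_model() sketched below captures the pattern once and could be reused for the hero, villain, and dilemma versions; it is shown only as a refactoring sketch, not as registered code.

# Hypothetical wrapper for the repeated per-occupation pattern.
# Assumes `data` contains Mean, Att_scaled and the raw predictor column.
fit_role_model <- function(data, context, predictor) {
  role_df <- subset(data, data$Context == context)
  role_df$pred_s <- as.numeric(scale(role_df[[predictor]]))
  mod <- lm(Mean ~ pred_s + Att_scaled, data = role_df)
  list(
    summary = summary(mod),
    eta2    = effectsize::eta_squared(car::Anova(mod, type = "III"), partial = TRUE)
  )
}
# e.g. fit_role_model(Gen_df, "F", "Hero")   # firefighters, hero perception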
paste0("CONTEXT INDEPENDENT analysis")

paste0("VILLAINS analysis")

paste0("Firefighter analysis")


FireRole<- subset(Gen_df, Gen_df$Context == "F")

FireRole$Villain_scaled <- scale(FireRole$Villain_scaled)
summary(FireMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = FireRole))
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE)


paste0("Nurses analysis")

HCrole<- subset(Gen_df, Gen_df$Context == "N")
HCrole$Villain_scaled <- scale(HCrole$Villain_scaled)

summary(HCMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = HCrole))
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE)



paste0("Soldier analysis")

MilitaryRole<- subset(Gen_df, Gen_df$Context == "S")
MilitaryRole$Villain_scaled <- scale(MilitaryRole$Villain_scaled)

summary(MilitaryMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = MilitaryRole))
HCMil_typeIII <- car::Anova(MilitaryMod, type = "III")
eta_squared(HCMil_typeIII, partial = TRUE)

paste0("Journalists analysis")

JourRole<- subset(Gen_df, Gen_df$Context == "J")
JourRole$Villain_scaled <- scale(JourRole$Villain_scaled)

summary(JourMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = JourRole))
JourtypeIII <- car::Anova(JourMod, type = "III")
eta_squared(JourtypeIII, partial = TRUE)


paste0("Psy analysis")

PsyRole<- subset(Gen_df, Gen_df$Context == "P")
PsyRole$Villain_scaled <- scale(PsyRole$Villain_scaled)

summary(PsyMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = PsyRole))
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE)

paste0("Welders analysis")

WeldRole<- subset(Gen_df, Gen_df$Context == "W")
WeldRole$Villain_scaled <- scale(WeldRole$Villain_scaled)

summary(WeldMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = WeldRole))
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE)
paste0("DILEMMAS analysis")

paste0("Hero analysis")

# Dilemma (context-dependent) analyses below use Spec_df, the Mean_Spec subset created above,
# rather than the context-independent Gen_df data

paste0("Firefighter analysis")


FireRole <- subset(Spec_df, Spec_df$Context == "F")

FireRole$Hero_s <- scale(FireRole$Hero)
FireRole$Att_scaled <- scale(FireRole$Att)

summary(FireMod<-lm(Mean ~ Hero_s + Att_scaled, data = FireRole))
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE)


paste0("Nurses analysis")

HCrole <- subset(Spec_df, Spec_df$Context == "N")
HCrole$Hero_s <- scale(HCrole$Hero)
HCrole$Att_scaled <- scale(HCrole$Att)
summary(HCMod<-lm(Mean ~ Hero_s + Att_scaled, data = HCrole))
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE)



paste0("Soldier analysis")

MilitaryRole <- subset(Spec_df, Spec_df$Context == "S")
MilitaryRole$Hero_s <- scale(MilitaryRole$Hero)
MilitaryRole$Att_scaled <- scale(MilitaryRole$Att)
summary(MilitaryMod<-lm(Mean ~ Hero_s + Att_scaled, data = MilitaryRole))
HCMil_typeIII <- car::Anova(MilitaryMod, type = "III")
eta_squared(HCMil_typeIII, partial = TRUE)

paste0("Journalists analysis")

JourRole <- subset(Spec_df, Spec_df$Context == "J")
JourRole$Hero_s <- scale(JourRole$Hero)
JourRole$Att_scaled <- scale(JourRole$Att)
summary(JourMod<-lm(Mean ~ Hero_s + Att_scaled, data = JourRole))
JourtypeIII <- car::Anova(JourMod, type = "III")
eta_squared(JourtypeIII, partial = TRUE)


paste0("Psy analysis")

PsyRole <- subset(Spec_df, Spec_df$Context == "P")
PsyRole$Hero_s <- scale(PsyRole$Hero)
PsyRole$Att_scaled <- scale(PsyRole$Att)
summary(PsyMod<-lm(Mean ~ Hero_s + Att_scaled, data = PsyRole))
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE)

paste0("Welders analysis")

WeldRole <- subset(Spec_df, Spec_df$Context == "W")
WeldRole$Hero_s <- scale(WeldRole$Hero)
WeldRole$Att_scaled <- scale(WeldRole$Att)
summary(WeldMod<-lm(Mean ~ Hero_s + Att_scaled, data = WeldRole))
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE)
paste0("DILEMMAS analysis")

paste0("Villain analysis")

paste0("Firefighter analysis")


FireRole <- subset(Spec_df, Spec_df$Context == "F")

FireRole$Villain_scaled <- scale(FireRole$Vill)
FireRole$Att_scaled <- scale(FireRole$Att)

summary(FireMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = FireRole))
FfMod_typeIII <- car::Anova(FireMod, type = "III")
eta_squared(FfMod_typeIII, partial = TRUE)


paste0("Nurses analysis")

HCrole <- subset(Spec_df, Spec_df$Context == "N")
HCrole$Villain_scaled <- scale(HCrole$Vill)
HCrole$Att_scaled <- scale(HCrole$Att)
summary(HCMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = HCrole))
HCMod_typeIII <- car::Anova(HCMod, type = "III")
eta_squared(HCMod_typeIII, partial = TRUE)



paste0("Soldier analysis")

MilitaryRole <- subset(Spec_df, Spec_df$Context == "S")
MilitaryRole$Villain_scaled <- scale(MilitaryRole$Vill)
MilitaryRole$Att_scaled <- scale(MilitaryRole$Att)
summary(MilitaryMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = MilitaryRole))
HCMil_typeIII <- car::Anova(MilitaryMod, type = "III")
eta_squared(HCMil_typeIII, partial = TRUE)

paste0("Journalists analysis")

JourRole <- subset(Spec_df, Spec_df$Context == "J")
JourRole$Villain_scaled <- scale(JourRole$Vill)
JourRole$Att_scaled <- scale(JourRole$Att)
summary(JourMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = JourRole))
JourtypeIII <- car::Anova(JourMod, type = "III")
eta_squared(JourtypeIII, partial = TRUE)


paste0("Psy analysis")

PsyRole <- subset(Spec_df, Spec_df$Context == "P")
PsyRole$Villain_scaled <- scale(PsyRole$Vill)
PsyRole$Att_scaled <- scale(PsyRole$Att)
summary(PsyMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = PsyRole))
PsyTypeIII <- car::Anova(PsyMod, type = "III")
eta_squared(PsyTypeIII, partial = TRUE)

paste0("Welders analysis")

WeldRole <- subset(Spec_df, Spec_df$Context == "W")
WeldRole$Villain_scaled <- scale(WeldRole$Vill)
WeldRole$Att_scaled <- scale(WeldRole$Att)
summary(WeldMod<-lm(Mean ~ Villain_scaled + Att_scaled, data = WeldRole))
WeldTypeIII <- car::Anova(WeldMod, type = "III")
eta_squared(WeldTypeIII, partial = TRUE)
results_tbl <- map_dfr(contexts, function(ctx) {
  df_ctx <- Gen_df %>%
    filter(Context == ctx) %>%
    select(Mean, Hero) %>%
    tidyr::drop_na(Mean, Hero) %>%
    mutate(Hero_s = as.numeric(scale(Hero)))

  # Linear model
  mod <- lm(Mean ~ Hero_s, data = df_ctx)

  # Coef table (grab the slope for Hero_s)
  coef_row <- tidy(mod) %>%
    filter(term == "Hero_s") %>%
    transmute(
      Context = ctx,
      Beta = estimate,
      t = statistic,
      p = p.value
    )

  # Type III ANOVA + partial eta^2
  a3 <- car::Anova(mod, type = "III")
  et <- effectsize::eta_squared(a3, partial = TRUE) %>%
    as.data.frame()

  # Pull partial eta^2 for the Hero_s effect
  pe2 <- et %>%
    dplyr::filter(Parameter == "Hero_s") %>%
    dplyr::pull(Eta2)

  coef_row %>%
    mutate(partial_eta2 = pe2)
})
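The table code below calls fmt_p(), which is assumed to be defined earlier in the script; if it were not, a minimal formatter consistent with the rendered tables (reporting very small values as "< .001") could look like this:

# Hypothetical minimal p-value formatter (only needed if fmt_p() is not defined earlier)
fmt_p <- function(p) ifelse(p < .001, "< .001", sprintf("%.3f", p))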

# Present a clean HTML table
results_tbl %>%
  mutate(
    Beta = round(Beta, 3),
    t = round(t, 3),
    p = fmt_p(p),
    partial_eta2 = round(partial_eta2, 3)
  ) %>%
  gt() %>%
  tab_header(
    title = md("**Effect of Hero perception on Context independent ARV within each occupation**")
  ) %>%
  cols_label(
    Context = "Context",
    Beta = "β (slope)",
    t = "t",
    p = "p",
    partial_eta2 = "partial η² (Type III)"
  ) %>%
  fmt_missing(everything(), missing_text = "—") %>%
  tab_options(table.font.size = px(14))
Effect of Hero perception on Context independent ARV within each occupation

Context   β (slope)       t        p   partial η² (Type III)
F             0.315   3.460   < .001   0.039
S             0.673   7.552   < .001   0.161
P             0.268   3.072    0.002   0.031
J             0.710   8.015   < .001   0.177
W             0.240   2.831    0.005   0.026
N             0.185   1.995    0.047   0.013
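The same map_dfr pipeline is repeated below for the villain ratings and for the dilemma measure, changing only the data frame and the predictor. A parameterized helper (the hypothetical context_slopes() sketched here) would produce the underlying results table for any of the four versions from a single call; it assumes the contexts vector defined earlier and a predictor column present in the data.

# Hypothetical generalisation of the per-context slope tables
context_slopes <- function(data, predictor) {
  purrr::map_dfr(contexts, function(ctx) {
    df_ctx <- data %>%
      dplyr::filter(Context == ctx) %>%
      dplyr::mutate(pred_s = as.numeric(scale(.data[[predictor]])))
    mod <- lm(Mean ~ pred_s, data = df_ctx)
    a3  <- car::Anova(mod, type = "III")
    pe2 <- effectsize::eta_squared(a3, partial = TRUE) %>%
      as.data.frame() %>%
      dplyr::filter(Parameter == "pred_s") %>%
      dplyr::pull(Eta2)
    broom::tidy(mod) %>%
      dplyr::filter(term == "pred_s") %>%
      dplyr::transmute(Context = ctx, Beta = estimate, t = statistic,
                       p = p.value, partial_eta2 = pe2)
  })
}
# e.g. context_slopes(Gen_df, "Hero") or context_slopes(Spec_df, "Vill")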
results_tbl <- map_dfr(contexts, function(ctx) {
  df_ctx <- Gen_df %>%
    filter(Context == ctx) %>%
    select(Mean, Villain_scaled) %>%
    tidyr::drop_na(Mean, Villain_scaled) %>%
    mutate(Villain_scaled = as.numeric(scale(Villain_scaled)))

  # Linear model
  mod <- lm(Mean ~ Villain_scaled, data = df_ctx)

  # Coef table (grab the slope for Villain_scaled)
  coef_row <- tidy(mod) %>%
    filter(term == "Villain_scaled") %>%
    transmute(
      Context = ctx,
      Beta = estimate,
      t = statistic,
      p = p.value
    )

  # Type III ANOVA + partial eta^2
  a3 <- car::Anova(mod, type = "III")
  et <- effectsize::eta_squared(a3, partial = TRUE) %>%
    as.data.frame()

  # Pull partial eta^2 for the Villain_scaled effect
  pe2 <- et %>%
    dplyr::filter(Parameter == "Villain_scaled") %>%
    dplyr::pull(Eta2)

  coef_row %>%
    mutate(partial_eta2 = pe2)
})

# Present a clean HTML table
results_tbl %>%
  mutate(
    Beta = round(Beta, 3),
    t = round(t, 3),
    p = fmt_p(p),
    partial_eta2 = round(partial_eta2, 3)
  ) %>%
  gt() %>%
  tab_header(
    title = md("**Effect of Villain perception on Context independent ARV within each occupation**")
  ) %>%
  cols_label(
    Context = "Context",
    Beta = "β (slope)",
    t = "t",
    p = "p",
    partial_eta2 = "partial η² (Type III)"
  ) %>%
  fmt_missing(everything(), missing_text = "—") %>%
  tab_options(table.font.size = px(14))
Effect of Villain perception on Context independent ARV within each occupation

Context   β (slope)       t        p   partial η² (Type III)
F             0.019   0.206    0.837   0.000
S            -0.348  -3.663   < .001   0.043
P            -0.051  -0.570    0.569   0.001
J            -0.424  -4.481   < .001   0.063
W             0.071   0.830    0.407   0.002
N             0.126   1.357    0.176   0.006
results_tbl <- map_dfr(contexts, function(ctx) {
  df_ctx <- Spec_df %>%
    filter(Context == ctx) %>%
    select(Mean, Hero) %>%
    tidyr::drop_na(Mean, Hero) %>%
    mutate(Hero_s = as.numeric(scale(Hero)))

  # Linear model
  mod <- lm(Mean ~ Hero_s, data = df_ctx)

  # Coef table (grab the slope for Hero_s)
  coef_row <- tidy(mod) %>%
    filter(term == "Hero_s") %>%
    transmute(
      Context = ctx,
      Beta = estimate,
      t = statistic,
      p = p.value
    )

  # Type III ANOVA + partial eta^2
  a3 <- car::Anova(mod, type = "III")
  et <- effectsize::eta_squared(a3, partial = TRUE) %>%
    as.data.frame()

  # Pull partial eta^2 for the Hero_s effect
  pe2 <- et %>%
    dplyr::filter(Parameter == "Hero_s") %>%
    dplyr::pull(Eta2)

  coef_row %>%
    mutate(partial_eta2 = pe2)
})

# Present a clean HTML table
results_tbl %>%
  mutate(
    Beta = round(Beta, 3),
    t = round(t, 3),
    p = fmt_p(p),
    partial_eta2 = round(partial_eta2, 3)
  ) %>%
  gt() %>%
  tab_header(
    title = md("**Effect of Hero perception on Context dependent ARV (dilemmas) within each occupation**")
  ) %>%
  cols_label(
    Context = "Context",
    Beta = "β (slope)",
    t = "t",
    p = "p",
    partial_eta2 = "partial η² (Type III)"
  ) %>%
  fmt_missing(everything(), missing_text = "—") %>%
  tab_options(table.font.size = px(14))
Effect of Hero perception on Context dependent ARV (dilemmas) within each occupation

Context   β (slope)       t        p   partial η² (Type III)
F             0.209   2.116    0.035   0.015
S             0.540   6.042   < .001   0.109
P             0.288   2.840    0.005   0.026
J             0.473   5.792   < .001   0.101
W             0.350   3.793   < .001   0.046
N             0.185   1.678    0.094   0.009
results_tbl <- map_dfr(contexts, function(ctx) {
  df_ctx <- Spec_df %>%
    filter(Context == ctx) %>%
    select(Mean, Villain_scaled) %>%
    tidyr::drop_na(Mean, Villain_scaled) %>%
    mutate(Villain_scaled = as.numeric(scale(Villain_scaled)))

  # Linear model
  mod <- lm(Mean ~ Villain_scaled, data = df_ctx)

  # Coef table (grab the slope for Villain_scaled)
  coef_row <- tidy(mod) %>%
    filter(term == "Villain_scaled") %>%
    transmute(
      Context = ctx,
      Beta = estimate,
      t = statistic,
      p = p.value
    )

  # Type III ANOVA + partial eta^2
  a3 <- car::Anova(mod, type = "III")
  et <- effectsize::eta_squared(a3, partial = TRUE) %>%
    as.data.frame()

  # Pull partial eta^2 for the Villain_scaled effect
  pe2 <- et %>%
    dplyr::filter(Parameter == "Villain_scaled") %>%
    dplyr::pull(Eta2)

  coef_row %>%
    mutate(partial_eta2 = pe2)
})

# Present a clean HTML table
results_tbl %>%
  mutate(
    Beta = round(Beta, 3),
    t = round(t, 3),
    p = fmt_p(p),
    partial_eta2 = round(partial_eta2, 3)
  ) %>%
  gt() %>%
  tab_header(
    title = md("**Effect of Villain perception on Context dependent ARV (dilemmas) within each occupation**")
  ) %>%
  cols_label(
    Context = "Context",
    Beta = "β (slope)",
    t = "t",
    p = "p",
    partial_eta2 = "partial η² (Type III)"
  ) %>%
  fmt_missing(everything(), missing_text = "—") %>%
  tab_options(table.font.size = px(14))
Effect of Villain perception on Context dependent ARV (dilemmas) within each occupation

Context   β (slope)       t        p   partial η² (Type III)
F            -0.048  -0.487    0.627   0.001
S            -0.439  -4.818   < .001   0.072
P            -0.193  -1.883    0.061   0.012
J            -0.145  -1.690    0.092   0.009
W            -0.017  -0.179    0.858   0.000
N             0.108   0.979    0.328   0.003

Further non-registered (exploratory) analyses:

Any questions can be addressed to Jean Monéger (my contact details can easily be found via Google).