Integrating ordinal, multitask deep learning with faceted Rasch measurement theory
Debiased, explainable, interval measurement of hate speech
Mar. 2020
Research Team
● Chris Kennedy (Lead) – Biostatistics PhD student
● Claudia von Vacano (PI) – Policy, Organizations, Measurement & Evaluation PhD
● Geoff Bacon – Linguistics PhD candidate
● Alexander Sahn – Political Science PhD candidate
● Nora Broege – Sociology PhD and post-doc at Rutgers University
And with special thanks to:
● Professors Mark Wilson & Karen Draney
○ Graduate School of Education
○ Berkeley Evaluation & Assessment Research Center
Previous BEAR Seminar talk
● October 2017 - Phase 1 of this project
Scientific goals of our method
1. Create an outcome variable that is precise (interval) and has
minimal bias from the humans who labeled the data
- item response theory
2. Use machine learning to predict that outcome measure in a
scalable way, also with minimal human bias, and with a clear
explanation of what led to the predicted score
- deep learning
Categorical, ordinal, and interval variables
● Categorical / nominal variables
○ Variable value is a code for different qualitative labels
○ Can be seen as a way of encoding multiple mutually exclusive binary variables
○ E.g. color: red (1), blue (2), or green (3)
○ Alive: yes (1), no (0)
● Ordinal variables
○ Values have an ordering from lower to higher on some variable
○ We cannot take differences between values because the exact distance between them is unknown
○ E.g. Likert scales: strongly disagree (0), disagree (1), neutral (2), agree (3), strongly agree (4)
○ Or disease severity: mild (1), moderate (2), severe (3)
● Interval variables
○ Continuous variable in which differences between values are meaningful
○ I.e. magnitude or scale of units is constant across the range of the variable
○ A "ruler" that measures the location on an abstract continuum of a variable
Applicable to two types of supervised outcomes to measure
1. Complex outcome variable currently measured as a
human-reviewed binary or ordinal variable for convenience,
but that could be decomposed into multiple constituent
components
2. Existing outcome variables measured as an index of multiple
components rated by human reviewers, but not yet using
item response theory
Our method applies to any human-rated data used for supervised classification or regression
Examples: Text
● Hate speech
● Toxic language / bullying
● Sentiment
● Essay grading
● Conference abstract or article review
● Treason analysis of audio transcripts

Examples: Images
● Radiological image review (e.g. CT severity index for acute pancreatitis)
● Grading of agricultural produce
● Satellite image rating for development
● Pornography detection
● Artist identification of paintings
● Microscopy analysis of liver biopsy
Also: time-series, like ECG classification. Other ideas from you?
How does our method work? Details to be described
● The core task is to decompose a single-question outcome (e.g. "Is this comment hate
speech?") into a series (say 5) of ordinal components (respect, dehumanization, insult,
etc.)
● Recruit human labelers to review observations on those components (online survey)
○ Batches of comments should be created in an overlapping fashion so that the labelers are densely linked in
a single network
● Apply item response theory to aggregate those components into a continuous scale
○ Simultaneously estimate the bias of each labeler and eliminate its influence from the scale
○ Estimate the randomness in each labeler and remove labelers with inconsistent labels
● Use deep learning in a multitask architecture to predict each component (ordinal
classification) using the human labeled data, also incorporating the bias of labelers
○ The deep learning component predictions are then transformed to a continuous scale through IRT
● The result is a debiased, explainable, efficient prediction machine for measuring the
construct of interest on a continuous, interval scale (with std. errors)
Standard machine learning approach
● binary definition of hate speech (yes or no) - qualitative
● probability prediction: Pr(Y = 1 | X)
● no sense of magnitude: how extreme is the hate speech?
● biased by the interpretation of the humans who labeled the data
● no explanation
● not generalizable to future time periods when our sensitivity to
hate speech may change
Machine learning model
[Comment someone makes on Twitter]
Hey AI, is that comment hate speech?
Research team, social media platform, or judge/jury
I estimate 37% probability of being hate
speech.
Standard approach is limited, and not considered measurement
New approach:
● continuous hate speech scale (roughly -4.0 to +4.0)
● magnitude is incorporated - true quantitative measurement
● regression prediction: E[Y | X]
● prediction can be explained by intermediate components
● debiased from how humans labeled the data
● generalizes beyond the specific components measured, comments
analyzed, and raters who labeled data
Our machine learning model
[Comment someone makes on Twitter]
Hey AI, where do you place this comment on your hate speech scale?
Research team, social media platform, or judge/jury
I estimate the comment at 2.5 (+/- 0.3) on the hate speech scale - an extremely hateful comment. My reasoning is that this comment appears to have strongly negative sentiment (75% certainty), likely threatens violence (85% certainty), includes an identity group target (99%), and is likely humiliating to the target group (92%).
Our method measures hate speech as an interval variable, and explains why
Review our scientific contribution
● We develop a method for integrating the measurement benefits of item response
theory with the scalable, accurate prediction provided by deep learning
● Our method makes five contributions:
○ Realistic granularity: Outcomes can be measured as interval variables on a continuous scale, rather
than simplistic yes/no binaries
○ Labeler debiasing: we estimate the first-order bias of individual labelers (a "fixed effect") and
eliminate that bias from the estimation of the continuous outcome
○ Sample efficiency: we can achieve greater predictive accuracy for a given sample size because our
ordinal components become supervised latent variables in a multitask neural architecture
○ Explainability: we can explain the predicted continuous score of any observation by examining the
predictions of the individual components
○ Labeler quality: we show how item response theory can estimate the quality of labelers' responses,
allowing low-quality labelers to be removed or down-weighted
● In sum, our method stands to drastically change how we measure outcomes and
conduct machine learning in big data
Agenda for Talk
1. Theorize construct (reference set, components)
2. Collect comments (web APIs)
3. Label components (crowdsourcing)
4. Scale (faceted Rasch IRT)
5. Predict (deep learning for NLP)
Theory development
● EDUC 274A (K. Draney) - Fundamentals of Measurement
● EDUC 274B (M. Wilson) - Statistics of IRT
Construct Map: theoretical levels of hate speech
Qualitative ordered value, does not reflect an
interval value on the final hate speech scale
Reference set: empirical grounding of theory
● 10+ comments for each of our theorized levels
● Forms an empirical lattice that constrains the theory
● Prompts introspection and debate, leading to improved
understanding of how we truly theorize our construct and its
associated levels
● Leads to confirmatory analysis, not exploratory
Components of hate speech
Survey details
● Initial screen on identity group targets
○ Major identity groups: race/ethnicity, gender, religion, sexual orientation, disability, age
○ One follow-up question for sub-identity group for each major group
● Hate speech scale questions (~10)
● Participant demographics
○ Gender, education, race, age, income, religion, sexual orientation, political ideology
● Free response feedback (optional)
Comment Collection
Stream comments
Reddit: Most recently published comments on any post in /r/all.
Twitter: Most recent tweets from their streaming API.
YouTube: Search for videos around major US cities, take all
comments on them.
Class imbalance, statistical power, & budget limits
● Binarized hate speech is < 1% of general internet content
● If we had a yes/no outcome for hate speech, what hate speech
proportion would we prefer in the training data?
● For an 8-level hate speech construct, we want a mostly even
distribution over each level (~12.5% each)
● Our labeling budget is finite, so we want to avoid spending a
ton of money on imbalanced training data
Sample comments
We’ve collected over 75 million comments, but we only want to annotate 50k.
Over-sample comments with identity groups, and stratify on estimated
hatefulness.
[Figure: comment sample split - 20k / 20k / 10k]
Comment batch creation
Augment comments
Perspective API: Trained NLP models from Jigsaw for detecting
various kinds of abusive language. We use their identity attack and
threat models.
Word embeddings help us answer “How relevant is this comment to
the identity groups we’re looking for?”
Bin comments
We use the metadata added from step 2 to bin the comments into 5 bins:
- Not relevant (does not appear to target identity groups)
- Relevant and low on hate scale
- Relevant and neutral on hate scale
- Relevant and high
- Relevant and very high
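The binning step above can be sketched as a simple lookup. The cutoff values and parameter names below are hypothetical placeholders, not the project's actual thresholds; they only illustrate mapping the two metadata scores onto the 5 bins.

```python
# Sketch of the 5-way binning logic. relevance_cutoff and cutoffs are
# illustrative values, not the project's real thresholds.
def bin_comment(identity_relevance, hate_score,
                relevance_cutoff=0.5,
                cutoffs=(0.2, 0.5, 0.8)):
    """Assign a comment to one of the 5 bins described above."""
    if identity_relevance < relevance_cutoff:
        return "not relevant"      # does not appear to target identity groups
    low, high, very_high = cutoffs
    if hate_score < low:
        return "relevant, low"
    if hate_score < high:
        return "relevant, neutral"
    if hate_score < very_high:
        return "relevant, high"
    return "relevant, very high"
```

In practice the two inputs would come from the Perspective API models and the word-embedding relevance score described in the previous step.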
Stratification: maximize power without eliminating any cells

                     Positive   Neutral   Low Hate   High Hate
Identity groups      7,500      5,000     18,300     14,200
No identity groups   5,000
Hypothesis dimension: E[ hate score | X ]
Relevance dimension: Pr[ identity groups = 1 | X ]
Total labeling budget: 50,000 comments
Comments downloaded: 75 million
Sampling design for human review of comments
Naive annotation plan can lead to distinct networks with disjoint subsets
● Batches of 5 distinct comments
● Each batch rated by 3 labelers
● Each labeler rates only one batch
● We cannot differentiate whether a batch is more hateful or a set of raters is
more lenient in their rating - we can't calibrate across batches
[Diagram: Batches 1-3, each reviewed by a disjoint trio of raters (R1-R3, R4-R6, R7-R9)]
● Allowing workers to label comments randomly, like on Figure 8's system, would likely also lead to disjoint subsets
○ But maybe one could get lucky and not have any disjoint subsets?
Overlapping reviews lead to a single linked network of raters + comments
[Diagram: bipartite network linking 5 labelers/annotators (Raters A-E) to 7 comments]
● Here is an example with 7 comments reviewed by 5 raters; every rater reviews 3 comments
● Each review creates a link (or connection, edge) between the rater and the comment.
Unfolded version of the same network
Densely linked network for human labeler debiasing
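The contrast between the disjoint and overlapping designs can be verified with a simple connected-components count over the rater-comment graph. The toy assignments below are illustrative: each batch is collapsed to a single comment in the disjoint case, and the overlapping case mirrors the 5-rater, 7-comment example above.

```python
from collections import defaultdict

def connected_components(assignments):
    """Count connected components in the bipartite rater-comment graph.

    assignments: list of (rater, comment) review pairs."""
    graph = defaultdict(set)
    for rater, comment in assignments:
        graph[("r", rater)].add(("c", comment))
        graph[("c", comment)].add(("r", rater))
    seen, components = set(), 0
    for node in graph:
        if node in seen:
            continue
        components += 1
        stack = [node]  # depth-first search over this component
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(graph[cur] - seen)
    return components

# Disjoint design: each trio of raters sees only its own batch -> 3 components.
disjoint = [(r, b) for b in range(3) for r in range(3 * b, 3 * b + 3)]
# Overlapping design: each rater reviews 3 consecutive comments -> 1 component.
overlapping = [(r, (r + k) % 7) for r in range(5) for k in range(3)]
```

A single component is what allows the IRT model to calibrate rater severity across all batches.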
Scaling
Overview of item response theory scaling
● Item response theory analyzes the patterns in the ordinal survey responses
(components of hate speech) to create a continuous latent variable (hate speech scale)
● That continuous hate speech score best explains the combined ratings on the survey
instrument for each comment, after correcting for reviewer bias.
● While doing that, IRT simultaneously estimates:
○ Where each survey item falls on the hate speech scale (where it is most informative)
○ Where each response option for each item falls on the hate speech scale
○ The bias (or "severity") of each annotator
● This estimation is through maximum likelihood
○ We use joint maximum likelihood, but marginal or conditional maximum likelihood are options
● It provides statistical diagnostics to evaluate the results
○ Reliability is the primary metric, ranging from 0 to 1. Our scale has a reliability of 0.94.
■ Interpretation: similar to R², it is the proportion of variance accounted for by the model
○ It also generates fit statistics for each reviewer, which can identify reviewers who are selecting randomly
○ Fit statistics for each survey item tell us how well the item fits into the scale
● Readings: Wilson (2004) Constructing Measures (Ch. 5 - 7), Wright & Masters (1982) Rating Scale Analysis
Item response theory estimation goal (slightly simplified)
Predict probability of response option R on item I for comment C by annotator A
Based on the subtraction formula:
hate score for comment C
- hate score for item I
- annotator A's bias (aka severity)
- hate score for response option R
See formula 1 in manuscript
for the more technical version
(Item, annotator, and response-option terms are fixed effects; the comment's hate score is the latent variable of interest, a random effect.)
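The subtraction formula can be sketched as a rating-scale facets model in Python. The parameterization (cumulative step logits) and the tau values are illustrative simplifications, not the manuscript's exact Formula 1.

```python
import math

def response_probs(theta_c, delta_i, alpha_a, taus):
    """P(response = 0..K) for one comment/item/annotator triple.

    Assumed rating-scale form: the log-odds of step k over step k-1 is
        theta_c - delta_i - alpha_a - tau_k
    i.e. comment hate score minus item location, annotator severity
    (bias), and response-option threshold.
    """
    logits = [0.0]  # category 0 is the reference category
    for tau_k in taus:
        logits.append(logits[-1] + theta_c - delta_i - alpha_a - tau_k)
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]
```

A more hateful comment (higher theta_c) shifts probability mass toward higher response categories, while a more severe annotator (higher alpha_a) shifts it back down, which is exactly the bias the model corrects for.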
Estimation methods for IRT
(Add in highlights on JML, MML, CML, non-parametric)
Scaling results from item response theory
[Scale, from most to least hateful: Most hateful, Very hateful, Somewhat hateful, Neutral, Counterspeech, Supportive]
Reliability: 0.94!
Item fit statistics
Example scaling results (trigger warning)
Crowdsource worker quality analysis
Crowdsource worker quality: identity rate
Worker quality: mean-squared statistic vs. identity rate
Scaled reference set - initial
Scaled reference set - revised
Estimating thresholds for theorized levels
Distribution across social media platforms
Comparison to single binary hate item
We have created a measure of our construct.
Can we predict it with machine learning on raw text?
With robots, if possible.
Short Circuit (1986)
Current best practice in supervised NLP

[Diagram] Raw comment text → Deep NLP (BERT, ULMFiT, GPT) → language representation → fully connected layers → latent variables related to hate speech → binary hate speech status
Comparison to related work
[Diagram] Raw comment text → Deep NLP (USE, XLNet, RoBERTa, ULMFiT) → language representation → fully connected hidden layers → intermediate ordinal outcomes (ratings on hate scale items 1. Sentiment, 2. Respect, 3. Insult, 4. Humiliate, 5. Status, 6. Dehumanize, 7. Violence, 8. Genocide, 9. Attack-Defend; loss: quadratic weighted kappa) → Item Response Theory as a non-linear activation function, with estimated labeler bias ("fixed effect") as input → final outcome: continuous hate score (loss: squared-error)

Caption: "Learning to rate" - a neural architecture for predicting a continuous score with multiple intermediate outcomes, labeler bias adjustment, and IRT activation
Correlation of items suggests benefit from multitask approach
Ordinal classification with labeler bias adjustment

[Diagram] Final hidden layer, with estimated labeler bias ("fixed effect") concatenated onto it → proportional odds latent variable → output: violence item (loss: quadratic weighted kappa)
Item wording: "This comment calls for using violence against the group(s) you previously identified."
Response options: 1. Strongly disagree, 2. Disagree, 3. Neutral, 4. Agree, 5. Strongly agree
The diagram compares predicted probabilities using only text (no bias adjustment) vs. with bias adjustment.
(See Vargas et al. 2019, Deep ordinal classification)
Quadratic weighted kappa loss: cost matrix (predicted category × actual category)
Quadratic weighted kappa example (actual category: Disagree):

Predicted prob      12%      18%   35%       20%    15%
Distance            1        0     1         2      3
Weight              0.0625   0     0.0625    0.25   0.5625
Loss contribution   0.0075   0     0.021875  0.05   0.084375   → total = 0.16375

Compare to NLL: -log(0.18) = 1.715
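The worked example can be reproduced in a few lines. `qwk_loss` is a hypothetical helper, not project code; it implements the probability-weighted quadratic cost with weights (distance / (K - 1))² for K = 5 categories.

```python
import math

def qwk_loss(probs, actual, n_categories=5):
    """Quadratic-weighted loss: sum of predicted probability times the
    squared normalized distance from the actual category."""
    return sum(p * ((k - actual) / (n_categories - 1)) ** 2
               for k, p in enumerate(probs))

probs = [0.12, 0.18, 0.35, 0.20, 0.15]
loss = qwk_loss(probs, actual=1)   # actual category is "Disagree" (index 1)
nll = -math.log(probs[1])          # negative log-likelihood for comparison
```

Unlike cross-entropy, this loss rewards near-misses: probability placed on "Neutral" costs far less than probability placed on "Strongly agree."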
Labeler bias as an auxiliary input
● During deep learning, each observation (comment text plus the set of ratings for a
given comment) will have the estimated labeler bias (severity) as an auxiliary input
● Labeler bias is a value on the hate speech scale: centered around 0 and within (-3, +3)
● We include this scalar value as another latent variable in the final hidden layer
● Those values are then inputs into the latent hidden value for each item's ordinal
prediction
○ (Which is evaluated with quadratic weighted kappa loss)
● The effect of the bias input is that the neural network can adjust its probability
predictions for each item based on whether the rater for that observation was more or
less severe.
○ Ex.: based on the text of a comment, the network might predict "strongly agree" for the genocide item
○ But if it knows the rater is severe, it should shift its prediction down, e.g. to "agree" or even "disagree"
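The concatenation step can be sketched with numpy. All shapes and layer sizes below are illustrative, and the output head is a plain softmax rather than the proportional-odds head used in the actual architecture.

```python
import numpy as np

# Illustrative sizes: 4 observations, 16-d hidden layer, 5 response options.
rng = np.random.default_rng(0)
batch, hidden_dim, n_categories = 4, 16, 5

hidden = rng.normal(size=(batch, hidden_dim))       # final hidden layer (from text)
bias = rng.normal(size=(batch, 1))                  # per-rater severity from IRT
augmented = np.concatenate([hidden, bias], axis=1)  # (batch, hidden_dim + 1)

# One item's output head: linear layer then softmax over the 5 options.
W = rng.normal(size=(hidden_dim + 1, n_categories))
logits = augmented @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
```

Because the bias scalar enters before the output head, the network can learn to shift its predicted category distribution up or down depending on the rater's severity, exactly as in the genocide-item example above.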
Categorical classification with labeler bias adjustment
[Diagram] Final hidden layer, with estimated labeler bias ("fixed effect") concatenated onto it → softmax activation → output: violence item (loss: categorical cross-entropy)
Item wording: "This comment calls for using violence against the group(s) you previously identified."
Response options: 1. Strongly disagree, 2. Disagree, 3. Neutral, 4. Agree, 5. Strongly agree
The diagram compares predicted probabilities using only text (no bias adjustment) vs. with bias adjustment.
Statistics of ordinal classification
(Add in some here)
Current results (work in progress)
Future work
● Partnerships - interest from Facebook, Pinterest, Blizzard, et al.
● Causal inference (interrupted time series, randomized interventions, user accounts)
● Listening to victims: collect stories and experiences of hate speech
● Focus on genocide in developing countries (Sri Lanka, Myanmar, India, Brazil)
● Improved labeling: incorporate message context
● New platforms: Facebook, Instagram, Wikipedia, game chats (Blizzard)
● New languages
● New constructs: toxicity
● New data types: images, audio, video
● Other applications: automated essay grading, etc.
● Exploring a possible patent application
Appendix
Implementation diagram
Technical implementation: Google serverless functions
Labeling instrument (Qualtrics)
Rater recruitment (Amazon Mechanical Turk)
[Diagram] Qualtrics labeling instrument and Mechanical Turk raters ↔ serverless functions pool (reserve comment batch / complete comment batch) ↔ Google Cloud SQL database (comment batches, ratings)
Labeler bias as auxiliary input (violence item) - ordinal version

[Diagram] Raw comment text → Deep NLP (USE, XLNet, RoBERTa, ULMFiT) → language representation → fully connected hidden layers → final hidden layer, with estimated labeler bias ("fixed effect") concatenated onto it → proportional odds latent variable → output: violence item (loss: quadratic weighted kappa)
Item wording: "This comment calls for using violence against the group(s) you previously identified."
Response options: 1. Strongly disagree, 2. Disagree, 3. Neutral, 4. Agree, 5. Strongly agree
Labeler bias as auxiliary input (violence item) - categorical version

[Diagram] Raw comment text → Deep NLP (USE, XLNet, RoBERTa, ULMFiT) → language representation → fully connected hidden layers → final hidden layer, with estimated labeler bias ("fixed effect") concatenated onto it → softmax activation → output: violence item (loss: categorical cross-entropy)
Item wording: "This comment calls for using violence against the group(s) you previously identified."
Response options: 1. Strongly disagree, 2. Disagree, 3. Neutral, 4. Agree, 5. Strongly agree