PRL
What is the Polarization Research Lab?
The Polarization Research Lab is a research group and resource hub dedicated to applying science to the study of polarization and democracy.
Who funds your work?
The Polarization Research Lab is supported by the:
Can I use your data?
Yes; all of our data is open to the public.
We humbly ask that you cite our lab if you use our data in your work. You can cite us with:
Are you partisan?
No. We are nonpartisan academics.
How can I check on what you're doing?
In the spirit of open science and peer review, we provide open access to all of the code used to produce our public goods, so that other researchers can both use our resources and monitor our work.
The code for our public dashboards can be found on GitHub.
US Public Opinion
How is the data collected?
Since September 2022, we've partnered with YouGov to recruit a wide sample of participants from across the United States. We currently field the survey to 1,000 unique respondents each week.
In addition to the core questions we ask each week, we often include sets of questions designed by other political science researchers who apply for survey time on our panel (see our Request for Proposals).
Who are the respondents?
The respondents are paid survey takers from the YouGov survey platform.
You can find the list of demographic variables we collect at this link.
US Officials
Where does the data come from?
Each day, we collect and analyze everything US legislators say in:
- Congressional speeches (via the Congressional Record Parser and internal tools)
- Posts on Twitter/X (via internal tools)
- Newsletters to constituents (via DCinbox)
- Press releases (via internal tools)
We use modern AI (large language models) to characterize the data we collect; these classifications form the basis of our rhetoric rankings.
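To make the daily pipeline concrete, here is a minimal sketch of what a collection-and-classification pass could look like. Everything in it (the `fetch_*` functions, the `Statement` record, and `classify_rhetoric`) is a hypothetical placeholder for illustration, not our production code, which is available on GitHub.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type: one public statement by one legislator.
@dataclass
class Statement:
    legislator: str
    source: str      # e.g. "floor_speech", "tweet", "newsletter", "press_release"
    text: str
    published: date

def fetch_floor_speeches(day: date) -> list[Statement]:
    """Placeholder for pulling that day's Congressional Record speeches."""
    return []

def fetch_tweets(day: date) -> list[Statement]:
    """Placeholder for pulling legislators' Twitter/X posts."""
    return []

def fetch_newsletters(day: date) -> list[Statement]:
    """Placeholder for pulling constituent newsletters (e.g., from DCinbox)."""
    return []

def fetch_press_releases(day: date) -> list[Statement]:
    """Placeholder for pulling official press releases."""
    return []

def classify_rhetoric(text: str) -> dict:
    """Placeholder for the LLM classification step (see the next question)."""
    return {"personal_attack": False, "policy_discussion": True}

def daily_pass(day: date) -> list[dict]:
    """Collect every statement published on `day` and attach rhetoric labels."""
    sources = (fetch_floor_speeches, fetch_tweets,
               fetch_newsletters, fetch_press_releases)
    results = []
    for fetch in sources:
        for stmt in fetch(day):
            results.append({"statement": stmt, "labels": classify_rhetoric(stmt.text)})
    return results
```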
How good are your AI models?
Our rhetoric classification is done using OpenAI's GPT-4o model. Leveraging carefully engineered prompts, we find that our approach outperforms highly trained humans on rhetoric classification tasks.
Our approach also provides context and an explanation for how each text was classified. This doesn't mean our approach is flawless, but we've made extensive efforts to minimize error, and in our validation the model's accuracy exceeds that of trained human raters (see the tables below).
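The snippet below is a rough illustration of this kind of prompt-based classification, using the openai Python package (v1+): it asks a chat model to return a label plus a one-sentence explanation. The prompt wording, the JSON format, and the `gpt-4o` model string are illustrative assumptions, not our engineered production prompts (those are documented in our public code).

```python
import json
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are classifying political rhetoric. "
    "Given the passage, return a JSON object with two keys: "
    '"personal_attack" (true/false) and "explanation" (one sentence).'
)

def classify_passage(text: str) -> dict:
    """Ask the model for a label plus a short explanation of its decision."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name; see the FAQ text above
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example call (output shape is illustrative):
# classify_passage("My colleague's plan is reckless and she knows it.")
# -> {"personal_attack": true, "explanation": "..."}
```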
Model Performance
Below is a comparison of how the model we use performs against trained human annotators -- when it comes to classifying personal attacks and constructive debate.
Personal attacks

| | Accuracy | Precision | Recall |
|---|---|---|---|
| Trained annotators | 92% | 56% | 82% |
| GPT 4.5 | 97% | 98% | 92% |

Constructive debate

| | Accuracy | Precision | Recall |
|---|---|---|---|
| Trained annotators | 80% | 83% | 91% |
| GPT 4.5 | 81% | 84% | 92% |
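For readers less familiar with these metrics, the sketch below shows how accuracy, precision, and recall are computed against gold-standard labels; the example labels are made up purely to illustrate the arithmetic.

```python
def accuracy_precision_recall(gold: list[bool], predicted: list[bool]) -> dict:
    """Standard binary classification metrics against gold-standard labels."""
    tp = sum(g and p for g, p in zip(gold, predicted))        # true positives
    fp = sum((not g) and p for g, p in zip(gold, predicted))  # false positives
    fn = sum(g and (not p) for g, p in zip(gold, predicted))  # false negatives
    correct = sum(g == p for g, p in zip(gold, predicted))
    return {
        "accuracy": correct / len(gold),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Made-up example: True = "contains a personal attack"
gold      = [True, False, False, True, False]
predicted = [True, False, True,  True, False]
print(accuracy_precision_recall(gold, predicted))
# {'accuracy': 0.8, 'precision': 0.666..., 'recall': 1.0}
```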
How can elected officials change their rankings?
Rankings are provided for the current Congress and are updated daily based on data collected and categorized in real time. The rankings are not manually assigned, and reflect how an elected official compares to their colleagues in each rhetoric category (e.g., personal attacks; policy discussion). For an elected official to change their score, they need to change how they talk about politics in the public sphere.
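As a simplified illustration of what a within-category comparison could look like, the snippet below ranks legislators by the share of their statements that fall into a single category such as personal attacks. The counts are hypothetical and this is not our actual ranking methodology; it only shows the general idea of comparing an official to their colleagues.

```python
# Hypothetical counts: legislator -> (statements in the category, total statements)
counts = {
    "Legislator A": (12, 400),
    "Legislator B": (3, 250),
    "Legislator C": (40, 500),
}

# Rate of the category (e.g., personal attacks) per statement.
rates = {name: k / n for name, (k, n) in counts.items()}

# Rank from lowest to highest rate; the percentile shows where each legislator
# falls relative to colleagues in this one category.
ordered = sorted(rates, key=rates.get)
rankings = {
    name: {"rate": rates[name], "percentile": 100 * (i + 1) / len(ordered)}
    for i, name in enumerate(ordered)
}
# e.g., rankings["Legislator B"] -> {'rate': 0.012, 'percentile': 33.3...}
```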
Why don’t you generate a single score for each elected official?
We have designed our project to avoid single scores for each representative for two reasons. First, we do not want to produce a score that can be taken out of context and used as an implied political endorsement in an election. Second, we don't think there is a clear or consistent mix of rhetoric that is optimal. While we think no elected official should engage in personal attacks, some citizens may care about policy above all else, while others might care most about supporting representatives who prioritize compromise. We offer rankings on each category of rhetoric so that users can understand how their representatives compare to others, but we refrain from describing any particular politician as "good" or "bad". Our goal is to provide accurate information, with interpretation left to the end user. This is especially important for fostering objective and nonpartisan research.
What should I do if I disagree with how you assessed some text?
If you disagree with how one of the text examples is categorized, you can flag the text for the Lab to review. Simply click the flag icon next to the example text and a PRL researcher will examine the passage. Thank you for your help!