Legislators don’t just vote on legislation. Their rhetoric shapes the tenor of American political discussion. Incivility among legislators fosters incivility among the citizens who elect them.
The PRL believes that civil discourse lies at the foundation of a healthy democracy. Incivility should be discouraged, but doing so requires data-driven awareness of where and when incivility occurs.
With this tool, we hope to provide a set of resources for citizens, donors, and scientists to track and monitor the discourse of their elected officials.
Can I use your data?
Yes. Our data is free and publicly available under a Creative Commons Attribution license. We ask that you cite the data as:
Westwood, Sean, Yphtach Lelkes, and Matthew Wetzel. (2024). America’s Political Pulse: Elected Officials. https://polarizationresearchlab.org/
Where does this data come from?
Each day, we collect and analyze everything US legislators say across four channels:
- Congressional speeches (via the Congressional Record Parser and internal tools)
- Twitter/X posts (via internal tools)
- Newsletters to constituents (via DCinbox)
- Press releases (via internal tools)
We use modern AI (large language models) to classify the data we collect, and those classifications form the basis of our rhetoric rankings.
How good are your AI models?
Our rhetoric classification uses OpenAI’s GPT-4o model. With carefully engineered prompts, our approach outperforms highly trained human coders on rhetoric classification tasks.
Our approach also provides context and an explanation for how each text was classified. This doesn’t mean our approach is flawless, but we have made extensive efforts to minimize error, reaching a level of accuracy that exceeds that of human raters.
Model Performance
Below is a comparison of how the model we use performs against trained human annotators – when it comes to classifying personal attacks and constructive debate.
**Personal attacks**

| | Accuracy | Precision | Recall |
|---|---|---|---|
| Trained annotators | 92% | 56% | 82% |
| GPT 4.5 | 97% | 91% | 100% |

**Constructive debate**

| | Accuracy | Precision | Recall |
|---|---|---|---|
| Trained annotators | 80% | 83% | 91% |
| GPT 4.5 | 81% | 84% | 92% |
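For readers unfamiliar with these metrics, here is a minimal sketch of how accuracy, precision, and recall are computed for a binary task such as detecting personal attacks. The labels below are made up for illustration and are not the Lab’s data:

```python
def metrics(y_true, y_pred):
    """Compute accuracy, precision, and recall for a binary
    classification task (1 = personal attack, 0 = not)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)                      # share of all texts labeled correctly
    precision = tp / (tp + fp) if tp + fp else 0.0        # of texts flagged as attacks, share that really are
    recall = tp / (tp + fn) if tp + fn else 0.0           # of real attacks, share the model caught
    return accuracy, precision, recall

# Hypothetical gold labels and model predictions:
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
acc, prec, rec = metrics(y_true, y_pred)  # 0.75, 0.75, 0.75
```

High recall means few attacks slip past the classifier; high precision means few texts are wrongly flagged as attacks.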
How can elected officials change their rankings?
Rankings are provided for the current Congress and are updated daily based on data collected and categorized in real time. The rankings are not manually assigned and reflect how an elected official compares to their colleagues in each rhetoric category (e.g., personal attacks; policy discussion). For an elected official to change their score, they need to change how they talk about politics in the public sphere.
Why don’t you generate a single score for each elected official?
We have designed our project to avoid single scores for each representative for two reasons. First, we do not want to produce a score that can be taken out of context and used as an implied political endorsement in an election. Second, we don’t think there is a clear or consistent mix of rhetoric that is optimal. While we believe no elected official should engage in personal attacks, some citizens may care about policy above all else, while others might care most about supporting representatives who prioritize compromise. We offer rankings in each category of rhetoric so that users can see how their representatives compare to others, but we refrain from describing any particular politician as “good” or “bad”. Our goal is to provide accurate information, with interpretation left to the end user. This is especially important for fostering objective and nonpartisan research.
What should I do if I disagree with how you assessed some text?
If you disagree with how one of the text examples is categorized, you can flag this text for the Lab to review. Simply click the flag icon next to the example text and a PRL researcher will examine the passage. Thank you for your help!
What is the Polarization Research Lab?
The Polarization Research Lab is a research group and resource hub dedicated to applying science to the study of polarization and democracy.
Are you partisan?
No. We are nonpartisan academics.
Who funds your work?
The Polarization Research Lab is supported by:
How can I check on what you’re doing?
In the spirit of open science and peer review, we provide open access to all the code used to produce our public resources, so that other researchers can both use them and monitor our work.
The code for our data collection, rhetoric classification, summary statistics, and public dashboards can be found on GitHub.