A tool for understanding manipulative framing in media.
RageCheck is a free tool that analyzes online content for linguistic patterns commonly associated with manipulative framing—the kind of language designed to provoke emotional reactions rather than inform.
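For intuition, pattern-based analysis of this kind can be sketched roughly as follows. This is a toy illustration only, not RageCheck's actual detection logic; the pattern list, weights, and scoring formula here are all invented for demonstration.

```python
import re

# Hypothetical patterns and weights, invented for illustration.
OUTRAGE_PATTERNS = {
    r"\b(destroyed|slammed|eviscerated|obliterated)\b": 2.0,    # combat verbs
    r"\b(shocking|outrageous|disgusting|unbelievable)\b": 1.5,  # loaded adjectives
    r"\bthey don'?t want you to know\b": 3.0,                   # conspiracy framing
    r"!{2,}": 1.0,                                              # stacked exclamation marks
}

def rage_score(text: str) -> float:
    """Return a crude 0-10 score based on weighted pattern matches per 100 words."""
    words = max(len(text.split()), 1)
    raw = sum(weight * len(re.findall(pattern, text, re.IGNORECASE))
              for pattern, weight in OUTRAGE_PATTERNS.items())
    return min(10.0, raw / words * 100)  # normalize by length, cap at 10
```

A real analyzer would need far richer linguistic features than keyword matching, but the core idea is the same: score framing, not facts.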
Modern social platforms reward engagement, and outrage generates more engagement than nuance. This creates incentives for content creators to frame information in emotionally provocative ways, regardless of whether that framing is accurate or fair.
RageCheck helps you see these patterns so you can make more informed decisions about what to believe, share, and engage with.
RageCheck does not verify claims or assess accuracy. A high score means content uses manipulative framing—it doesn't mean the underlying claims are false. Conversely, a low score doesn't mean content is true.
Manipulative framing exists across the political spectrum. RageCheck analyzes linguistic patterns regardless of political orientation. Content from any viewpoint can score high or low depending on how it's framed.
RageCheck is a tool, not an authority. Use it as one input among many when evaluating content. Your own judgment, multiple sources, and critical thinking remain essential.
The attention economy has created perverse incentives. Content that makes you angry, afraid, or tribal performs better algorithmically than content that informs or adds nuance. This isn't a conspiracy—it's basic economics. Outrage is engaging, and engagement is monetizable.
The result is an information environment where even accurate information often comes wrapped in manipulative framing. We're all being nudged toward emotional reactions rather than thoughtful responses.
RageCheck exists to make these patterns visible. When you can see the manipulation, you can choose how to respond to it rather than being unconsciously driven by it.
RageCheck is open source. You can inspect the code, see exactly how detection works, and suggest improvements.
View on GitHub

Found a bug? Have a suggestion? Open an issue on GitHub or reach out via the repository. We're especially interested in false positives/negatives and edge cases that could improve detection accuracy.