Understanding Sentiment in Key Congressional Meetings
In late 2023, we tested several large cloud platforms and a few software as a service (SaaS) platforms to better understand the capabilities available for sentiment analysis. Typically, sentiment analysis is done with small content sets such as tweets or customer service bot interactions. For our purposes, we focused on analyzing sentiment in large files, sometimes even video, from key congressional meetings.
We found that getting data from the Library of Congress or CSPAN could be more complicated than implementing the technology itself.
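To make the basic operation concrete, here is a minimal sketch of the kind of sentiment scoring involved, using the open-source Hugging Face transformers library rather than any of the cloud or SaaS platforms we evaluated; the sample text and printed output are illustrative only.

```python
# Minimal sentiment-analysis sketch using the open-source transformers library.
# Illustrative only; this is not the tooling from any platform we tested.
from transformers import pipeline

# The default sentiment model handles short passages, not multi-hour transcripts.
classifier = pipeline("sentiment-analysis")

sample = "The chair thanked the witness for a thorough and candid answer."
result = classifier(sample)[0]
print(result["label"], round(result["score"], 3))
# e.g. POSITIVE 0.999 -- exact output varies with the model version
```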
Navigating Technical Limitations
Most of the large cloud providers had similar token and character limits, which effectively cap how much text can be submitted at once. While not insurmountable, this limitation meant that long meetings (over two hours) had to be broken up, analyzed in pieces, and then stitched back together, a doable but labor-intensive process. Moreover, when video is the source, which is increasingly common, the content must first be transcribed.
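The sketch below illustrates that chunk-and-stitch workaround under stated assumptions: a plain-text transcript is already available (for video, a speech-to-text step would come first), MAX_CHARS is a placeholder for whatever cap a provider enforces, and averaging signed chunk scores is just one reasonable way to reassemble an overall result.

```python
# Sketch of the chunk-and-stitch workaround for provider size limits.
# Assumptions: the transcript is already plain text; MAX_CHARS stands in for
# a provider's character/token cap; averaging signed scores is one reasonable
# way to stitch chunk-level results back together, not the only one.
from transformers import pipeline

MAX_CHARS = 2000  # placeholder for a provider's character/token cap

def chunk_text(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Split a long transcript into chunks under the size cap,
    breaking on sentence boundaries where possible."""
    chunks, current = [], ""
    for sentence in text.split(". "):
        if current and len(current) + len(sentence) + 2 > max_chars:
            chunks.append(current.strip())
            current = ""
        current += sentence + ". "
    if current.strip():
        chunks.append(current.strip())
    return chunks

def meeting_sentiment(transcript: str) -> float:
    """Score each chunk, then average signed scores (-1..1) across the meeting."""
    classifier = pipeline("sentiment-analysis")
    scores = []
    for chunk in chunk_text(transcript):
        result = classifier(chunk, truncation=True)[0]
        signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
        scores.append(signed)
    return sum(scores) / len(scores) if scores else 0.0

# For video sources, a transcription step would come first, e.g. with openai-whisper:
#   import whisper
#   transcript = whisper.load_model("base").transcribe("hearing.mp4")["text"]
```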
We did find several SaaS providers that bypassed these limitations, sourced content directly from a URL, and generated the analytics we had hoped for. However, the costs associated with these services were quite high.
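For comparison, the snippet below approximates, in a very rough way, the URL-sourcing step those services provide: fetch a transcript over HTTP and score an excerpt of it. The URL is hypothetical, and real offerings handle full-length content and add transcription, speaker diarization, and richer analytics on top of this basic step.

```python
# Rough approximation of URL-based sourcing: fetch a transcript and score an
# excerpt. The URL is hypothetical; real SaaS platforms handle full-length
# content and layer much richer analytics on top.
import requests
from transformers import pipeline

TRANSCRIPT_URL = "https://example.org/hearing-transcript.txt"  # hypothetical

response = requests.get(TRANSCRIPT_URL, timeout=30)
response.raise_for_status()

classifier = pipeline("sentiment-analysis")
excerpt = response.text[:2000]  # a provider would chunk the full transcript, as above
result = classifier(excerpt, truncation=True)[0]
print(result["label"], round(result["score"], 3))
```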
Optimism for Emerging Technologies
The analytics above come from a proof of concept we conducted using a SaaS platform. The topics in the meeting we analyzed were not contentious, which is why the discussion was largely neutral. It would be more interesting to examine engagement on issues that are highly partisan.
Although we haven’t revisited these technologies since late 2023, we remain very optimistic about future innovations and potential use cases. Over the coming months, we plan to explore what has changed and whether we can achieve our goals with minimal effort and manageable costs. The pace of innovation in artificial intelligence is astonishing, with weekly announcements, continual improvements in large language models (LLMs), and advancements in input methods that extend far beyond simple text prompts.