CBP and Sentiment Analysis

When you’re traveling across a border, CBP doesn’t just want your likeness for facial recognition; it also wants to know your vibe.

Customs and Border Protection (CBP) has onboarded Fivecast’s ONYX product, which can perform sentiment analysis by drawing from social media sites, including Reddit, X, Meta, and even smaller niche platforms. It tags sentiments like joy, fear, anger, sadness, surprise, and disgust.

It can also link information like your SSN and location to your online identities on social media.

The platform can also recognize objects in images and videos and glean sentiment on specific topics over time.

Sentiment analysis is actually used all the time. For example, companies use it to screen open-ended customer reviews and figure out where sellers, service, and products need improvement.

How does it work? Essentially, words and phrases are assigned sentiment values.
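As a minimal sketch of the idea in Python (the lexicon and its weights here are invented for illustration, not from any real tool):

```python
# A toy lexicon-based scorer. Real lexicons contain thousands of
# hand-scored entries; these words and weights are made-up examples.
LEXICON = {
    "love": 2.0, "great": 1.5, "good": 1.0,
    "bad": -1.0, "wrong": -1.5, "hate": -2.0,
}

def score(text: str) -> float:
    """Sum the sentiment values of every word found in the lexicon."""
    return sum(LEXICON.get(word, 0.0) for word in text.lower().split())

print(score("i love this pillow"))   #  2.0 -> reads as positive
print(score("the pillow is bad"))    # -1.0 -> reads as negative
```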

There are different kinds of models. A rules-based model uses a predetermined lexicon that has already been categorized; a machine-learning model learns as it goes; a hybrid combines both. A model can also be highly specialized, built around medical terminology, for instance.
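For a concrete rules-based example, VADER is a widely used open-source lexicon model that adds hand-tuned heuristics for negation, capitalization, punctuation, and emoji. A small sketch, assuming the `vaderSentiment` pip package:

```python
# Rules-based sentiment with VADER: a pre-built, hand-categorized
# lexicon plus rules for negation, ALL CAPS, punctuation, and emoji.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
for text in ["I love this pillow!", "The pillow is not great."]:
    # polarity_scores returns neg/neu/pos proportions and a
    # normalized "compound" score between -1 and +1.
    print(text, analyzer.polarity_scores(text))
```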

Yes, there is room for error. There are so many variables, and the margins for error are huge because of the way we humans express ourselves. For instance, we often use double negatives like, “I don’t NOT like the pillow,” or sarcasm like, “I love when the product goes to the wrong address,” or even our creative use of emojis.
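You can watch the toy scorer from earlier fall for exactly these traps; it sees only isolated words, so negation and sarcasm sail right past it:

```python
# The word-counting scorer from above, tripped up by context it can't see.
LEXICON = {"like": 1.0, "love": 2.0, "wrong": -1.5}

def score(text: str) -> float:
    return sum(LEXICON.get(word, 0.0) for word in text.lower().split())

# Plain negation: a human reads this as negative; the scorer says +1.0.
print(score("i don't like the pillow"))
# Sarcasm: clearly a complaint, but "love" outweighs "wrong" (2.0 - 1.5 = +0.5).
print(score("i love when the product goes to the wrong address"))
```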

But of course, algorithms just get more developed with more data, and more data means more of the ways we express ourselves online across social media. Better-trained algorithms can pick out those nuances.

Also, sentiment analysis is not just about words; there is visual sentiment analysis too. Our social media images can be grabbed and scanned for sentiment.

But now back to CBP’s use of sentiment analysis. 

We already know how many algorithms have been trained, implemented, and used in ways that criminalize Black and brown folks, poor people, immigrants, and disabled folks.

As humans, we each have biases and learned stereotypes, which influence how we interact with and interpret the world around us. Think about how we as humans are not foolproof at reading body language and facial expressions, at interpreting texts and emails even. 

Now on top of that, think about our biases and stereotypes that all of us have.

Now on top of that, think about how police, co-workers, and neighbors often frame their outlook with a baseline of anti-Blackness and anti-Indigeneity, as well as ableism, classism, and transphobia.

And finally on top of that, layer on algorithms that have been absorbing and vacuuming up all of those stereotypes and biases. Think about how that might impact AI’s potential to read emotion.

So what can we do? Protect your online presence and protect your accounts. Be aware that your online presence can be used as a measure of your sentiment.

And push back against initiatives and legislation that allow CBP and other entities to use sentiment analysis to judge things like worthiness, legitimacy, and character.

As always, opt out of biometric theft and data tracking when you can.
