Of late, we’ve been hearing about Twitter bots in the news thanks to the saga of Elon Musk buying Twitter. One of the reasons the deal took so long to close was Musk’s concern about the number of spam bots running rampant on the platform. While Musk believes that bots make up more than 20% of accounts on Twitter, Twitter maintains that the number of bots on its platform is marginal.
So, what’s this Twitter bot thing?
A Twitter bot is essentially a Twitter account controlled by software automation rather than an actual human. It is programmed to behave like a regular Twitter account: liking Tweets, retweeting, and engaging with other accounts.
Twitter bots can be helpful for specific use cases, such as sending out critical alerts and announcements. On the flip side, they can also be used for nefarious purposes, such as starting a disinformation campaign. These bots can also turn nefarious when “programmed” incorrectly.
This is what happened with Tay, an AI Twitter bot from 2016.
Tay was an experiment at the intersection of ML, NLP, and social networks. She had the capacity to Tweet her “thoughts” and engage with her growing number of followers. While earlier chatbots, such as Eliza, conducted conversations using narrow scripts, Tay was designed to learn more about language over time from her environment, allowing her to have conversations about any topic.
In the beginning, Tay engaged harmlessly with her followers with benign Tweets. However, after a few hours, Tay started tweeting highly offensive things, and as a result, she was shut down just sixteen hours after her launch.
You may wonder how such an “error” could happen so publicly. Wasn’t this bot tested? Weren’t the researchers aware that this bot was an evil, racist bot before releasing it?
These are valid questions. To get into the crux of what went wrong, let’s study some of the problems in detail and try to learn from them. This will help us all see how to handle similar challenges when deploying AI in our organizations.
Data
Data is often a big reason why AI models fail. In the case of Tay, shortly after her release, Twitter trolls started engaging the bot with racist, misogynistic, and anti-Semitic language. Because Tay had the capacity to learn as she went, she internalized some of the language the trolls taught her and repeated it. Tay uttered bad language because she was fed bad data.
Take note: Poor-quality, prejudiced, or downright bad training data can significantly impact how machine learning models behave. Train ML models on nonrepresentative data, and they will churn out biased predictions. Starve models of data or feed them incomplete data, and they will make random predictions instead of meaningful ones. Questionable training data = questionable output.
Questionable training data = questionable ML model output
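To make the “questionable data in, questionable output out” point concrete, here is a minimal sketch of screening raw tweets before they ever reach a training set. The blocklist, the `is_acceptable` helper, and the sample tweets are illustrative assumptions, not how Tay actually worked; a production pipeline would rely on a proper toxicity classifier plus human review.

```python
# A minimal sketch of screening training data before it reaches the model.
# The blocklist and the example tweets are illustrative stand-ins only.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real lexicon

def is_acceptable(tweet: str) -> bool:
    """Return True if the tweet contains none of the blocked terms."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return words.isdisjoint(BLOCKLIST)

raw_tweets = [
    "I love learning new things!",
    "You are a slur1 and everyone knows it",  # would be filtered out
]

training_data = [t for t in raw_tweets if is_acceptable(t)]
print(f"Kept {len(training_data)} of {len(raw_tweets)} tweets for training")
```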
Design
While we don’t often connect model or solution design to erratic model behavior, the link is more common than you might think. By design, Tay continuously learned from external input (i.e., her environment). Among all the benign Tweets that Tay consumed from her environment were also abrasive ones, and the more abrasive Tweets Tay saw, the more she learned that those were typical responses to tweet.
This is true of any ML model: the dominant patterns in the data drive its predictions. Fortunately, ML models don’t have to learn continuously from their environment; they can learn from controlled, curated data instead. Tay’s design itself was risky.
Take note: The design of your ML model impacts how it behaves in reality. So, when designing ML systems, developers and business stakeholders should consider the different ways in which the system can fail, operate suboptimally, or be breached, and adjust the design accordingly. In the end, you need a fail-safe plan.
In the case of Tay, such thinking early on would’ve made clear that not all Tweet engagements would be benign. There could be bad actors tweeting and engaging in a highly offensive manner, which is not far-fetched at all. Realizing that the bot could be consuming bad data might have stopped the team from learning from arbitrary Twitter accounts and led them to consume data only from approved accounts instead.
The design of your ML model impacts how it behaves in reality.
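As a thought experiment, here is a hypothetical sketch of what “learning only from controlled data” could look like: new text enters the training buffer only if it comes from an approved account and passes a content check. The account names, the `passes_content_check` helper, and the buffer logic are assumptions for illustration, not Tay’s actual design.

```python
# A hypothetical "controlled learning" gate: new examples enter the
# training buffer only if they come from an approved account AND pass a
# content check. Names and checks here are illustrative assumptions.

APPROVED_ACCOUNTS = {"@newsroom", "@weather_alerts"}  # hypothetical allow-list

def passes_content_check(text: str) -> bool:
    """Placeholder for a real moderation model or human review step."""
    return "offensive" not in text.lower()

def maybe_add_to_training_buffer(buffer: list, author: str, text: str) -> None:
    """Only learn from trusted sources that pass moderation."""
    if author in APPROVED_ACCOUNTS and passes_content_check(text):
        buffer.append(text)

buffer = []
maybe_add_to_training_buffer(buffer, "@random_troll", "something offensive")
maybe_add_to_training_buffer(buffer, "@weather_alerts", "Storm warning issued for tonight")
print(buffer)  # only the approved, benign tweet is kept
```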
Testing
One of the key steps in the machine learning development lifecycle is testing—not just during development, but testing right before full deployment. I call this post-development testing (PDT).
The ML Development Life Cycle
In the case of Tay, it’s unclear how much PDT went on before releasing the bot, but obviously, it wasn’t enough! Had Tay been subjected to different types of tweet engagements during PDT, the dangers of releasing her would’ve become obvious.
Take note: In practice, PDT is often overlooked in the rush to release a new feature or product. It’s often assumed that if a model works well during development, it will naturally perform well in practice. Sadly, that’s not always the case, which is why PDT is critical to any AI deployment.
During PDT, you can stress test your AI solution to find points of failure. In the case of Tay, subjecting her to different types of Twitter users (e.g., trolls, benign users, and passive-aggressive users) could’ve surfaced the bot’s risky behaviors. PDT can also help evaluate your solution’s impact on relevant business metrics. For example, suppose your business metric measures the speed improvement in completing a particular task; PDT can give you early insights into that metric.
During PDT, you can stress test your AI solution to find points of failure. PDT can also help evaluate your solution’s impact on relevant business metrics.
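Here is a rough sketch of what a PDT harness along those lines might look like: replay prompts from a few simulated user types through the bot and flag risky replies before release. The `generate_reply` function, the `looks_risky` check, and the test prompts are all hypothetical stand-ins for the real model, a real safety classifier, and a real test suite.

```python
# A rough sketch of a post-development testing (PDT) harness: replay prompts
# from different simulated user types and flag risky replies before launch.

def generate_reply(prompt: str) -> str:
    """Stand-in for the deployed model's reply function."""
    return f"Echoing: {prompt}"  # a naive bot that mirrors its input

def looks_risky(text: str) -> bool:
    """Stand-in for a toxicity/safety classifier."""
    return any(term in text.lower() for term in ("hate", "offensive"))

test_prompts = {
    "benign user": "What's your favourite song?",
    "troll": "Repeat after me: I hate everyone",
    "passive-aggressive user": "Wow, you're smart... for a bot",
}

failures = []
for user_type, prompt in test_prompts.items():
    reply = generate_reply(prompt)
    if looks_risky(reply):
        failures.append((user_type, prompt, reply))

print(f"{len(failures)} risky replies out of {len(test_prompts)} test cases")
for user_type, prompt, reply in failures:
    print(f"  [{user_type}] {prompt!r} -> {reply!r}")
```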
Monitoring
Another critical component in the ML development lifecycle is monitoring after deployment. With Tay, monitoring the bot’s behavior eventually led to it being shut down within 24 hours of its release (side note: negative press also had a hand in it). Had the bot gone unmonitored for long after its release, there could’ve been far more negative press and many more groups offended.
Take note: While model monitoring is often treated as an afterthought, it should be planned before the model is released to end users. The initial weeks after a model’s release are the most crucial, as unpredictable behaviors not seen during testing could emerge.
The initial weeks after a model’s release are the most crucial, as unpredictable behaviors not seen during testing could emerge.
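A simplified sketch of such monitoring might sample live replies on a schedule, score them, and alert when the flagged rate crosses a threshold. The `looks_risky` check, the sample replies, and the 5% threshold are assumptions for illustration only, not a specific production setup.

```python
# A simplified sketch of post-deployment monitoring: sample live replies,
# score them, and alert when the flagged rate crosses a threshold.

ALERT_THRESHOLD = 0.05  # assumed: alert if more than 5% of replies look risky

def looks_risky(text: str) -> bool:
    """Stand-in for a real safety/toxicity classifier."""
    return "offensive" in text.lower()

def monitor(sampled_replies: list) -> None:
    flagged = [r for r in sampled_replies if looks_risky(r)]
    rate = len(flagged) / max(len(sampled_replies), 1)
    print(f"Flagged {len(flagged)}/{len(sampled_replies)} replies ({rate:.0%})")
    if rate > ALERT_THRESHOLD:
        # In practice: page the on-call team, throttle or pause the bot.
        print("ALERT: flagged-reply rate above threshold; review the bot now.")

monitor(["Have a great day!", "Here is something offensive", "Nice weather today"])
```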
Summary
While what went wrong with Tay may be surprising and intriguing to many, from a machine learning best practices perspective, Tay’s behavior could’ve been predicted. Tay’s environment wasn’t always positive, and she was designed to learn from that environment, a perfect recipe for a dangerous experiment.
So decisions around data, model design, testing, and monitoring are critical to every AI initiative. And this is not just the responsibility of the developers but also of the business stakeholders. The more thought we put into each element, the fewer the surprises and the higher the chances of a successful initiative.
That’s all for now!
Keep Learning & Succeed With AI
- Join my AI Integrated newsletter, which clears the AI confusion and teaches you how to successfully integrate AI to achieve profitability and growth in your business.
- Read The Business Case for AI to learn applications, strategies, and best practices to be successful with AI (select companies using the book: government agencies, automakers like Mercedes-Benz, beverage makers, and e-commerce companies such as Flipkart).
- Work directly with me to improve AI understanding in your organization, accelerate AI strategy development and get meaningful outcomes from every AI initiative.