Singapore’s Instagram users got a bit of a surprise in January: dozens of influencers (users with anywhere from 1,000 to 35,000 followers) posting mundane statuses about finance and budgeting. Singapore’s Ministry of Finance (MOF) paid over 50 influencers to post about the budget in advance of Budget Day, when MOF seeks public feedback on the next year’s budget, in an effort to increase citizen engagement. The sponsored posts did not go unnoticed, and followers immediately began mocking the idea of using semi-famous Instagram personalities to raise awareness of inflationary pressure and the cost of living.
In a way, the MOF’s plan worked: people are already talking about Budget Day, well before February 19th. This initiative follows an earlier Budget Day awareness campaign in 2017, suggesting that influencer marketing worked at least well enough to merit a second attempt. As connectivity increases, particularly through mobile devices and among younger demographics, similar initiatives seem likely to crop up around the world. These may help boost civic engagement and awareness of current affairs, even if they come across as a bit corny or forced. But, as with any technology, there is also the potential for misuse.
In preparation for Budget Day, MOF engaged a marketing company, StarNgage, to identify and recruit around 50 influencers for the campaign. Exact payment and deliverable terms were not disclosed, but the ministry expects the campaign to reach 225,000 Instagram users—about four percent of Singapore’s population of 5.6 million and six percent of the country’s social network users. MOF did not rely solely on influencers to spread the word: the ministry also released a YouTube video series, tweeted, created a special website, and put out conventional press releases, among other outreach efforts. Engaging influencers is cheap compared to television or print advertising, so if it helps the MOF reach even a small portion of the population, that could be enough of a boost to encourage Singapore’s government, and governments around the world, to create similar campaigns.
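The reach figures above can be sanity-checked with a few lines of arithmetic. This is just a back-of-the-envelope sketch using the numbers quoted in this article; the implied size of Singapore’s social-network user base is a derived estimate, not a reported figure.

```python
# Sanity-check the reach figures quoted for the MOF campaign.
# Inputs come from the article; derived values are estimates.
expected_reach = 225_000   # Instagram users MOF expects the campaign to reach
population = 5_600_000     # Singapore's population

# Share of the total population reached (article says "about four percent")
reach_of_population = expected_reach / population
print(f"Share of population: {reach_of_population:.1%}")  # ~4.0%

# Working backward from the "six percent of social network users" figure
# gives the implied size of that user base (an estimate):
implied_social_users = expected_reach / 0.06
print(f"Implied social network users: {implied_social_users:,.0f}")  # 3,750,000
```

Both figures are consistent with the article’s claims: 225,000 is roughly four percent of 5.6 million, and a six-percent share implies a social-network user base of about 3.75 million.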
Mobile phone ownership is now higher than desktop ownership globally, and the number of active Instagram, Facebook, and Twitter users is growing 5 to 30 percent per year, depending on the platform. Using popular influencers to spread messages about upcoming elections, raise awareness of environmental issues, or draw attention to health programs seems like a natural progression for the use of social media.
A recent New York Times exposé pointed out that the world of influencer marketing is complex. Companies like Devumi offer a tantalizing proposition for influencers and hopefuls alike: for a relatively small fee, they will provide tens of thousands of followers. Many of those followers, it turns out, are bots, like those infamously used to influence the 2016 U.S. election and the Brexit vote. Influencers who have purchased followers from Devumi alone include reality show hopefuls, U.S. congressional candidates, political appointees, an editor at China’s state-run Xinhua news outlet, and an adviser to the president of Ecuador. Devumi does not operate on Instagram, but plenty of other companies advertise similar services for users looking to artificially inflate their follower counts and get more likes and reposts.
With these services readily available, and with social media algorithms designed to promote content that looks popular based on follower counts and engagement, conditions are ripe for organizations like the Russian troll farms that use artificial influencers to sow dissent and meddle in the affairs of others. Examples already abound. A British PR firm has been accused of running a campaign in which fake Twitter accounts posted racially charged messages aimed at opponents of South Africa’s President Zuma. Elsewhere, right-wing activists in the United States used bots to amplify a repository of hacked emails from France’s then-presidential candidate Emmanuel Macron. And in the United States, an investigation is underway into the origin of thousands of fake comments on net neutrality sent to the FCC (separate from the many real ones). The international network of bots and fake influencers that engaged in the 2016 U.S. election and the Brexit vote serves as a stark backdrop and chief example of the negative potential of social media.
The ways in which social media can be used positively (to benefit humanitarian causes and international development, for example) are always contrasted with the ways it can be used negatively, most commonly to spread propaganda and sow confusion. The heads of Twitter and Facebook, among others, face public pressure to crack down on bots and trolls, but at the end of the day they run corporations, not elected offices or public utilities, and have poorly defined obligations to prevent users from leveraging their platforms to spread fake news. A solution may come from better educating social media users on how to spot a bot or an influencer with false popularity. Perhaps influencer marketing can help raise the alarm?