TikTok campaign uses AI to impersonate Sudan’s ex-president

Since late August, an unidentified account has been sharing purported “leaked recordings” of the ex-president, Omar al-Bashir.

However, the voice in dozens of the posted clips is artificial. Al-Bashir, who is accused of orchestrating war crimes and denies the accusations, was ousted by the military in 2019 and has not been seen publicly for a year, reportedly because of serious illness.

The uncertainty over his whereabouts adds to the turmoil in Sudan, where fighting between the Sudanese military and the rival Rapid Support Forces (RSF) erupted in April.

The recordings were shared by a channel named “The Voice of Sudan.” The posts appear to be a mixture of old clips from coup attempts, news reports, and various “leaked recordings” attributed to Bashir. Often presented as excerpts from meetings or phone conversations, they have a grainy quality resembling what one might expect from a poor telephone line.

Campaigns like this matter because they show how new tools can disseminate fake content through social media quickly and cheaply. Hany Farid, a digital forensics researcher at the University of California, Berkeley, says he is concerned about the democratization of access to advanced audio and video manipulation technology.

One of the “al-Bashir” recordings matches a Facebook Live broadcast aired two days earlier by a prominent Sudanese political commentator known as Al Insirafi. The two voices do not sound alike, but the scripts are identical, and playing the clips side by side reveals that they are perfectly synchronized. Al Insirafi is believed to live in the United States and has never shown his face on camera; even so, the audio waveforms of the two clips exhibit the same patterns of speech and silence, as Mr. Farid observed.
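The kind of comparison described above can be illustrated with a short sketch. This is not Mr. Farid's actual method; it is a simplified, hypothetical example showing how two clips with different voices but identical scripts would share the same timing of speech and silence. The frame length, energy threshold, and synthetic signals are all assumptions chosen for the demo.

```python
import numpy as np

def speech_silence_pattern(signal, frame_len=1600, threshold=0.02):
    """Mark each fixed-length frame as speech (True) or silence (False) by RMS energy."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return rms > threshold

def pattern_similarity(a, b):
    """Fraction of frames on which two clips agree about speech vs. silence."""
    n = min(len(a), len(b))
    return float(np.mean(a[:n] == b[:n]))

# Synthetic demo at a nominal 16 kHz: an "original" clip, a clip with a
# different voice (noise) but identical pause timing, and an unrelated clip
# whose pauses fall elsewhere.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 4 * 16000)
gate = (np.sin(np.pi * t) > 0).astype(float)      # alternating 1 s speech / 1 s pause
original = gate * np.sin(2 * np.pi * 220 * t)     # tonal "voice"
same_script = gate * rng.normal(0, 0.5, t.size)   # different timbre, same timing
unrelated = np.roll(gate, t.size // 4) * np.sin(2 * np.pi * 220 * t)

p0 = speech_silence_pattern(original)
sim_same = pattern_similarity(p0, speech_silence_pattern(same_script))
sim_other = pattern_similarity(p0, speech_silence_pattern(unrelated))
print(sim_same, sim_other)  # matching timing scores near 1.0; the unrelated clip much lower
```

Matching speech/silence envelopes across clips with otherwise dissimilar waveforms is exactly the tell described in the reporting: a cloned voice read over a borrowed script keeps the source's pauses.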

The TikTok account posts exclusively political content and displays deep knowledge of Sudanese affairs, but who benefits from the campaign is up for debate.

AI experts have long warned that the proliferation of fake video and audio could trigger a wave of disinformation capable of inciting unrest.
