Computer science professor Mohammad Irfan calls it a historic summit. Earlier this month U.S. government officials, including Attorney General Loretta Lynch and FBI Director James Comey, met with top tech company executives, among them Apple chief executive Tim Cook.
The meeting, which took place in California’s Silicon Valley, was called to address ways of combating the growing threat from terror groups like Islamic State, also known as ISIS, who are making effective use of the Internet and social media to spread their message and attract new recruits.
“Terror groups have been using social media for quite some time, so there’s nothing new there,” says Irfan, whose research areas include digital and computational studies. But what is new, he says, is the apparent thaw in relations between the government and industry on the issue that the summit signifies.
Relations between the White House and Silicon Valley are historically strained by privacy issues, says Irfan. Last February, for example, top executives from three major tech companies – Google, Yahoo and Facebook – declined to attend a cybersecurity conference organized by President Obama, sending more junior representatives instead.
When it comes to tackling the online terror threat, Irfan says there are two major challenges, one of them constitutional, and the other technological.
The constitutional challenge touches on the delicate issue of how to prevent hateful speech from being disseminated. Irfan says it’s delicate because free speech – even hateful speech – is protected by the First Amendment unless it presents a “clear and present danger” to the nation. This has led to fierce debate over the extent to which Islamic State’s online activity constitutes an imminent threat.
This leads to the question of how to combat these messages. “At the summit the government talked about sending out alternative messages, to counter the terror recruitment drive,” says Irfan. “One of the key messages is ‘think again, turn away’ and there are Twitter and Facebook pages that promote this message through the government initiative.”
But while the government has a clear message and platforms on which to send it, “it still faces an interesting problem,” Irfan observes, “in that this message needs to go viral and nobody knows how to make something go viral on its own.”
The technological side of the struggle has two parts to it, says Irfan. First, there’s the Herculean task of locating this hateful content on the web, amid the sea of other material out there. “Companies like Facebook and Twitter have publicly said they’ll remove hate speech when they become aware of it,” says Irfan, but this raises the question: how do you become aware of it? They’re unlikely to find the general public flagging such content, he says, because the vast majority of social media users have no interaction with terror-related sites.
The other way to find hate speech on the net, says Irfan, is to devise computer programs that can automatically detect those kinds of messages – a task in the field of Natural Language Processing. “But this raises quantitative questions: how do you measure semantics, like the ‘hatefulness’ of a sentence? How terror-related is it? What are the key words you would look for?” On top of all this, he says, is the language challenge. “Looking for those key words and phrases in multiple languages is not easy.”
A lot of research is being done on Natural Language Processing, says Irfan, “but from what I understand, we’re not yet able to do the kind of job that the government wants.”
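To make Irfan’s quantitative questions concrete, here is a minimal sketch of the simplest possible approach he alludes to: scoring a piece of text against a weighted keyword list. The terms and weights below are hypothetical placeholders, not a real watchlist, and real research systems go far beyond keyword matching.

```python
# Toy keyword-based text scoring. The flagged terms and their weights
# are hypothetical examples only; production NLP systems use much more
# sophisticated models than substring matching.
import re

FLAGGED_TERMS = {"attack": 2.0, "join us": 1.5, "recruit": 1.0}  # hypothetical

def score_text(text: str) -> float:
    """Sum the weights of flagged terms found in the text (case-insensitive)."""
    lowered = text.lower()
    score = 0.0
    for term, weight in FLAGGED_TERMS.items():
        score += weight * len(re.findall(re.escape(term), lowered))
    return score

print(score_text("Join us in the attack"))  # 1.5 + 2.0 = 3.5
print(score_text("Lovely weather today"))   # 0.0
```

Even this toy version exposes the problems Irfan raises: the score is crude, substring matches produce false positives, and the whole keyword list would have to be rebuilt for every language.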
The Encryption Issue
The second major technological problem, he says, is that posed by encrypted messages. “Take the November bombings in Paris,” says Irfan. “Investigators examined the terrorists’ devices and found they were using encrypted apps that can send coded messages between two people.” Those encrypted apps, he explains, make the job of surveillance almost impossible, “because even if you intercept the message you still cannot decode it without the proper key.” How to tackle this problem is where government and industry again diverge, he says.
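Irfan’s point about interception can be illustrated with a toy symmetric cipher – a one-time pad, where each byte of the message is XORed with a byte of a shared secret key. This is only a sketch: real encrypted messaging apps use far stronger protocols, but the core property is the same.

```python
# Toy one-time-pad demo: intercepted ciphertext is opaque without the key.
# Real end-to-end encrypted apps use far more sophisticated protocols;
# this only illustrates why interception alone does not reveal the message.
import os

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = os.urandom(len(message))      # shared secret held by the two parties
ciphertext = encrypt(message, key)  # what an eavesdropper would intercept

# Without `key`, the ciphertext is indistinguishable from random bytes;
# with it, the original message is recovered exactly.
assert decrypt(ciphertext, key) == message
```

This is why, as Irfan notes, surveillance agencies that intercept such traffic still need the key (or the app maker’s cooperation) to read anything.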
Ideally, the government would like the companies that design those apps to hand over technical details to help federal agents decrypt suspicious messages. “But from what I understand,” says Irfan, “the software firms are reluctant to give up such details.” This is not just due to concerns about government surveillance, he adds, “but also down to worries that if you give the encryption technology to the government, it could then be stolen by hackers.”
Irfan says this goes back to the issue of privacy: “You can’t blame the tech companies for being concerned. The apps are meant to facilitate private communications, but these apps have of course been used in a bad way.”
There are possible work-arounds, he says. “One way around might be for the app companies themselves to have a computer program that could detect suspicious messages,” he says. “But even this is debatable as it would involve the technology companies effectively spying on their own customers.”
These issues were all on the agenda at the recent Silicon Valley summit, and Mohammad Irfan says that while there are few published details of how the talks went, the fact that it happened at all made it a major event. “The summit has not been highlighted much in the press,” he says, “but it’s significant because it’s a collaboration between major entities in the worlds of government and industry.”