The challenge was underscored by Monday's deadly attack on a policeman and his wife in France, in which the killer posted a live 13-minute video of himself on Facebook with the victims' child, admitting the murders and urging fellow militants to carry out more bloodshed.
Platforms like Facebook and Twitter have been promoting their new live video features, but are struggling to find ways to keep out content that promotes violence.
"Terrorists and acts of terrorism have no place on Facebook," a spokesperson for the leading social network said when asked about the incident in France.
"Whenever terrorist content is reported to us, we remove it as quickly as possible. We treat take down requests by law enforcement with the highest urgency."
The Facebook statement acknowledged "unique challenges" for live-streamed videos, adding, "it's a serious responsibility, and we work hard to strike the right balance between enabling expression while providing a safe and respectful experience."
Twitter, whose Periscope live video feature has been used to show a suicide in France and a rape in the United States, offered a similar policy.
A Twitter spokesperson queried by AFP reiterated the company's policy, which states that "you may not make threats of violence or promote violence, including threatening or promoting terrorism."
Periscope, according to its policy statement, "is intended to be open and safe" and "explicit graphic content is not allowed" including "depictions of child abuse, animal abuse, or bodily harm."
Social networks have long stressed they will help legitimate investigations of crimes and attacks, but have resisted efforts to police or censor the vast amounts of content flowing through them.
But social media companies are capable of doing more to keep horrific content from being streamed worldwide and to remove it quickly when it appears, said Mark Wallace, chief executive of the Counter Extremism Project, a group founded by former diplomats from the United States and other countries to counter extremist ideology.
Wallace said social networks have already implemented systems that filter child pornography, and could do the same for other violent acts.
"There is technology to do that now," he told AFP. "It's a question of will, not technology."
This kind of filtering, Wallace said, would help deter those seeking to attack the United States or its allies from using these platforms.
"We have to get to place where if I'm a terrorist, I know that my video isn't going to go all over the world."
Gabriel Weimann, a professor of communication at the University of Haifa in Israel and author of the book "Terrorism in Cyberspace," agreed on the need to do more.
"For the terrorist himself, (live video) is an instrument for self-glorification, for eternal reward, for presenting himself and his cause to the world," Weimann told AFP.
Weimann called for "better cooperation between these media (Facebook, YouTube, Instagram, Twitter and more) and the counter-terrorism agencies."
"There is no perfect solution, no way to seal the Internet. But there are better ways to minimised terrorist abuse of these platforms," he said.
Civil liberties activists question, however, whether the government should be pressuring social networks to limit content that could be protected under the US Constitution and its free speech guarantees.
Social networks "are concerned about not trampling on the contractual rights of their users or acting on behalf of the government to take away people's constitutional rights," said Sophia Cope, an attorney at the Electronic Frontier Foundation.
"They don't want to be investigatory arms of the government or have their business model be overshadowed by another realm of responsibility. That's not to say they can't cooperate when they have the means to do so."
She said civil liberties defenders are concerned about government mandates, such as one proposal that would require social media firms to report terrorist activity.
Hugh Handeyside, an attorney in the American Civil Liberties Union's National Security Project, said it is too soon to know what may be done about live-streaming of violent acts, but that social networks should not be used by the government for back-door censorship.
Deciding on what is related to terrorism "is a question experts have difficulty making, and will inevitably be subjective and context-dependent," according to Handeyside.
"We object to the government systematically using these content-flagging mechanisms. If the government is identifying speech it deems offensive but couldn't ban outright and is attempting to leverage these companies' terms of service, that amounts to censorship."