Facebook said it took down a livestream of the shootings and removed the shooter's Facebook and Instagram accounts after being alerted by police.
At least 49 people were killed at two mosques in Christchurch, New Zealand's third-largest city, during Friday prayers at midday.
Using what appeared to be a helmet-mounted camera, the gunman livestreamed in horrifying detail 17 minutes of the attack on worshippers at the Al Noor Mosque in Deans Ave, where at least 41 people died.
Seven more worshippers were killed at a second mosque in the suburb of Linwood. Ninety people have also been injured, some critically.
The shooter also left a 74-page manifesto that he posted on social media, identifying himself as a 28-year-old Australian-born man and white nationalist out to avenge attacks in Europe perpetrated by Muslims.
"Our hearts go out to the victims, their families and the community affected by this horrendous act," Facebook New Zealand spokeswoman Mia Garlick said in a statement.
Facebook is "removing any praise or support for the crime and the shooter or shooters as soon as we're aware," she said. "We will continue working directly with New Zealand Police as their response and investigation continues."
Twitter and Google, which owns YouTube, said they were working to remove the footage from their sites.
New Zealand Police have urged people not to share the footage: "We would also like to remind the public that it is an offence to distribute an objectionable publication and that is punishable by imprisonment."
Police said they were aware that distressing material related to the event was circulating widely online, and urged anyone affected by seeing it to seek appropriate support.
The furor highlights once again the speed at which graphic and disturbing content from a tragedy can spread around the world and how Silicon Valley tech giants are still grappling with how to prevent that from happening.
British tabloid newspapers such as The Daily Mail and The Sun posted screenshots and video snippets on their websites.
One journalist tweeted that several people sent her the video via the Facebook-owned WhatsApp messaging app.
Many internet users have called for tech companies and news sites to take the material down.
Some people expressed outrage on Twitter that the videos were still circulating hours after the attack.
"Google is actively inciting violence," tweeted British journalist Carole Cadwalladr with a screen grab of search results of the video.
The video's spread underscores the challenge for Facebook even after it stepped up efforts to keep inappropriate and violent content off its platform. In 2017 it said it would hire 3000 people to review videos and other posts, on top of the 4500 people it already tasked with identifying criminal and other questionable material for removal.
But that's just a drop in the bucket of what is needed to police the social media platform, said Siva Vaidhyanathan, author of Antisocial Media: How Facebook Disconnects Us and Undermines Democracy.
If Facebook wanted to monitor every livestream to prevent disturbing content from making it out in the first place, "they would have to hire millions of people," something it's not willing to do, said Vaidhyanathan, who teaches media studies at the University of Virginia.
"We have certain companies that have built systems that have inadvertently served the cause of violent hatred around the world," Vaidhyanathan said.
Facebook and YouTube were designed to share pictures of babies, puppies and other wholesome things, he said, "but they were expanded at such a scale and built with no safeguards such that they were easy to hijack by the worst elements of humanity."
With billions of users, Facebook and YouTube are "ungovernable" at this point, said Vaidhyanathan, who called Facebook's livestreaming service a "profoundly stupid idea."