Following Facebook’s explanation last week about how it is tackling terrorist content, YouTube has just announced the steps it will be taking to address the same issue. In a post that was first published in the Financial Times before appearing online, Google senior VP and general counsel Kent Walker revealed the four new measures being introduced to identify and remove terrorist or violent extremist material.

“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” writes Walker.

Like Facebook, Google pledges to increase the use of its technology to help identify terrorism-related content. While AI has been responsible for identifying 50 percent of the videos YouTube removed in the past six months, Google will “devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove” extremist material.

Google is also increasing the number of independent experts in YouTube’s Trusted Flagger program. It is adding 50 expert NGOs to the 63 organizations that already take part, and intends to support them with operational grants. Walker says that while flags from ordinary users can often be inaccurate, Trusted Flagger reports prove accurate over 90 percent of the time.

Google is also going to clamp down on videos that, while not clearly violating its policies, come close enough to warrant action. This type of content, which includes inflammatory religious or supremacist material, will not be monetized, will appear behind an interstitial warning, and will have user comments and recommendations disabled — all of which should make it harder to find and lower user engagement.

Finally, YouTube is expanding its “Redirect Method” across Europe. The system uses targeted ads to direct potential ISIS recruits toward anti-terrorist videos that will hopefully convince them not to join the organization.

According to Walker: “In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.”

Google added that it is working with other tech firms, including Facebook, Microsoft, and Twitter, to share resources and technology in the fight against terrorism online.

Walker concludes: “Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free. We must not let them. Together, we can build lasting solutions that address the threats to our security and our freedoms.”