Meta Platforms has spent months trying to fix child-safety problems on Instagram and Facebook, but it is struggling to prevent its own systems from enabling and even promoting a vast network of pedophile accounts.

The social-media giant set up a child-safety task force in June after The Wall Street Journal and researchers at Stanford University and the University of Massachusetts Amherst revealed that Instagram’s algorithms connected a web of accounts devoted to the creation, purchasing and trading of underage-sex content.

Five months later, tests conducted by the Journal as well as by the Canadian Centre for Child Protection show that Meta’s recommendation systems still promote such content. The company has taken down hashtags related to pedophilia, but its systems sometimes recommend new ones with minor variations. Even when Meta is alerted to problem accounts and user groups, it has been spotty in removing them.

The tests show that the problem extends beyond Instagram to encompass the much broader universe of Facebook Groups, including large groups explicitly centered on sexualizing children. Facebook, which counts more than three billion monthly users worldwide, promotes its groups feature as a way to connect users with similar interests.

A Meta spokesman said the company had hidden 190,000 groups in Facebook’s search results and disabled tens of thousands of other accounts, but that the work hadn’t progressed as quickly as it would have liked. “Child exploitation is a horrific crime and online predators are determined criminals,” the spokesman said, adding that Meta recently announced an effort to collaborate with other platforms seeking to root them out. “We are actively continuing to implement changes identified by the task force we set up earlier this year.”

The company said it also has introduced other ways to find and remove accounts that violate its child exploitation policies, and has improved technology to identify adult nudity and sexual activity in live videos.

The Stanford Internet Observatory, which has been examining internet platforms’ handling of child-sex content, credited Meta in a September report with some progress, but said of the connection among pedophiles on Instagram that “the overall ecosystem remains active, with significant room for improvement in content enforcement.”

The Canadian Centre for Child Protection, a nonprofit that builds automated screening tools meant to protect children, said a network of Instagram accounts with as many as 10 million followers each has continued to livestream videos of child sex abuse months after it was reported to the company. Facebook’s algorithms have helped build large Facebook Groups devoted to trading child sexual abuse content, the Journal’s tests showed.

Meta said its task force, which at times has numbered more than 100 employees, had banned thousands of hashtags that pedophiles used to promote or search for content sexualizing children, removed pedophilic accounts and provided more guidance to content reviewers. The company said it is working to bolster software tools to restrict its algorithms from connecting pedophiles and to help target the forums and content that attract them.

Meta in recent years has shifted attention and resources to artificial intelligence, virtual reality and the metaverse. Broad cost cuts over the past year have resulted in the layoffs of hundreds of safety staffers focused on “high severity” content problems, including some child-safety specialists, according to current and former employees.

Meta said that the company continues to invest in child-safety work, including by assigning a team to find and remove child exploitation material on the platform.

Meta has been reluctant to significantly limit the systems that present personalized content and user experiences, which have helped make it the world’s biggest social-media company. A spokesman said that bluntly restricting or removing features that also connect people with acceptable content isn’t a reasonable approach to preventing inappropriate recommendations, and that the company invests in safety to keep its platform healthy. “Every day our systems help connect millions of people with interesting and positive groups relevant to them,” he said, including cancer support and job listings.

Company documents reviewed by the Journal show that senior Meta executives earlier this year instructed the company’s integrity team, which is responsible for addressing user safety issues, to give priority to objectives including reducing “advertiser friction” and avoiding mistakes that might “inadvertently limit well intended usage of our products.” In planning documents viewed by the Journal, those objectives were listed above traditional safety work focused on harmful content, such as child exploitation.

“It is a given for members of the integrity team that their top priority is keeping the community safe,” the Meta spokesman said.

Meta, like other tech companies, has long had to fight the use of its platform to groom children or trade child sexual abuse material. The Journal’s article in June showed that Instagram wasn’t just hosting such activities, but that its recommendation systems were connecting pedophiles with one another and guiding them to content sellers.

During the past five months, for Journal test accounts that viewed public Facebook groups containing disturbing discussions about children, Facebook’s algorithms recommended other groups with names such as “Little Girls,” “Beautiful Boys” and “Young Teens Only.” Users in those groups discuss children in a sexual context, post links to content purported to be about abuse and organize private chats, often via Meta’s own Messenger and WhatsApp platforms. Journal reporters didn’t comment, click on any of the links or join any chats.

Boosted in part by Facebook’s “Groups You Should Join” algorithm, membership in such forums can swell rapidly. In one public group celebrating incest, 200,000 users discussed topics such as whether a man’s niece was “ready” at the age of 9, and they arranged to swap purported sex content featuring their own children. In another user group numbering 800,000, administrators shared images of schoolgirls as a way to promote a Spanish-language website with a name referring to women’s underwear.

When a Journal research account flagged many such groups via user reports, the company often declared them to be acceptable. “We’ve taken a look and found that the group doesn’t go against our Community Standards,” Facebook replied to a report about a large Facebook group named “Incest.”

Only after the Journal brought specific groups to the attention of Meta’s communications staff did the company remove them.

Meta said revamped software tools will help address such problems by limiting the ability of pedophilic accounts to connect on its platforms. That effort is focused on expanding the use of a technology meant to identify “potentially suspicious adults” by evaluating users’ behavior to determine whether they pose a threat to children.

The technology previously has been used to prevent Facebook or Instagram accounts that Meta’s system deemed likely to belong to pedophiles from finding and contacting children. Meta now aims to use it to stop pedophiles from following one another and forming like-minded groups, and to restrict the recommendation of accounts and groups that exhibit a range of suspicious behavior.

For the first time, Meta has begun disabling individual accounts that score above a certain threshold of suspicious behavior, a spokeswoman said.

In May, an outside researcher in the U.S. documented that a network of Instagram accounts, some with millions of followers, was livestreaming videos of child sex abuse. The researcher reported that activity to both Meta and authorities. The Journal also flagged those accounts to Meta’s communications staff, which said at the time that it was investigating.

Meta said in late October that it had taken down hundreds of accounts. But more than five months after the network was reported to Meta, accounts affiliated with the network continue to regularly broadcast a mixture of adult pornography, child sexual abuse and bestiality videos, according to separate research by the Canadian Centre for Child Protection.

“We often wonder, ‘Why is it that we can, within minutes, find these massive networks?’” said Leanna McDonald, the center’s president. “Why is no one over there dealing with this?”

Researchers at the Stanford Internet Observatory found that when Meta takes down an Instagram or Facebook hashtag it believes is related to pedophilia, its system often fails to detect, and sometimes even suggests, new ones with minor variations. After Meta disabled #Pxdobait, its search recommendations suggested that anyone typing the term simply add a specific emoji at the end.

The Stanford group provided Meta with an analysis of groups popular with Instagram’s child sexualization community. Five months later, some of the groups it flagged are still operating.

The Meta spokesman said that taking down groups is complex and time-consuming. The company, he said, has removed 16,000 groups since July 1 for violating child-safety policies.

David Thiel, the Stanford Internet Observatory’s chief technologist, said they shouldn’t have been that hard to review. “The groups were easily ranked by overlapping membership, and the top entries were overwhelmingly problematic,” he said.

Although Meta lets users flag problem content, the Journal’s June article showed that its system often ignores or dismisses reports of child exploitation. Meta said at the time that it had discovered and fixed a software glitch that was preventing a substantial portion of user reports from being processed, and was providing new training to company content moderators.

Four months after the Journal alerted Meta to the problem with pedophilic Facebook Groups, however, user reports about them filed by a Journal research account still weren’t being routinely addressed.

Meta has said its content-review decisions are about 90% accurate. But an internal company review in May found that its decisions about user reports of underage-sex content were routinely inexplicable.

Meta employs outside contractors to help moderate content. Its child-safety task force, set up in June, sent a team to Mumbai, the company’s largest hub for outsourced moderation workers.

The team found that the moderators weren’t adequately trained, and that Meta’s IT systems sometimes showed them blank screens instead of the “high severity” content they were supposed to review. In other instances, contract workers were asked to review text in languages they didn’t speak.

Meta acknowledged the shortfalls and said it has made progress on fixing them.

Through the late spring and into summer, though, the company laid off child-safety specialists and a significant portion of its Global Operations Team, which handled “high severity” content moderation issues that sometimes included child safety. In Dublin, where Meta’s worldwide content moderation is based, more than 130 employees were cut, and some North America-focused safety work was passed to staffers assigned to Central Europe and the Middle East.

A Meta spokesman said that the laid-off Dublin staffers only occasionally handled child-safety issues.

“Human review of content suspected of including child sexualization material is done by people trained specifically to review such material,” said a spokesman.

After the Journal published its findings in June, Meta said it was rethinking the balance between child safety and giving maximal freedom to users.

Previously, Meta had resisted banning the Instagram and Facebook hashtag #CP, an abbreviation for “child pornography” commonly used by pedophiles, because the same initials were sometimes used on posts about cerebral palsy and caldo de pollo, Spanish for chicken soup.

Meta restricted the searchability of that term and thousands of others, including #incest and #lolita.

A review of Instagram by Stanford researchers in September found that some of the same underage sellers of sex content that they had identified in the spring still had active accounts and were using minor variations on previous hashtags to promote illegal material.

An Instagram system designed to help users find new content automatically suggests personalized variants of search terms as a user types them. The Stanford researchers found that to be happening even with search terms that Meta had banned.

On a Journal Instagram test account, Meta wouldn’t allow search results for the phrase “Child Links,” but the system suggested an alternative: “Child Pornography Links.” After Meta blocked that term following a query by the Journal, the system began recommending new phrases such as “child lingerie” and “childp links.”

Recently, Facebook’s “Groups You Should Join” feature has suggested topics such as kidnapping, “dating” children as young as 11 and even chloroforming women.

After the Journal reported the chloroform groups to Meta, the company took them down. Within two weeks, however, an 1,800-member group specifically devoted to chloroforming children was back in Facebook’s “Groups You Should Join” recommendations.

In it, users posted pictures of girls with rags held over their faces. One member posted a picture of a smiling young girl with the caption “is it suitable for kidnapping?” Fellow group members agreed that she was.

After the Journal flagged that group, Meta took it down as well.

Write to Jeff Horwitz at jeff.horwitz@wsj.com and Katherine Blunt at katherine.blunt@wsj.com