"When a company's own policies explicitly allow bots to engage children in 'romantic or sensual' conversations, it's not an oversight, it's a system designed to normalize inappropriate interactions with minors," said one advocate.
Four months after the children's rights advocacy group ParentsTogether Action issued an advisory about the potential harms Meta's artificial intelligence chatbot could pose to kids, new reporting Wednesday revealed how the Silicon Valley company's standards for the AI product have allowed it to engage in sexually provocative conversations with minors and to make racist comments.
Reuters reported extensively on an internal Meta document titled "GenAI: Content Risk Standards."
The document said that Meta's generative AI products—which are available to users as young as 13 on the company's platforms, including Facebook, Instagram, and WhatsApp—are permitted to engage in "romantic or sensual" role-play with minors.
Examples of acceptable remarks from the AI bot included "Your youthful form is a work of art" and "Every inch of you is a masterpiece," which the document suggested could be said to a child as young as 8.
An example of an acceptable comment made to a high school student was, "I take your hand, guiding you to the bed."
New Republic contributing editor Osita Nwanevu said the reporting shows that "if we're going to have this technology, the content used to train models needs to be legally licensed from its creators and their applications need to be regulated."
"For example: I do not think we should allow children to be groomed by the computer," he said.
Reuters reported that Meta changed the document after the news outlet brought the sexually suggestive comments to the company's attention, with spokesperson Andy Stone saying such conversations with children should not have been allowed.
"The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone told Reuters. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role-play between adults and minors."
But Stone didn't say the company had revised the content standards to disallow other concerning comments, like those that promote racist views.
The document stated that the AI chatbot was permitted to "create statements that demean people on the basis of their protected characteristics"—for example, a paragraph about Black people being "dumber than white people."
Reuters' reporting suggested that Meta's allowance of sexually suggestive AI conversations with children was not an accident, with current and former employees who worked on the design and training of the AI products saying the document reflected "the company's emphasis on boosting engagement with its chatbots."
"In meetings with senior executives last year, [CEO Mark] Zuckerberg scolded generative AI product managers for moving too cautiously on the rollout of digital companions and expressed displeasure that safety restrictions had made the chatbots boring, according to two of those people," reported Jeff Horwitz at Reuters. "Meta had no comment on Zuckerberg's chatbot directives."
In April, ParentsTogether Action issued a warning about Meta's AI chatbots and their ability to "engage in sexual role-play with teenagers," which had previously been reported by the Wall Street Journal.
Wednesday's reporting provided "a fuller picture of the company's rules for AI bots," the group said.
"These internal Meta documents confirm our worst fears about AI chatbots and children's safety," said Shelby Knox, campaign director for tech accountability and online safety at ParentsTogether Action. "When a company's own policies explicitly allow bots to engage children in 'romantic or sensual' conversations, it's not an oversight, it's a system designed to normalize inappropriate interactions with minors."
The group said it tested Meta AI earlier this year, posing as a 14-year-old, and was told by the bot, "Age is just a number" as it encouraged the fictional teenager to pursue a relationship with an adult.
"No child should ever be told by an AI that 'age is just a number' or be encouraged to lie to their parents about adult relationships," said Knox. "Meta has created a digital grooming ground, and parents deserve answers about how this was allowed to happen."
As Stone assured Reuters that the company was reviewing its content standards for its AI chatbot, other new reporting suggested Meta isn't likely to impose strict rules discouraging the bot from making racist or otherwise harmful remarks any time soon.
As CNN reported Wednesday, Meta has hired Robby Starbuck, a "conservative influencer and anti-DEI agitator," to serve as an anti-bias adviser for its AI products.
The arrangement is part of a legal settlement of a lawsuit Starbuck filed against Meta in April, which alleged the chatbot had falsely stated he took part in the January 6, 2021, attack on the US Capitol.
An executive order signed by President Donald Trump last month seeks to rid AI products of so-called "woke" standards and prohibit the federal government from using AI technology that is "infused with partisan bias or ideological agendas such as critical race theory"—the term used by many conservatives in recent years for the accurate teaching of race relations in US history.
"Simply put," said one critic, "the U.S. nuclear industry will fail if safety is not made a priority."
U.S. President Donald Trump signed a series of executive orders on Friday that will overhaul the independent federal agency responsible for regulating the nation's nuclear power plants, aiming to expedite the construction of new nuclear reactors—a move that experts have warned will increase safety risks.
According to a White House statement, Trump's directives "will usher in a nuclear energy renaissance," in part by allowing Department of Energy laboratories to conduct nuclear reactor design testing, green-lighting reactor construction on federal lands, and lifting regulatory barriers "by requiring the Nuclear Regulatory Commission (NRC) to issue timely licensing decisions."
The Trump administration is seeking to shorten the NRC's years-long process of approving new licenses for nuclear power plants and reactors to 18 months.
"If you aren't independent of political and industry influence, then you are at risk of an accident."
White House Office of Science and Technology Director Michael Kratsios said Friday that "over the last 30 years, we stopped building nuclear reactors in America—that ends now."
"We are restoring a strong American nuclear industrial base, rebuilding a secure and sovereign domestic nuclear fuel supply chain, and leading the world towards a future fueled by American nuclear energy," he added.
However, the Union of Concerned Scientists (UCS) warned that the executive orders will result in "all but nullifying" the NRC's regulatory process, "undermining the independent federal agency's ability to develop and enforce safety and security requirements for commercial nuclear facilities."
"This push by the Trump administration to usurp much of the agency's autonomy as they seek to fast-track the construction of nuclear plants will weaken critical, independent oversight of the U.S. nuclear industry and poses significant safety and security risks to the public," UCS added.
Edwin Lyman, director of nuclear power safety at the UCS, said, "Simply put, the U.S. nuclear industry will fail if safety is not made a priority."
"By fatally compromising the independence and integrity of the NRC, and by encouraging pathways for nuclear deployment that bypass the regulator entirely, the Trump administration is virtually guaranteeing that this country will see a serious accident or other radiological release that will affect the health, safety, and livelihoods of millions," Lyman added. "Such a disaster will destroy public trust in nuclear power and cause other nations to reject U.S. nuclear technology for decades to come."
Friday's executive orders follow reporting earlier this month by NPR that revealed the Trump administration has tightened control over the NRC, in part by compelling the agency to send proposed reactor safety rules to the White House for review and possible editing.
Allison Macfarlane, who was nominated to head the NRC during the Obama administration, called the move "the end of independence of the agency."
"If you aren't independent of political and industry influence, then you are at risk of an accident," Macfarlane warned.
On the first day of his second term, Trump also signed executive orders declaring a dubious "national energy emergency" and directing federal agencies to find ways to reduce regulatory roadblocks to "unleashing American energy," including by boosting fossil fuels and nuclear power.
The rapid advancement and adoption of artificial intelligence systems is creating a tremendous need for energy that proponents say can be met by nuclear power. The Three Mile Island nuclear plant—the site of the worst nuclear accident in U.S. history—is being revived with funding from Microsoft, while Google parent company Alphabet, online retail giant Amazon, and Facebook owner Meta are among the competitors also investing in nuclear energy.
"Do we really want to create more radioactive waste to power the often dubious and questionable uses of AI?" Johanna Neumann, Environment America Research & Policy Center's senior director of the Campaign for 100% Renewable Energy, asked in December.
"Big Tech should recommit to solutions that not only work but pose less risk to our environment and health," Neumann added.
Given all the upheaval in today’s landscape, organizations must ensure they can reach their audiences in a multitude of ways, without relying on a single platform.
Nonprofits and advocacy groups are in the midst of a mounting crisis: Social media giants are growing more chaotic, untrustworthy, and dangerous.
Just consider what’s happened in the past few weeks. Without warning, explanation, or human review, Meta suspended the Instagram account of Presbyterian Outlook—a progressive, well-established news outlet for the Presbyterian Church. The outlet noted that it had thoughtfully invested in the platform to expand its reach, but would not return given the possibility of another abrupt cancellation.
Then, weeks later, X—which has been plagued by reports of rising misinformation and of amplifying far-right accounts—was hit with cyberattacks that downed the platform.
Just as social media platforms revolutionized our world decades ago—we are in the midst of another pivotal technology movement.
And Meta recently announced that it would draw from X’s technology to employ “Community Notes” on its platforms—which are purportedly meant to fill in the gaps left after the company fired its fact-checking team. Experts have warned that such a system could easily be exploited by groups motivated by their own interests.
These events are just the latest in a growing pile of evidence that organizations and advocates can’t count on social media giants like they once did. They’re fueling misinformation, inflammatory perspectives, and partisan divisions—all in the name of profits.
To continue to be effective in our increasingly digital world, organizations will need to adjust to this new landscape.
Unquestionably, charting the path forward is challenging. Many organizations and advocates have spent years investing in and building profiles on established media platforms. These groups depend on this technology to share their messaging, organize, provide educational tools, fundraise, and more. It’s difficult to shift all these resources.
Other organizations have yet to build up a robust digital presence, but don’t know where to begin, especially in today’s chaotic climate.
Wherever nonprofits and advocates fall on this spectrum, they can and should invest in technology. Here’s how they can be most effective.
First, organizations must recognize that—just as social media platforms revolutionized our world decades ago—we are in the midst of another pivotal technology movement. Given all the upheaval in today’s landscape, organizations must ensure they can reach their audiences in a multitude of ways, without relying on a single platform.
As such, they should build out opportunities for subscription-based data creation. That means reinvesting in collecting more traditional contact methods—like emails and phone numbers. It also means investing in technologies that allow them to share their messages without censorship from outside sources. Blogs and newsletter platforms can be powerful tools to communicate with audiences and provide rich discourse free from external interference.
Protected digital communities—which are only open to certain groups or are invitation-based—can also help strengthen connections between an organization’s supporters. We’re starting to employ this strategy at the Technology, Innovation, and Digital Engagement Lab (TIDEL), which is housed at Union Theological Seminary. Right now, we’re working with a cohort of faith and social justice leaders to deploy new technology to advance their missions.
We’ve recommended a platform called Mighty Networks, which uses AI to help creators build and manage online communities. Two of our fellows are using this service to support Black clergywomen through education and practical application, focusing on mental health awareness and balance. Another pair of fellows is aiming to use the platform to deliver digitally based educational programming and sustain a community of care professionals committed to improving access to spiritually integrated, trauma-informed care.
Make no mistake: Nonprofits and advocacy organizations need a digital presence to be effective. But they’ll have to adjust to shield themselves from the chaos and malice of social media giants.