"The need for U.S. support is more urgent than ever," said one advocate. "The lives and well-being of millions depend on it."
Rights groups on Thursday applauded three Democratic lawmakers for their proposal of a bill to restore United States funding to the United Nations' key agency tasked with providing services and humanitarian assistance to Palestinians in Gaza, six months after the U.S. suspended contributions following unverified accusations against the agency.
James Zogby, president of the Arab American Institute, expressed gratitude to Reps. André Carson (D-Ind.), Pramila Jayapal (D-Wash.), and Jan Schakowsky (D-Ill.) for introducing the UNRWA Emergency Restoration Act of 2024, aimed at restoring funding to the U.N. Relief and Works Agency for Palestine Refugees in the Near East—legislation that Zogby said would be "lifesaving" if passed into law.
"UNRWA plays a vital role in providing essential services to millions of Palestinian refugees across the occupied Palestinian territory, Lebanon, Jordan, and Syria," said Zogby. "The ongoing genocide in Gaza has resulted in increased displacement, starvation, and death. It is both inhumane and unconscionable to continue withholding financial support from UNRWA."
The U.S.—the largest international funder of the agency, which relies almost entirely on voluntary contributions from donor states—promptly suspended funding for UNRWA last January after Israel claimed without evidence that 12 out of the agency's 13,000 staff members in Gaza had been involved in a Hamas-led attack on southern Israel last October.
Congress later passed a bill prohibiting UNRWA funding through at least March 2025.
In 2022, the U.S. contributed more than $343 million to the agency.
The Biden administration's decision to suspend donations to UNRWA pushed a number of U.S. allies to do the same, but countries including Germany, Sweden, Japan, and the United Kingdom have since reinstated their funding after an independent probe found that Israel had provided no supporting evidence of its claim.
"The United States should join our key allies in restoring this urgently needed funding for UNRWA. There is no time to lose," said Hassan El-Tayyab, legislative director for Middle East policy at the Friends Committee on National Legislation, expressing support for the newly introduced bill.
Bridget Moix, general secretary for the group, added that cutting off funding was "simply unconscionable" because the U.S.—as the Israeli military's largest international funder—bears responsibility for the "horrific violence and a massive humanitarian crisis" in Gaza.
"UNRWA is the backbone of humanitarian aid in the Gaza Strip," said Moix. "U.S. funding should be restored immediately."
Cavan Kharrazian, senior policy adviser for Demand Progress, noted that since the U.S. suspended funding to UNRWA, the humanitarian crisis in Gaza has worsened, with U.N. experts warning in July that famine had spread across the enclave.
"The need for U.S. support is more urgent than ever," said Kharrazian. "The lives and well-being of millions depend on it."
"We call on leaders in Congress to take principled stands like this as future funding bills move," he added, "removing these harmful prohibitions against UNRWA funding."
"We need a clear FEC rule in place to deter fast-proliferating political deepfakes, which threaten electoral integrity and people's basic faith that what they see and hear is real—but the agency has utterly failed to deliver."
The Federal Election Commission on Thursday voted to forgo new rulemaking on the use of artificial intelligence in U.S. political campaign advertising, drawing sharp criticism from a watchdog group that said deepfakes threaten electoral integrity.
Public Citizen, the watchdog group, had petitioned the FEC last year to issue regulations clarifying that the use of deepfakes in political ads is illegal. The commission on Thursday formally declined to do so and instead voted in favor of an anodyne "compromise" rule stating that artificial intelligence is subject to current regulations.
Robert Weissman, co-president of Public Citizen, said "compromise" was a "misnomer" and the FEC's position was in fact "compromised."
"We need a clear FEC rule in place to deter fast-proliferating political deepfakes, which threaten electoral integrity and people's basic faith that what they see and hear is real—but the agency has utterly failed to deliver," he said in a statement.
Fellow co-president Lisa Gilbert agreed, saying that "the threat of deepfakes is staring us in the face and unfortunately our elections agency has chosen to look the other way," and calling the decision "spineless and shameful."
"It is OUTRAGEOUS that the FEC has not taken action to protect the upcoming election from political deepfakes. Listen to @Rob_Weissman explain the threat AI-generated deepfakes pose to our democracy."
— Public Citizen (@Public_Citizen) September 9, 2024
There are six FEC commissioners—three from each major party—with a rotating chairmanship. Democrats have criticized the structure in recent years, arguing that Republican commissioners block meaningful regulations—four votes are needed to pass any rule—and have made the FEC toothless. They argue that a strong FEC is more necessary than ever given the massive increase in spending on U.S. elections since the Citizens United ruling in 2010.
For Public Citizen's petition, however, the problem was not just the Republican commissioners. Two Democratic commissioners, Dara Lindenbaum and Shana Broussard, declined to support the petition and instead helped craft the anodyne interpretative rule.
Democratic Commissioner Ellen Weintraub, the current vice chair, supported the petition and has commended an ongoing effort by the Federal Communications Commission to regulate AI use in political advertising. The FCC has proposed requiring a disclosure when AI has been used in the making of an ad, drawing praise from watchdog groups such as Public Citizen. The two federal agencies have sparred over the FCC's proposal.
Republican FEC members spoke out strongly against the Public Citizen petition at an open meeting Thursday, arguing that the commission had neither the authority nor the expertise to regulate an emerging technology. Current Chair Sean Cooksey published an op-ed in The Wall Street Journal last month titled "The FEC Has No Business Regulating AI." He issued a 10-page statement on his opposition to the petition on Thursday.
If Congress hasn't yet granted the FEC such authority—a matter of interpretation of the law, which dates to the 1970s—it's possible that it could do so, as there is some level of bipartisan support for legislation on deepfakes. Multiple bipartisan bills have been introduced to prevent the use of AI in political ads, including one brought forth this week by Reps. Adam Schiff (D-Calif.) and Brian Fitzpatrick (R-Pa.), among others.
Schiff told The Associated Press the bill was "modest" and "really probably the lowest hanging fruit there is" in addressing AI misuse in politics. He and Fitzpatrick acknowledged their bill was a long shot but said they would try to attach it to must-pass legislation later in the year.
"Online platforms use sophisticated and opaque techniques of data collection that endanger young people and put their healthy development at risk," said one children's advocate.
Child welfare advocates renewed calls for U.S. lawmakers to pass a pair of controversial bills aimed at protecting youth from Big Tech's "dangerous and unacceptable business practices" after the Federal Trade Commission published a report Thursday detailing how social media and streaming companies endanger children and teens who use their platforms.
The FTC staff report—entitled A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services—"shows how the tech industry's monetization of personal data has created a market for commercial surveillance, especially via social media and video streaming services, with inadequate guardrails to protect consumers."
The agency staff examined the practices of Meta platforms, which include Facebook, Instagram, and WhatsApp; YouTube; X, formerly known as Twitter; Snapchat; Reddit; Discord; Amazon, which owns the game-streaming site Twitch; and ByteDance, the owner of TikTok.
"The report finds that these companies engaged in mass data collection of their users and—in some cases—nonusers," Bureau of Consumer Protection Director Samuel Levine said in the paper. "It reveals that many companies failed to implement adequate safeguards against privacy risks. It sheds light on how companies used our personal data, from serving hypergranular targeted advertisements to powering algorithms that shape the content we see, often with the goal of keeping us hooked on using the service."
The publication "also finds that these practices pose unique risks to children and teens, with the companies having done little to respond effectively to the documented concerns that policymakers, psychologists, and parents have expressed over young people's physical and mental well-being."
FTC Chair Lina Khan said in a statement that "the report lays out how social media and video streaming companies harvest an enormous amount of Americans' personal data and monetize it to the tune of billions of dollars a year."
"While lucrative for the companies, these surveillance practices can endanger people's privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking," she added.
Researchers at Boston Children's Hospital and Harvard University published an analysis last December that revealed social media companies made nearly $11 billion in 2022 advertising revenue from U.S.-based users younger than 18.
According to the FTC report:
While the use of social media and digital technology can provide many positive opportunities for self-directed learning, forming community, and reducing isolation, it also has been associated with harms to physical and mental health, including through exposure to bullying, online harassment, child sexual exploitation, and exposure to content that may exacerbate mental health issues, such as the promotion of eating disorders, among other things.
The publication also flags "algorithms that may prioritize certain forms of harmful content, such as dangerous online challenges."
The report accuses social media companies of "willful blindness around child users" by claiming that there are no children on their platforms because their sites do not allow them to create accounts. This may constitute an attempt by the companies to avoid legal liability under the Children's Online Privacy Protection Act Rule (COPPA). Last December, Khan proposed sweeping changes to COPPA to address the issue.
Josh Golin, executive director of Fairplay—a nonprofit organization "committed to helping children thrive in an increasingly commercialized, screen-obsessed culture"—said in a statement that "this report from the FTC is yet more proof that Big Tech's business model is harmful to children and teens."
"Online platforms use sophisticated and opaque techniques of data collection that endanger young people and put their healthy development at risk," Golin added. "We thank the FTC for listening to the concerns raised by Fairplay and a coalition of advocacy groups, and we call on Congress to pass COPPA 2.0, the Children and Teens' Online Privacy Protection Act, and KOSA, the Kids Online Safety Act, to better safeguard our children from these companies' dangerous and unacceptable business practices."
On Wednesday, the House Energy and Commerce Committee voted to advance COPPA 2.0 and KOSA, both of which were overwhelmingly passed by the Senate in July.
However, rights groups including the ACLU condemned KOSA, which the civil liberties organization warned "would violate the First Amendment by enabling the federal government to dictate what information people can access online and encourage social media platforms to censor protected speech."
In May 2023, U.S. Surgeon General Dr. Vivek Murthy issued an advisory on "the growing concerns about the effects of social media on youth mental health."
The White House simultaneously announced the creation of a federal task force "to advance the health, safety, and privacy of minors online with particular attention to preventing and mitigating the adverse health effects of online platforms."
Murthy has also called for tobacco-like warning labels on social media to address the platforms' possible harms to children and teens.
According to a study published in January by the corporate power watchdog Ekō, in just one week that month there were more than 33 million posts on TikTok and Meta-owned Instagram "under hashtags housing problematic content directed at young users," including suicide, eating disorders, skin-whitening, and so-called "involuntary celibacy."