"For all the claims Trump and the GOP have made about being the voice of working-class voters, firing Chopra... only satisfies unscrupulous corporations and unelected billionaires like Elon Musk," one advocate said.
U.S. President Donald Trump moved Saturday morning to fire Consumer Financial Protection Bureau Director Rohit Chopra, who had earned the praise of consumer advocates and the ire of Wall Street for his efforts to return more than $6 billion to ordinary Americans.
Chopra announced his firing on social media, also sharing a letter to the president in which he touted the work of the CFPB and outlined possible priorities for his successor.
"Every day, Americans from across the country shared their ideas and experiences with us," Chopra wrote to his followers. "You helped us hold powerful companies and their executives accountable for breaking the law, and you made our work better. Thank you."
In his letter, Chopra mounted a full-throated defense of the CFPB, which has often been attacked by Republicans and pro-Trump figures, including billionaire Elon Musk. He wrote that the 2008 financial crisis "made Americans question whether regulators and law enforcement would hold companies and their executives accountable for their mismanagement or wrongdoing," especially since many of the companies responsible for the crash only got larger and more powerful following a taxpayer-funded bailout.
"That's what agencies like CFPB work to fix: to make sure that the laws of our land aren't just words on a page," he wrote, adding that "with so much power concentrated in the hands of a few, agencies like the CFPB have never been more critical."
Chopra, who was appointed by former President Joe Biden to head the CFPB in 2021, said that he was "proud the CFPB had done so much to restore the rule of law" during his tenure.
"Since 2021, we have returned billions of dollars from repeat offenders and other bad actors, implemented dormant legal authorities and long-overdue rules required by law, and given more freedom and bargaining leverage to families navigating a complex and confusing financial system," he wrote.
"If civil society does its job, every person unnecessarily taken advantage of by a financial institution will attribute the blame to the right person—Donald Trump."
Chopra also touted the CFPB's regulation of junk fees, inaccurate medical bills, and digital surveillance by Big Tech. Under Chopra, the CFPB sued major financial institutions such as Bank of America and JP Morgan Chase and finalized a rule to strike around $49 billion worth of medical debt from credit reports, according to CNN.
With Chopra in charge, the bureau "has fought against junk fees, repeat offenders, big tech evasions, and corporate deception. It has championed competition, transparency, accountability, and consumer financial health," Adam Rust, director of financial services for the Consumer Federation of America, said in a statement reported by NPR.
Though Chopra was originally appointed by Trump in 2018 to serve on the Federal Trade Commission, his firing was expected as soon as Trump took office, with both major banks and tech companies urging the new president to oust him.
While anticipated, the move was criticized by progressive advocates and lawmakers.
"For all the claims Trump and the GOP have made about being the voice of working-class voters, firing Chopra and attacking the CFPB only satisfies unscrupulous corporations and unelected billionaires like Elon Musk," Revolving Door Project founder and executive director Jeff Hauser said in a statement. "If civil society does its job, every person unnecessarily taken advantage of by a financial institution will attribute the blame to the right person—Donald Trump."
Rep. Pramila Jayapal (D-Wash.) called his firing "an enormous loss for the American people."
"My friend Rohit Chopra has done an incredible job leading the CFPB—standing up to big corporations, protecting consumer data, and saving money for poor and working families," Jayapal said on social media.
Former Labor Secretary Robert Reich wrote on social media: "Under Rohit Chopra's tenure, the CFPB continued to serve as a shining example of government working on behalf of the people. Chopra took on corporate greed, unnecessary junk fees, predatory lending, and other financial shenanigans. It's telling that Trump just fired him."
According to The New York Times, financial industry officials expect the CFPB under Trump to roll back some of Chopra's regulations, issue fewer new rules, and weaken enforcement.
However, Sen. Elizabeth Warren (D-Mass.) pointed out that this would run counter to Trump's own campaign rhetoric.
"President Trump campaigned on capping credit card interest rates at 10% and lowering costs for Americans. He needs a strong CFPB and a strong CFPB director to do that," she said in a statement. "But if President Trump and Republicans decide to cower to Wall Street billionaires and destroy the agency, they will have a fight on their hands."
Chopra himself, in his farewell letter to Trump, suggested steps the CFPB could take under new leadership.
"We have also analyzed your promising proposal on capping credit card interest rates, and we see a path for enacting meaningful reform," he wrote to Trump. "I hope that the CFPB will continue to be a pillar of restoring and advancing economic liberty in America."
All told, 92 million low-income people in the United States—those with incomes less than 200% of the federal poverty line—have some key aspect of life decided by AI.
The billions of dollars poured into artificial intelligence, or AI, haven’t delivered on the technology’s promised revolutions, such as better medical treatment, advances in scientific research, or increased worker productivity.
So, the AI hype train purveys the underwhelming: slightly smarter phones, text-prompted graphics, and quicker report-writing (if the AI hasn’t made things up). Meanwhile, there’s a dark underside to the technology that goes unmentioned by AI’s carnival barkers—the widespread harm that AI presently causes low-income people.
AI and related technologies are used by governments, employers, landlords, banks, educators, and law enforcement to wrongly cut in-home caregiving services for disabled people; accuse unemployed workers of fraud; deny people housing, employment, or credit; take kids from loving parents and put them in foster care; intensify domestic violence and sexual abuse or harassment; label and mistreat middle- and high-school kids as likely dropouts or criminals; and falsely accuse Black and brown people of crimes.
With additional support from philanthropy and civil society, low-income communities and their advocates can better resist the immediate harms and build political power needed to achieve long-term protection against the ravages of AI.
All told, 92 million low-income people in the United States—those with incomes less than 200% of the federal poverty line—have some key aspect of life decided by AI, according to a new report by TechTonic Justice. This shift toward AI decision-making carries risks not present in the human-centered methods that preceded it and defies all existing accountability mechanisms.
First, AI expands the scale of risk far beyond individual decision-makers. Sure, humans can make mistakes or be biased. But their reach is limited to the people they directly make decisions about. In cases of landlords, direct supervisors, or government caseworkers, that might top out at a few hundred people. But with AI, the risks of misapplied policies, coding errors, bias, or cruelty are centralized through the system and applied to masses of people ranging from several thousand to millions at a time.
Second, the use of AI and the reasons for its decisions are not easily known by the people subject to them. Government agencies and businesses often have no obligation to affirmatively disclose that they are using AI. And even if they do, they might not divulge the key information needed to understand how the systems work.
Third, the supposed sophistication of AI lends a cloak of rationality to policy decisions that are hostile to low-income people. This paves the way for further implementation of bad policy for these communities. Benefit cuts, such as those to in-home care services that I fought against for disabled people, are masked as objective determinations of need. Or workplace management and surveillance systems that undermine employee stability and safety pass as tools to maximize productivity. To invoke the proverb, AI wolves use sheep avatars.
The scale, opacity, and costuming of AI make harmful decisions difficult to fight on an individual level. How can you prove that AI was wrong if you don’t even know that it is being used or how it works? And, even if you do, will it matter when the AI’s decision is backed up by claims of statistical sophistication and validity, no matter how dubious?
On a broader level, existing accountability mechanisms don’t rein in harmful AI. AI-related scandals in public benefit systems haven’t turned into political liabilities for the governors in charge of failing Medicaid or Unemployment Insurance systems in Texas and Florida, for example. And the agency officials directly implementing such systems are often protected by the elected officials whose agendas they are executing.
Nor does the market discipline wayward AI uses against low-income people. One major developer of eligibility systems for state Medicaid programs has secured $6 billion in contracts even though its systems have failed in similar ways in multiple states. Likewise, a large data broker had no problem winning contracts with the federal government even after a security breach divulged the personal information of nearly 150 million Americans.
Existing laws similarly fall short. Without any meaningful AI-specific legislation, people must apply existing legal claims to the technology. Usually based on anti-discrimination laws or procedural requirements like getting adequate explanations for decisions, these claims are often available only after the harm has happened and offer limited relief. While such lawsuits have had some success, they alone are not the answer. After all, lawsuits are expensive; low-income people can’t afford attorneys; and quality, no-cost representation available through legal aid programs may not be able to meet the demand.
Right now, unaccountable AI systems make unchallengeable decisions about low-income people at unfathomable scales. Federal policymakers won’t make things better. The Trump administration quickly rescinded protective AI guidance that former U.S. President Joe Biden issued. And, with President Donald Trump and Congress favoring industry interests, short-term legislative fixes are unlikely.
Still, that doesn’t mean all hope is lost. Community-based resistance has long fueled social change. With additional support from philanthropy and civil society, low-income communities and their advocates can better resist the immediate harms and build political power needed to achieve long-term protection against the ravages of AI.
Organizations like mine, TechTonic Justice, will empower these frontline communities and advocates with battle-tested strategies that incorporate litigation, organizing, public education, narrative advocacy, and other dimensions of change-making. In the end, fighting from the ground up is our best hope to take AI-related injustice down.
The industry holds that we will need so much electricity for the data centers to keep this technology running that we’ll have to give up on dealing with climate change for now. A new, more efficient AI challenges that.
Since we’ve all been weathering the head-spinning assault on the Constitution by the new administration (and, at Third Act and elsewhere, trying to do something about it), I thought it might make sense to provide you with one interesting piece of good news.
It concerns this DeepSeek Chinese AI program that you’ve doubtless been reading about in recent days. I’m the last person to turn to for an analysis of its virtues (I remain fully dependent on my highly-developed Natural Cluelessness), but I am very clear that it complicates the main current task of the fossil fuel industry: glomming onto AI as the latest excuse for building out a bunch of gas-fired power plants.
That narrative—which has been building for a year or so—holds that we will need so much electricity for the data centers to keep this technology running that we’ll have to give up on dealing with climate change for now. It reached its zenith last week when the new administration announced something called Stargate, a $500 billion plan that, as U.S. President Donald Trump put it, would be “the largest AI infrastructure project in history.” This was the moment when he declared an “energy emergency” so that we could build more power plants (but not, of course, the solar or battery parks that Silicon Valley experts have testified would be the most efficient way to power these megacenters).
The increasingly gloomy idea that there was no possible way we could ever deal with climate because AI would soak up every new electron that sun and wind could ever provide may not be quite as true as it seemed to some a week ago.
I would venture to say, given Trump’s predilections, that he neither understands nor cares much about the AI part of all of this, but he completely groks it as a way to pay back Big Oil for the $445 million they invested in the last election. (Political donations come in millions with an M, and the paybacks come in billions with a B—of our money). As Bloomberg reported, the whole DeepSeek incident shows how dependent on this AI story the fossil fuel industry is as an excuse for expansion (just as a couple of years ago it was dependent on the Ukraine war story):
In one brutal blow, DeepSeek has revealed just how many energy-related businesses in the U.S. have been banking on an artificial intelligence boom—and the surge in power demand it was supposed to bring.
For the past year, their growth expectations and share prices were boosted by the belief that AI would require an unprecedented wave of data center construction, with some centers needing as much electricity as entire cities. Utilities and power plant operators benefited, too, but the effect went far wider than such obvious industries, touching an astonishing array of companies.
That became clear the moment China’s DeepSeek unveiled a chatbot that could rival the best American AI programs while using just a fraction of the electricity, perhaps as little as 10%. DeepSeek’s announcement hammered the shares of uranium producers and natural gas pipeline operators alike. Companies that supply power plant equipment and data center cooling systems suffered as well in Monday’s big selloff.
I don’t think we know enough yet to know if that claim—”rival the best American AI programs while using just a fraction of the electricity, perhaps as little as 10%”—is actually true. There are voices in the U.S. today beginning to claim that DeepSeek plundered American code to make its breakthroughs (which is truly funny, since American AI merrily plundered everything everyone has ever written, to make its breakthroughs). And there are others saying that DeepSeek, by making AI more affordable, will actually increase the amount that it is used.
But it does seem as if something new is afoot—the search for efficiency, instead of just massive brute force—in constructing artificial intelligence. As the investment gurus Dylan Lewis and Tim Beyers at The Motley Fool put it:
One of the main things that has popped up a lot in the reporting on this is that the compute necessary for what is running on DeepSeek is a fraction of the compute for some of the other systems. Watching the way that the market is processing this, we are seeing Big Tech companies take a hit. We are seeing some of the chip companies take a hit. We're also seeing energy companies take a hit because there is this feeling that maybe as we get a little bit more technologically advanced as other players start coming into the space, some of the energy demands for this technology won't be as big as people have maybe originally thought.
and
There is no way we are going to be building out the amount of energy infrastructure required to service all this at the level we are talking about in the timeframe we were talking about. Then what happens? You have a constraint. Do you keep doing what you're doing and overwhelm the energy infrastructure, knowing full well you can't build it out at the level that you want to, in the timeframe you want to, or do you do what the industry always does, which is find areas of efficiency to scale in a better, more economical way? That's what always happens.
I’m not beginning to tell you how all this comes out. All I’m saying is, the increasingly gloomy idea that there was no possible way we could ever deal with climate because AI would soak up every new electron that sun and wind could ever provide may not be quite as true as it seemed to some a week ago. (It would be awfully nice if this kind of move toward computing efficiency catches on—here’s another story from this week, about new software fixes that seem capable of reducing power demand at these data centers by 30%.)
You could, I think, even draw a crude analogy between DeepSeek and solar power, in that it seems to be producing the same thing that OpenAI and Meta are producing for a fraction of the cost, the same way that photovoltaics produce power more cheaply than Exxon. And since it’s open source, it undercuts them in another way too: Anyone can get their hands on this and work with it. (“Anyone” meaning anyone who knows what they’re doing—not me, obviously). The advantage of hoarding chips, which has been Big Tech’s strategy, may turn out to be kind of like the advantage of hoarding “reserves” of hydrocarbons—less solid than might have been expected. To complete this imperfect analogy, AI, like the solar cell, may have been invented in the U.S., but it’s China who may figure out how to make the most of it.
It was only two (very long) weeks ago that former President Joe Biden, in his farewell address, warned us against the “tech-industrial complex.” Some youngsters, working around the constraints imposed by the U.S., seem to have struck a blow in that direction. It’s obviously far too much to hope that the U.S. and China might cooperate to develop this new technology in some rational way—the best we can hope for, I think, is that they won’t actually destroy the planet en route to whatever nirvana these new intelligences have in mind for us.