Will turning off our smartphones—even for a few minutes or (gasp!) hours per day—make the future any different or better? We won't know unless we try.
Recently, I’ve been turning off my iPhone — all the way off! — for 10 to 30 minutes at a time. I leave it somewhere in the house, while I try to live IRL (“in real life”), washing dishes, hanging up laundry, or even going for a walk, phoneless.
In this hyper-connected world of ours, doing so, even for such a short time, often feels like an enormous act of self-deprivation — no podcasts, no long-distance communication with those I’m closest to, no social media, no para-social relationships, no steps of mine being counted, or micro-health-tracking going on. So much, in other words, missing in action. I’m not a digital native. In fact, I am what they call a late adopter. I didn’t get a cell phone until the fall of 2003. So I remember when it was normal to go about your business without a powerful computer attached to your person. Even with that perspective — recalling the not-so-long-agos of answering machines and public phones with grubby buttons and Internet cafes — I feel unsettled when I’m untethered from my digital leash and experiencing what might pass for freedom, even for a few minutes.
But as unsettling as it is, I also want to start new patterns. Lawyer friends tell me that activists often turn their phones off for the first (and maybe only) time as they commit acts of political property destruction. It’s almost a rite of passage for the newly politicized, and that sudden gap in the data can be as incriminating as the massive data trails other activists leave behind.
Did you hear about the Tesla saboteur? Home from college in Boston for spring break, the 19-year-old wanted to express his rage at billionaire Elon Musk’s government takeover. He went to a Kansas City Tesla dealership in the middle of the night and used a homemade Molotov cocktail to set a Cybertruck on fire. The fire spread, destroying charging stations and setting a second truck aflame, causing more than $200,000 in damage. He was caught in the act — at least in data terms. The cameras at Tesla (and inside Tesla vehicles themselves) pinpointed the time of the property destruction, while images of someone who looked like him were caught on multiple cameras in the vicinity.
As for new patterns, turning off my cellphone for a period of time every day means a small window of datalessness that offers a twenty-first-century version of rebellion. It dams up the stream of free data that flows from my device with every tap-tap and swipe. By doing so, I create a tiny space for surprise, for rebellion, for precious secrecy.
I don’t have any plans to sabotage a Tesla showroom, nor am I in a current conspiracy with anyone trying to stop a shipment of U.S. weapons to the Israeli Defense Forces for its genocidal campaign against Gaza. I’m not trying to organize a workers’ strike at my kids’ school or local grocery store. To my shame, I’m not actively planning any of these actions. For those who don’t want to make rookie activist data mistakes, the Internet (and here’s a nod toward the irony) is full of crash courses on security culture and avoiding self-incrimination or entrapment through careless reliance on tech.
As I power down that ubiquitous device, I remind myself of my own power, too. Yes, I still know how to get places without a map app. I know the answers to the random trivia that comes into my mind any day. (Who sang that song? Who was president in 1954?) Or I can live with the not-knowing. Amazingly enough, I’ve discovered that I still know how to live in my own mind alone, without being distracted or entertained by a podcast. I’ve realized that just because I have the urge to reach out to so-and-so, it doesn’t actually mean that it has to happen that very second. It’s bracing and helpful to remember I can live without this device.
Dehumanizing Technology?
I’m well aware of the research on how bad the online world can be for anyone, especially young people. And believe it or not, my kids — 11 and 12 — still don’t have cellphones and don’t live online. They don’t play video games on and off all day long or have access to their own devices at home. But that doesn’t mean that they’re living some Montessori or Waldorf fantasy of Luddite delight. I kind of wish they were. But that life is for a much higher income bracket than mine. It’s worth noting that many in the tech world take great pains to shield their children from this technology. Every other kid on my daughter’s bus undoubtedly has a phone and I’m sure she’s craning to look over someone’s shoulder whenever she can. My son’s friends all have phones — no surprise in this world of ours — and play video games regularly. He’s a little left out of the chatter about this or that gaming platform, but I’m not giving in just so he can fit into a culture that I don’t think is all that healthy to begin with.
As a parent, I think a lot about the kind of world I’m preparing my kids for. And I guess there’s an argument to be made for preparing them for a world lived largely online, since that’s where we are these days. But I’m going to try and hold the line and reject that very world as much as humanly possible. (Humanly indeed!)
I want my kids running, swimming, noticing the world around them, creating art, hearing bird songs and cries of warning, reading good books (or even not-so-great ones) — almost anything but playing video games and diving into the deep end of a cyber-cesspool of bullying, eating disorders, and a fixation on looks.
I read about the connections between video games and the way wars are fought today and will be fought in the future. And it’s strange (at least to me) to imagine war as a video game and the degradation that goes with it. After all, dehumanization is the name of the grisly game these days for the Israeli Defense Forces. Soldiers are taught that the Palestinian people — even children — are less than fully human. Technology may not make them feel that way, but it certainly does make it easier to execute orders involving collective punishment, total surveillance, technological harassment, and ethnic cleansing.
Spending Time with Jennifer Lopez
On Wednesdays, my kids walk to the library, where they can log onto public computers and watch unboxing videos or tutorials on contouring (whatever that may be!). And then they have to walk home in time for dinner. It’s a little over a mile round trip, and I figure it’s a good trade-off. I tell them that they can have a smartphone when they can pay for it themselves, but in my dreams what I’d really like would be a communications device that, in order to use, they had to power with a bicycle or a hand crank. I would want it to feel like work. Because it’s not a value-neutral object and the network it relies on is not value-neutral either. At every juncture, this technology that we take for granted has a high labor, material, and environmental cost.
My daughter Madeline is 11. I notice her putting ever more attention into her appearance, primping and carefully considering her outfits. Still, she smiles when she looks in the mirror, delighting in her strong sense of style and dancing to the beat of her own drummer. Her once-a-week plunge into YouTube hasn’t dissipated her sense of self the way daily (hourly?) immersion would. She plays softball, runs at recess, and has a healthy appetite. She isn’t isolated from the world, and she and I talk about body image, aging, and the way old-fashioned media, social media, and AI create impossible standards for women.
Recently, we watched an ad featuring the multi-hyphenate Jennifer Lopez who, at the age of 55, is acting, singing, dancing, and representing high-end brands like a full-time mogul model. “Gosh, Mom. I can’t believe she is older than you,” Madeline said with the unalloyed frankness of the young. She didn’t have to mention my wrinkles and rolls and masses of white hair. It was all implied in her incredulous tone.
“Well, my Love, it’s not my job to look a certain way,” I replied.
Jennifer Lopez is, of course, a knockout. I have loved her since Out of Sight and Jenny From the Block. As a public figure and a professional beauty, she’s in a position to maintain her looks, no matter what the cost. She undoubtedly spares no expense when it comes to trainers, treatments, makeup, and clothes to keep that look (or at least something close to it), and then computers and lighting do the rest.
Believe me, it’s good to have these conversations with my kid, to have her understand the effort and cost that go into looking like Jennifer Lopez, or any other celebrity. As I pointed out to Madeline, I don’t have a deal with a face-cream company or a clothing line or a perfume outfit or some kind of alcohol company that requires me to devote myself to my persona. And in her own fashion, she heard me.
As I reflect now, I realize that, without such conversations, she might think she’s supposed to look that way, too, and that there’s something wrong with her if she doesn’t. That degraded sense of self is easy pickings for our consumerist culture, which sends unrelenting messages that this or that product will fill the hole.
Making the Future Different?
All my yellow thumbs up, all my mindless clicks and swipes, the time traps I fall into — full disclosure: it’s videos of thrifters on the hunt for deals and the posts of the hauls they buy to resell that grab me every time! That’s my weak spot. But every minute online is captured in a huge data profile of ME that I can’t contest or contrive or unravel. But I can turn away. Turn off the iPhone. Turn away from the screen. Disconnect the stream of data. This pervasive technology and its promises of ease and a frictionless existence are a downright lie. After all, the same technological framework powers DoorDash and the weaponized drones that are now raining terror down on children just like mine in Gaza.
I live far enough away from Gaza (in so many senses) that I could mindlessly embrace DoorDash while rejecting killer drones. But now that I’ve made the connection, I can’t un-make it. So I am going to say as big a NO as possible to both.
As the world gets more networked and more automated, the basic knowledge of how to survive in it gets lost, commodified, or controlled. How to find and purify water, how to grow and prepare food — lost! The “cloud” won’t bring rain to end drought conditions. The Internet is not going to feed us in a supply chain collapse. These are the things that keep me up at night, so without freaking out too much, my kids and I work on life skills together: eye contact, stamina for walking, tolerance of discomfort, strategic decision-making, map-reading, determining threat levels, and assessing someone’s trustworthiness. These are all skills that will help my kids in a distinctly precarious future.
A few years ago, an artist named Simon Weckert borrowed a few dozen iPhones from friends, put them in a red wagon and took a walk through the streets of Berlin. With just an hour or so of lag time, Google Maps showed all the streets and roads he had walked on bottlenecked in traffic jams. Video of his mobile art piece shows him strolling down the center of empty roads. It’s absorbing to watch that video, a split screen of him in a yellow jacket with the jaunty gait of a wagon puller and those red-lined Google Maps. Weckert’s performance demonstrates how our sense of reality is mediated by, filtered through, and dependent on a technology we simply don’t fully grasp or understand.
What we see isn’t what is real. In these dystopian Trumpy days, deep in our bones, we know that. Trump rants about White genocide and radical-left judicial monsters and tweets out AI-constructed images of himself as the Pope, a Jedi master, a golden statue in a renovated Gaza resort. What we see isn’t what’s real. And yes, I am in awe of it. I am afraid of it. I know it cannot feed me. I know it is trying to cleave my attention from the question of how we survive this violent present and make a different and far better future.
Peter Maurin, who co-founded the Catholic Worker movement with Dorothy Day, was fond of saying that we make the future different by making the present different.
So, I am turning my iPhone off. It makes my present different. Will it make the future any different?
It won’t hurt to try!
What we need is not a renewed arms race fueled by fear, competition, and secrecy, but its opposite: a global initiative to democratize and demilitarize technological development.
“History repeats itself, first as tragedy, then as farce.” Marx’s aphorism feels newly prescient. Last week, the U.S. Department of Energy issued a jingoistic call on social media for a “new Manhattan Project,” this time to win the so-called race for artificial intelligence supremacy.
But the Manhattan Project is no blueprint. It is a warning—a cautionary tale of what happens when science is conscripted into the service of state power, when open inquiry gives way to nationalist rivalry, and when the cult of progress is severed from ethical responsibility. It shows how secrecy breeds fear, corrodes public trust, and undermines democratic institutions.
The Manhattan Project may have been, as President Harry Truman claimed, “the greatest scientific gamble in history.” But it also represented a gamble with the continuity of life on Earth. It brought the world to the brink of annihilation—an abyss into which we still peer. A second such project may well push us over the edge.
If we are serious about the threats posed by artificial intelligence, we must abandon the illusion that safety lies in outpacing our rivals.
The parallels between the origins of the atomic age and the rise of artificial intelligence are striking. In both, the very individuals at the forefront of technological innovation were also among the first to sound the alarm.
During World War II, atomic scientists raised concerns about the militarization of nuclear energy. Yet, their dissent was suppressed under the strictures of wartime secrecy, and their continued participation was justified by the perceived imperative to build the bomb before Nazi Germany. In reality, that threat had largely subsided by the time the Manhattan Project gathered momentum, as Germany had already abandoned its efforts to develop a nuclear weapon.
The first technical study assessing the feasibility of the bomb concluded that it could indeed be built but warned that “owing to the spreading of radioactive substances with the wind, the bomb could probably not be used without killing large numbers of civilians, and this may make it unsuitable as a weapon…”
When in 1942 scientists theorized that the first atomic chain reaction might ignite the atmosphere, Arthur Holly Compton recalled thinking that if such a risk proved real, then “these bombs must never be made… better to accept the slavery of the Nazis than to run a chance of drawing the final curtain on mankind.”
Leo Szilard drafted a petition urging President Truman to refrain from using the atomic bomb against Japan. He warned that such bombings would be both morally indefensible and strategically shortsighted: “A nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction,” he wrote, “may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.”
Today, we cannot hide behind the pretext of world war. We cannot claim ignorance. Nor can we invoke the specter of an existential adversary. The warnings surrounding artificial intelligence are clear, public, and unequivocal.
In 2014, Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race.” More recently, Geoffrey Hinton, often referred to as the “godfather of AI,” resigned from Google, citing mounting concerns about the “existential risk” posed by unchecked AI development. Soon after, a coalition of researchers and industry leaders issued a joint statement asserting that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Around the same time, an open letter signed by over a thousand experts and tens of thousands of others called for a temporary pause on AI development to reflect on its trajectory and long-term consequences.
Yet the race to develop ever more powerful artificial intelligence continues unabated, propelled less by foresight than by fear that halting progress would mean falling behind rivals, particularly China. But in the face of such profound risks, one must ask: win what, exactly?
Reflecting on the similar failure to confront the perils of technological advancement in his own time, Albert Einstein warned, “The unleashed power of the atom has changed everything except our mode of thinking, and thus we drift toward unparalleled catastrophe.” His words remain no less urgent today.
The lesson should be obvious: We cannot afford to repeat the mistakes of the atomic age. To invoke the Manhattan Project as a model for AI development is not only historically ignorant but also politically reckless.
What we need is not a renewed arms race fueled by fear, competition, and secrecy, but its opposite: a global initiative to democratize and demilitarize technological development, one that prioritizes human needs, centers dignity and justice, and advances the collective well-being of all.
More than 30 years ago, Daniel Ellsberg, the former nuclear war planner turned whistleblower, called for a different kind of Manhattan Project: one not to build new weapons, but to undo the harm of the first and to dismantle the doomsday machines that we already have. That vision remains the only rational and morally defensible Manhattan Project worth pursuing.
We cannot afford to recognize and act upon this only in hindsight, as was the case with the atomic bomb. As Joseph Rotblat, the sole scientist to resign from the Project on ethical grounds, reflected on their collective failure:
The nuclear age is the creation of scientists… in total disregard for the basic tenets of science… openness and universality. It was conceived in secrecy, and usurped—even before birth—by one state to give it political dominance. With such congenital defects, and being nurtured by an army of Dr. Strangeloves, it is no wonder that the creation grew into a monster… We, scientists, have a great deal to answer for.
If the path we are on leads to disaster, the answer is not to accelerate. As physicians Bernard Lown and Evgeni Chazov warned during the height of the Cold War arms race: “When racing toward a precipice, it is progress to stop.”
We must stop not out of opposition to progress, but to pursue a different kind of progress: one rooted in scientific ethics, a respect for humanity, and a commitment to our collective survival.
If we are serious about the threats posed by artificial intelligence, we must abandon the illusion that safety lies in outpacing our rivals. As those most intimately familiar with this technology have warned, there can be no victory in this race, only an acceleration of a shared catastrophe.
We have thus far narrowly survived the nuclear age. But if we fail to heed its lessons and forsake our own human intelligence, we may not survive the age of artificial intelligence.
"Americans deserve both meaningful federal protections and the ability of their states to lead in advancing safety, fairness, and accountability when AI systems cause harm."
Demand Progress on Monday led over 140 organizations "committed to protecting civil rights, promoting consumer protections, and fostering responsible innovation" in a letter opposing U.S. House Republicans' inclusion, in a megabill advanced by the Budget Committee late Sunday, of legislation that would ban state and local laws regulating artificial intelligence.
Section 43201(c)—added by U.S. Rep. Brett Guthrie (R-Ky.) ahead of last Tuesday's markup session—says that "no state or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this act."
"Protections for civil rights and children's privacy, transparency in consumer-facing chatbots to prevent fraud, and other safeguards would be invalidated, even those that are uncontroversial."
In the new letter, the coalition highlighted how "sweeping" the GOP measure is, writing to House Speaker Mike Johnson (R-La.), Minority Leader Hakeem Jeffries (D-N.Y.), and members of Congress that "as AI systems increasingly shape critical aspects of Americans' lives—including hiring, housing, healthcare, policing, and financial services—states have taken important steps to protect their residents from the risks posed by unregulated or inadequately governed AI technologies."
"As we have learned during other periods of rapid technological advancement, like the industrial revolution and the creation of the automobile, protecting people from being harmed by new technologies, including by holding companies accountable when they cause harm, ultimately spurs innovation and adoption of new technologies," the coalition continued. "In other words, we will only reap the benefits of AI if people have a reason to trust it."
According to the letter:
This total immunity provision blocks enforcement of all state and local legislation governing AI systems, AI models, or automated decision systems for a full decade, despite those states moving those protections through their legislative processes, which include input from stakeholders, hearings, and multistakeholder deliberations. This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm—regardless of how intentional or egregious the misconduct or how devastating the consequences—the company making that bad tech would be unaccountable to lawmakers and the public. In many cases, it would make it virtually impossible to achieve a level of transparency into the AI system necessary for state regulators to even enforce laws of general applicability, such as tort or antidiscrimination law.
"Many state laws are designed to prevent harms like algorithmic discrimination and to ensure recourse when automated systems harm individuals," the letter notes. "For example, there are many documented cases of AI having highly sexualized conversations with minors and even encouraging minors to commit harm to themselves and others; AI programs making healthcare decisions that have led to adverse and biased outcomes; and AI enabling thousands of women and girls to be victimized by nonconsensual deepfakes."
If Section 43201(c) passes the Republican-controlled Congress and is signed into law by President Donald Trump, "protections for civil rights and children's privacy, transparency in consumer-facing chatbots to prevent fraud, and other safeguards would be invalidated, even those that are uncontroversial," the letter warns. "The resulting unfettered abuses of AI or automated decision systems could run the gamut from pocketbook harms to working families like decisions on rental prices, to serious violations of ordinary Americans' civil rights, and even to large-scale threats like aiding in cyber attacks on critical infrastructure or the production of biological weapons."
The coalition also argued that "Congress' inability to enact comprehensive legislation enshrining AI protections leaves millions of Americans more vulnerable to existing threats," and commended states for "filling the need for substantive policy debate over how to safely advance development of this technology."
In the absence of congressional action, former President Joe Biden also took some steps to protect people from the dangers of AI. However, as CNN pointed out Monday, "shortly after taking office this year, Trump revoked a sweeping Biden-era executive order designed to provide at least some safeguards around artificial intelligence. He also said he would rescind Biden-era restrictions on the export of critical U.S. AI chips earlier this month."
Today, Demand Progress and a coalition of artists, teachers, tech workers and more asked House leaders to reject a measure that would stop states from regulating AI. Read the full story by @claresduffy.bsky.social at @cnn.com
— Demand Progress (@demandprogress.bsky.social) May 19, 2025 at 10:15 AM
The groups asserted that "no person, no matter their politics, wants to live in a world where AI makes life-or-death decisions without accountability... Section 43201(c) is not the only provision in this package that is of concern to our organizations, and there are some provisions on which we will undoubtedly disagree with each other. However, when it comes to this provision, we are united."
"Americans deserve both meaningful federal protections and the ability of their states to lead in advancing safety, fairness, and accountability when AI systems cause harm," concluded the coalition, which includes 350.org, the American Federation of Teachers, Center for Democracy & Technology, Economic Policy Institute, Free Press Action, Friends of the Earth U.S., Greenpeace USA, Groundwork Collaborative, National Nurses United, Public Citizen, Service Employees International Union, and more.
In a Monday statement announcing the letter, Demand Progress corporate power director Emily Peterson-Cassin blasted the provision as "a dangerous giveaway to Big Tech CEOs who have bet everything on a society where unfinished, unaccountable AI is prematurely forced into every aspect of our lives."
"Speaker Johnson and Leader Jeffries must listen to the American people and not just Big Tech campaign donations," she said. "State laws preventing AI from encouraging children to harm themselves, making uninformed decisions about who gets healthcare, and creating nonconsensual deepfakes will all be wiped away unless Congress reverses course."