Leveraged buyouts and stock buybacks have been killing tens of millions of jobs, not AI. And they will continue to do so until we have a political movement with the guts to take on high finance and protect the needs and interests of the rest of us.
Isn’t it wiping out our jobs, stealing our creativity, blurring fact and fiction with deep fakes, and pushing us into a dystopian future that will suck the humanity out of us all? Are we doomed?
For historian Yuval Harari and would-be politician Andrew Yang, AI will create self-driving trucks that will decimate the working class. Peter Turchin, the mathematical historian, seriously imagines a future in which AI robots are used to colonize asteroids with new weapons that will allow a few powerful men, or maybe just one, to rule the universe.
Getta grip!
While inflammatory prognosticators predict that hundreds of millions of jobs will be gobbled up by AI, only 10,000 job cuts were attributed to AI in the first seven months of 2025. That sure-to-be-slaughtered trucking industry is expected to see an 11 percent increase, not decrease, in truck drivers through 2030. It’s not at all clear that more jobs will be destroyed than created as AI spreads. Predictions of the unemployed roaming the streets due to automation have never come true, even during periods of rapid technological change. Why should this time be different?
Nevertheless, it’s entirely possible that a wide range of jobs will be dramatically impacted by AI. This wouldn’t be the first time. World War II also ushered in an amazing array of new technologies and production techniques to compensate for and cope with the vast needs of a war economy that was missing 17 million workers in the armed forces.
But then, as opposed to now, Wall Street didn’t run the country, and we had a powerful labor movement that understood that vast productivity increases could lead to increased wages and shorter workweeks, not just job destruction.
Today, things are more than a little different. We seem in total awe of AI, falling on our knees before its vast power, while experiencing no power of our own to change the course of events. As a result, we aren’t even discussing how AI could and should be used to create a four-day work week without reduced pay.
Imagine for a moment that AI does have the potential to eliminate one-fifth of all jobs without new jobs filling the breach. Going to a four-day work week would cut each worker’s hours by that same one-fifth, ensuring that unemployment remains low while tens of millions gain more time away from work without loss of pay. Mind-numbing work could be replaced. Work and home life could be enriched.
That kind of dream was alive and well when labor unions represented more than one out of every three private sector workers, instead of about one in 20 today. During and after WWII, labor unions were a force to be reckoned with. Government and corporate leaders understood that the fruits of productivity needed to be shared with working people or there would be big trouble in the form of mass strikes.
In October 1955, a congressional committee held hearings on “Automation and Technological Change,” to deal with the unease the country felt about technological change. The report said what was obvious then but is totally absent from today’s AI hysteria.
“The prevailing workweek in manufacturing today, as is well known, is about 40 hours per week compared to about 45 in the mid-1920s and about 60 at the turn of the century. The hope is frequently expressed that the fruits of automation may permit us to reduce this still further to 30, 32, or 35 hours per week in the not-so-distant future.”
Not so distant future? That was written 70 years ago.
We no longer think these thoughts. We no longer have these discussions. We no longer imagine having enough power to make such substantive changes to our collective work lives. We expect the fruits of productivity to go entirely to the corporations and Wall Street, their reward for their great insights and ingenuity. (A notable exception is Professor Juliet Schor, who is researching the value of a four-day work week and helping corporations try it.)
And why? Because we have lost our collective will to power as expressed by labor unions. And our political representatives have allowed Wall Street to run wild all over us.
At this very moment, Wall Street and large corporations like Google, Facebook, Amazon, and Apple are killing tens of thousands of jobs to finance stock buybacks, the tool of choice to enrich the largest shareholders and richest executives. (For the data to prove that point see Chapter 11 of Wall Street’s War on Workers.)
Leveraged buyouts and stock buybacks have been killing tens of millions of jobs, not AI. And they will continue to do so until we have a political movement with the guts to take on high finance and protect the needs and interests of the rest of us.
As I will show in more detail in upcoming columns, the Democratic Party is not it. Working people want something new... and soon.
"Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm," said one advocate.
Privacy advocates on Saturday said the AI Act, a sweeping proposed law to regulate artificial intelligence in the European Union whose language was finalized Friday, appeared likely to fail at protecting the public from one of AI's greatest threats: live facial recognition.
Representatives of the European Commission spent 37 hours this week negotiating provisions in the AI Act with the European Council and European Parliament, running up against Council representatives from France, Germany, and Italy who sought to water down the bill in the late stages of talks.
Thierry Breton, the European commissioner for internal market and a key negotiator of the deal, said the final product would establish the E.U. as "a pioneer, understanding the importance of its role as global standard setter."
But Amnesty Tech, the branch of global human rights group Amnesty International that focuses on technology and surveillance, was among the groups that raised concerns about the bloc's failure to include "an unconditional ban on live facial recognition," which was in an earlier draft, in the legislation.
The three institutions, said Mher Hakobyan, Amnesty Tech's advocacy adviser on AI, "in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning AI regulation."
"While proponents argue that the draft allows only limited use of facial recognition and subject to safeguards, Amnesty's research in New York City, Occupied Palestinian Territories, Hyderabad, and elsewhere demonstrates that no safeguards can prevent the human rights harms that facial recognition inflicts, which is why an outright ban is needed," said Hakobyan. "Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space, and rule of law that are already under threat throughout the E.U."
The bill is focused on protecting Europeans against other significant risks of AI, including the automation of jobs, the spread of misinformation, and national security threats.
Tech companies would be required to complete rigorous testing on AI software before operating in the E.U., particularly for applications like self-driving vehicles.
Tools that could pose risks to hiring practices would also need to be subjected to risk assessments, and human oversight would be required in deploying the software.
AI systems including chatbots would be subjected to new transparency rules to avoid the creation of manipulated images and videos—known as deepfakes—without the public knowing that the images were generated by AI.
The indiscriminate scraping of internet or security footage images to create facial recognition databases would also be outright banned.
But the proposed AI Act, which could be passed before the European Parliament session ends in May, includes exemptions to its facial recognition provisions, allowing law enforcement agencies to use live facial recognition to search for human trafficking victims, prevent terrorist attacks, and arrest suspects of certain violent crimes.
Ella Jakubowska, a senior policy adviser at European Digital Rights, told The Washington Post that "some human rights safeguards have been won" in the AI Act.
"It's hard to be excited about a law which has, for the first time in the E.U., taken steps to legalize live public facial recognition across the bloc," Jakubowska told Reuters. "Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm."
Hakobyan also noted that the bill did not include a ban on "the export of harmful AI technologies, including for social scoring, which would be illegal in the E.U."
"Allowing European companies to profit off from technologies that the law recognizes impermissibly harm human rights in their home states establishes a dangerous double standard," said Hakobyan.
After passage, many AI Act provisions would not take effect for 12 to 24 months.
Andreas Liebl, managing director of the German company AppliedAI Initiative, acknowledged that the law would likely have an impact on tech companies' ability to operate in the European Union.
"There will be a couple of innovations that are just not possible or economically feasible anymore," Liebl told the Post.
But Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, told The New York Times that the E.U. will have to prove its "regulatory prowess" after the law is passed.
"Without strong enforcement," said Shrishak, "this deal will have no meaning."
The agreement "is a step in the right direction for security," said one observer, "but that's not the only area where AI can cause harm."
Like an executive order introduced by U.S. President Joe Biden last month, a global agreement on artificial intelligence released Sunday was seen by experts as a positive step forward—but one that would require more action from policymakers to ensure AI isn't harmful to workers, democratic systems, and the privacy of people around the world.
The 20-page agreement, first reported Monday, was reached by 18 countries including the U.S., U.K., Germany, Israel, and Nigeria, and was billed as a deal that would push companies to keep AI systems "secure by design."
The agreement is nonbinding and deals with four main areas: secure design, development, deployment, and operation and maintenance.
Policymakers including the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, forged the agreement with a heavy focus on keeping AI technology safe from hackers and security breaches.
The document includes recommendations such as implementing standard cybersecurity best practices, monitoring the security of an AI supply chain across the system's life cycle, and releasing models "only after subjecting them to appropriate and effective security evaluation."
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly told Reuters. The document, she said, represents an "agreement that the most important thing that needs to be done at the design phase is security."
Norm Eisen, senior fellow at the think tank Brookings Institution, said the deal "is a step in the right direction for security" in a field that U.K. experts recently warned is vulnerable to hackers who could launch "prompt injection" attacks, causing an AI model to behave in a way that the designer didn't intend or reveal private information.
"But that's not the only area where AI can cause harm," Eisen said on social media.
Eisen pointed to a recent Brookings analysis about how AI could "weaken" democracy in the U.S. and other countries, worsening the "flood of misinformation" with deepfakes and other AI-generated images.
"Advocacy groups or individuals looking to misrepresent public opinion may find an ally in AI," wrote Eisen, along with Nicol Turner Lee, Colby Galliher, and Jonathan Katz last week. "AI-fueled programs, like ChatGPT, can fabricate letters to elected officials, public comments, and other written endorsements of specific bills or positions that are often difficult to distinguish from those written by actual constituents... Much worse, voice and image replicas harnessed from generative AI tools can also mimic candidates and elected officials. These tactics could give rise to voter confusion and degrade confidence in the electoral process if voters become aware of such scams."
At AppleInsider, tech writer Malcolm Owen denounced Sunday's agreement as "toothless and weak," considering it does not require policymakers or companies to adhere to the guidelines.
Owen noted that tech firms including Google, Amazon, and Palantir consulted with global government agencies in developing the guidelines.
"These are all guidelines, not rules that must be obeyed," wrote Owen. "There are no penalties for not following what is outlined, and no introduction of laws. The document is just a wish list of things that governments want AI makers to really think about... And, it's not clear when or if legislation will arrive mandating what's in the document."
European Union member countries passed a draft of what the European Parliament called "the world's first comprehensive AI law" earlier this year with the AI Act. The law would require AI systems makers to publish summaries of the training material they use and prove that they will not generate illegal content. It would also bar companies from scraping biometric data from social media, which a U.S. AI company was found to be doing last year.
"AI tools are evolving rapidly," said Eisen on Monday, "and policymakers need to keep up."