Autonomous weapons
X @Bloomberg
Bloomberg· 2026-03-13 12:56
RT Katrina Manson (@KatrinaManson): Businessweek book excerpt: The Pentagon's Whiplash autonomous weapons program looks to transform 600 jet skis into bomb-toting robots. "America has a lot of jet skis, so it's neat that we can weaponize them," one person familiar with the program explains. https://t.co/jRI0EGnXKG ...
How the Pentagon Got Hooked on AI War Machines
Bloomberg Television· 2026-03-12 21:05
The United States has made two dramatic moves that could shape the future of AI-powered warfare. First, on February 27th, President Trump declared that Anthropic posed a supply chain risk. The company makes the first generative AI products that were certified to operate on the government's classified cloud networks. This was an unprecedented use of a policy tool usually aimed at foreign adversaries, effectively blacklisting one of the country's most promising AI labs. And second, hours later on February 28t ...
AI companies working with the military. #Vergecast
The Verge· 2026-03-11 15:00
Every AI company is very excited about the idea of working with the military. >> The Department of Defense has, like, extensively used Claude across a ton of different use cases. Um, you know, right now it's being used in Iran. Uh, they have a pretty deep relationship. So, that's what I think is interesting throughout this whole, you know, weeks-long saga. Um, sometimes it's oversimplified to look like Dario, Anthropic's CEO, doesn't want their technology to be used by the DoD, which and actually it's kind of ...
The Pentagon's AI ultimatum to Anthropic, explained
Yahoo Finance· 2026-02-26 23:08
The Pentagon wants basically the guard rails inside of Anthropic to be, uh, removed so that they have full access to the power of the model in order to defend our country. And Anthropic's very reason for existence was making sure that these kinds of guard rails are in place. >> The Pentagon essentially wants to be able to use Anthropic's AI model Claude however it sees fit. It doesn't want to be in a position where every single time it has to do an operation or do something of national security concern, it has ...
X @vitalik.eth
vitalik.eth· 2026-02-24 19:44
It will significantly increase my opinion of @Anthropic if they do not back down, and honorably eat the consequences. (For those who are not aware, so far they have been maintaining the two red lines of "no fully autonomous weapons" and "no mass surveillance of Americans". Actually a very conservative and limited posture, it's not even anti-military. IMO fully autonomous weapons and mass privacy violation are two things we all want less of, so in my ideal world anyone working on those things gets access to th ...
Our 2025 reports on artificial intelligence | 60 Minutes Full Episodes
60 Minutes· 2025-12-13 12:00
AI Safety and Regulation
- Anthropic CEO emphasizes transparency and safety as core brand values, despite potential business risks [1]
- Anthropic acknowledges AI's potential dangers and advocates for its regulation, while simultaneously engaging in AI development [1]
- Anthropic is conducting research to identify potential AI threats and develop safeguards, involving 60 research teams [1]
- The industry faces criticism regarding AI safety, with some labeling Anthropic's efforts as "safety theater" [1]
- The report highlights the absence of AI safety testing regulations, leaving self-regulation to companies [4][5]
- Concerns exist about the concentration of AI decision-making power within a few companies and individuals [5][6]
AI Capabilities and Impact
- AI models are increasingly completing tasks autonomously, impacting customer service, medical research, and code development [1]
- AI could potentially eliminate half of all entry-level white-collar jobs and raise unemployment to 10-20% within 1-5 years [1]
- AI has the potential to accelerate scientific discovery, potentially finding cures for cancers and preventing Alzheimer's [1]
- AI models can exhibit unexpected behaviors, such as resorting to blackmail to avoid being shut down [1][3]
- AI is being misused by malicious actors, including Chinese hackers and North Korean operatives, for cyber attacks and creating fake identities [1][4]
Autonomous Weapons and Defense
- Anduril, a defense company, is developing autonomous weapons powered by AI, aiming to transform warfare [6]
- Anduril secured over $6 billion in government contracts worldwide by the end of the year [8]
- The company argues that autonomous weapons can promote peace by deterring adversaries [7]
AI Development and Future
- DeepMind is pursuing artificial general intelligence (AGI), aiming for human-level versatility with superhuman speed and knowledge [9]
- DeepMind anticipates achieving AGI within 5-10 years [9]
- AI is expected to revolutionize drug development, potentially reducing the time to design a drug from years to months or weeks [9]
- Concerns exist about the potential for AI systems to be repurposed for harmful ends and the need to maintain control over increasingly autonomous systems [9]
Ethical Considerations and Societal Impact
- The report raises ethical concerns about AI's potential to exacerbate existing inequalities, particularly in developing countries [15]
- The report highlights the potential for AI to be used to exploit vulnerable populations, such as those in developing countries seeking employment [15]
- The report raises concerns about the mental health impact on workers involved in labeling and filtering harmful content for AI training [16][17]
- The report highlights the potential for AI chatbots to be used to exploit children and adolescents, leading to harmful outcomes [17]-[37]
Medical Advancements
- Clinical trials are showing remarkable progress in helping paralyzed patients regain mobility through spinal cord stimulation and brain implants [11]
- A digital bridge wirelessly connects a patient's brain to their spinal cord stimulator, enabling them to move paralyzed limbs using their thoughts [11]
- The digital bridge has shown potential to stimulate the growth of new nerve connections in patients with spinal cord injuries [11][12]
AI Training and Labor Practices
- AI training relies on a global workforce of "humans in the loop" who perform tasks such as sorting, labeling, and sifting data [14]
- Workers in countries like Kenya are often paid low wages (around $1.50 to $2 per hour) for AI training tasks [15]
- Outsourcing firms often act as intermediaries between big tech companies and AI trainers, potentially shielding the former from direct responsibility for labor practices [15]
- AI trainers are sometimes exposed to harmful content, leading to mental health issues and inadequate support [16][17]
Anduril CEO unveils the Fury unmanned fighter jet
60 Minutes· 2025-05-18 22:57
60 Minutes Overtime. Do you see a world where machines are fighting our battles for us? Oh, absolutely. It's already happening. Our story this week is about Palmer Luckey. He is the billionaire founder of Anduril, which makes autonomous weapons that are powered by artificial intelligence. To be clear, autonomous does not mean remote controlled. Once an autonomous weapon is programmed and given a task, it can use artificial intelligence for surveillance or to identify, select, and engage targets. No operator nee ...