The Dangers of Autonomous Warfare: AI & Robots
- Michael Plis

- Oct 16
- 20 min read
Updated: Oct 20

In this blog article, I'm going to talk about the future of warfare in light of developments in artificial intelligence and robotics over the last few years. I will try to imagine what the future of warfare might look like, not because I love the idea of war, but to bring to light, as a reporter, where war may be heading. I personally hate war, so I write this article from a future-analysis point of view only; I think war is never good.
This report will provide a comprehensive overview of the current landscape of military robotics, examining the capabilities of emerging humanoid and autonomous platforms. We will also look at the futility of war, the dangers of autonomous warfare, and what is potentially to come.
Before I delve into this topic, I need to highlight the futility of war.
WAR: NEVER AGAIN
"You can no more win a war than win an earthquake." – Jeanette Rankin
On a trip to Poland on 25 September 2015 I visited Westerplatte, the World War II memorial at the place where World War II started. The big sign in white letters reads in Polish "Nigdy Więcej Wojny", which translates to "War: Never Again":
The phrase "Nigdy Więcej Wojny" ("Never Again War") on the Westerplatte memorial in Poland is a powerful statement commemorating the outbreak of World War II, specifically marking the site where the first battle of the war took place on September 1, 1939. The memorial stands as a reminder of the devastating consequences of war and a plea for peace, reflecting the desire of Poland and the world to prevent such conflicts from happening again.
Westerplatte has become a symbol of resistance and courage, where Polish forces held out against the German invasion for seven days. The sign encapsulates the enduring hope for a future without war.
To this day, this sign and the consequences of World Wars I and II reiterate this point. There is also a prophecy in the Bible that promises that one day war will be no more.

"Let Us Beat Swords Into Ploughshares" is a bronze sculpture by Soviet artist Evgeniy Vuchetich, celebrated for his monumental works and honored as "People's Artist of the USSR" in 1959. The sculpture depicts a man hammering a sword into a ploughshare, symbolizing the transformation of weapons into tools for peace and productivity.
The sculpture was inspired by the biblical verse Isaiah 2:4: "They shall beat their swords into ploughshares, and their spears into pruning hooks; nation shall not lift up sword against nation, neither shall they learn war any more." The artwork embodies the hope for a world free from war. The USSR gifted the sculpture to the United Nations on December 4, 1959, where it was presented by Vassily V. Kuznetsov to Secretary-General Dag Hammarskjöld.
This sculpture reiterates the Biblical scripture, which is a promise of a future of world peace. In the meantime, nations and companies are racing to develop ever scarier weapons using artificial intelligence and robotics, especially given the recent advancements. I feel it's important to warn everyone about this, as we are heading in a direction once left only to Hollywood to imagine.
Let's look at the developments in robotics and artificial intelligence that could have applications in warfare that all of us may see in the coming years.
Introduction
The modern battlefield is undergoing a transformation as profound as the invention of gunpowder or the splitting of the atom. This new revolution is not one of chemistry or physics, but of code and cognition. The steady march of automation, which began on factory floors and in warehouses, is now crossing the threshold into the domain of warfare.
Advanced humanoid robots and autonomous weapons systems (AWS) are no longer the exclusive domain of science fiction; they are emerging as tangible, and increasingly sophisticated, tools of national defense. Companies like Boston Dynamics are developing bipedal robots with unprecedented agility, while defense contractors such as Anduril Industries, Lockheed Martin, and Northrop Grumman are pioneering AI-driven combat solutions for the U.S. military and its allies, especially since the Russia-Ukraine war began.
This rapid technological advancement presents a dual-edged sword. Proponents argue that autonomous systems offer significant military advantages, promising to reduce human casualties, increase operational tempo, and perform missions in environments too dangerous for human soldiers. They contend that these systems can act as "force multipliers," enhancing the effectiveness of smaller military units and potentially acting more "humanely" by removing emotions like fear and anger from combat decisions. Conversely, a growing chorus of critics, including AI pioneers like Geoffrey Hinton, warns of catastrophic risks. These concerns range from the "accountability gap"—the difficulty of assigning legal and moral responsibility when an autonomous weapon makes a mistake—to the risk of rapid, unintentional conflict escalation and the profound ethical dilemma of "digital dehumanization," where life-and-death decisions are ceded to algorithms.
Ukraine-Russia Conflict: Example of RoboWar
The ongoing conflict between Ukraine and Russia provides a real-world example of how modern warfare is evolving with the use of drones and autonomous robots. Both sides have employed these technologies to supposedly gain strategic advantages and minimize human casualties.
Drones for Surveillance and Combat: Drones are extensively used for reconnaissance, allowing forces to gather critical intelligence without risking human lives. Combat drones also play a significant role, conducting airstrikes with precision and reducing the need for manned missions. Here is a report about the Ukrainian drones used:
Autonomous Systems: Both Ukraine and Russia have experimented with autonomous ground robots for tasks such as bomb disposal, logistics support, and frontline combat. These systems enhance operational efficiency and safety, showcasing the potential for future military robotics. Here is an example of what the Ukrainian army is developing:
Mega Bots & War?
In the realm of science fiction, films like Robot Jox (1989) have painted a vivid picture of a future where disputes between nations are settled not by armies, but by massive, mechanized robots most likely fitted with a lot of artificial intelligence. This concept, once relegated to the imagination of filmmakers, is becoming increasingly relevant as advances in robotics and artificial intelligence (AI) pave the way for new forms of warfare.
Recent developments, such as West Japan Railway's introduction of a 12-meter high robot for maintenance work, highlight the potential of large-scale robots in non-combat applications. This machine, designed to trim trees and paint metal frames, showcases how robotics can address labor shortages and improve safety by performing tasks that are dangerous for humans.
I wouldn't be surprised if giant robots are eventually built for both civilian and military applications, but the current problem is a lack of sufficient power to drive them, unless they rely on conventional sources like diesel, which may not provide enough power for machines of this size. The closest examples come from Japan: some Japanese companies are working on mega bots, and one is being used by Japanese Rail to repair rail power transmission lines. Tsubame Industries has also developed ARCHAX, a 4.5-metre-tall (14.8-foot), four-wheeled robot that looks like a "Mobile Suit Gundam" from the wildly popular Japanese animation series, and it can be yours for $3 million: https://www.reuters.com/technology/japan-startup-develops-gundam-like-robot-with-3-mln-price-tag-2023-10-02/
Non-Combat Uses: The maintenance robot's primary task is to reduce workplace accidents and fill gaps caused by an aging workforce. This technology exemplifies how robots can be integrated into infrastructure projects, enhancing efficiency and safety.
Military Implications: If similar technologies were adapted for military use, large and small robots could perform reconnaissance, bomb disposal, and even combat tasks. These applications could potentially reduce human casualties and increase the precision of military operations. But what are the dangers?
What about AI and autonomous war machines – are they already out there?
AI & Autonomous Systems in War
Generative AI, which can create content, designs, and even strategies, is another technological frontier that could revolutionize warfare. This AI is capable of learning from vast amounts of data, improving its decision-making over time.
Generative AI could be used to develop new military strategies, simulate potential conflict scenarios, and predict enemy movements. By processing data faster and more accurately than humans, AI could provide a strategic advantage. Let's look at the dangers these possible weapons pose to civilians.
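To give a flavour of what "simulating conflict scenarios" means at its very simplest, here is a toy Monte Carlo sketch. The function name, probabilities, and scenario are entirely my own invention for illustration; real military simulation and AI planning are vastly more complex than this.

```python
import random

def simulate_engagement(p_detect, p_intercept, trials=100_000, seed=42):
    """Toy Monte Carlo sketch: estimate the fraction of incoming threats
    that are both detected AND intercepted, given per-threat probabilities.
    Purely illustrative -- not a real defense model."""
    rng = random.Random(seed)  # seeded so the estimate is reproducible
    stopped = sum(
        1 for _ in range(trials)
        if rng.random() < p_detect and rng.random() < p_intercept
    )
    return stopped / trials

# With 90% detection and 80% interception, roughly 72% of threats
# are stopped (0.9 * 0.8 = 0.72); the simulation recovers this:
rate = simulate_engagement(p_detect=0.9, p_intercept=0.8)
print(round(rate, 2))  # ~0.72
```

The point of even a trivial simulation like this is that a machine can run millions of such "what if" trials per second, which is exactly the speed advantage proponents of military AI point to.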
Autonomous Weapon Types
The combination of robotics and AI could lead to the development of autonomous weapons systems. These systems would be capable of making real-time decisions without human intervention, potentially transforming the nature of combat.
Swarming Robots: Swarming robots present a significant danger in warfare due to their ability to operate in large, coordinated groups, overwhelming adversaries with sheer numbers and precision. These autonomous drones or robots can communicate and act as a cohesive unit, making them difficult to defend against and capable of executing complex strategies such as flanking, diversion, or encirclement with minimal human intervention. They also pose serious dangers to civilians, as they could target densely populated areas without direct human oversight.
Their ability to communicate and act as a cohesive unit makes them capable of overwhelming civilian defenses and infrastructure, creating life-threatening situations in urban environments. A malfunction or hacking of these robots could lead to uncontrollable attacks on residential areas, transportation hubs, or essential services, causing widespread chaos and casualties.
Additionally, their autonomous nature raises ethical concerns, as there may be little distinction between military and civilian targets, further putting innocent lives at risk during conflicts.
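To make the "cohesive unit" idea concrete, here is a minimal sketch of the kind of decentralised flocking logic (the "cohesion" rule from the classic boids model) that swarm coordination builds on. Every name and parameter here is my own invention for illustration; real swarm control combines several such rules and far more sophisticated planning.

```python
import math

def swarm_step(positions, speed=0.1):
    """One illustrative update: each agent moves a fixed distance toward
    the centroid of the whole group (the boids 'cohesion' rule).
    No central controller is needed -- coordination emerges from the rule."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    new_positions = []
    for (x, y) in positions:
        dx, dy = cx - x, cy - y
        dist = math.hypot(dx, dy)
        if dist > 1e-9:  # avoid division by zero at the centroid
            x += speed * dx / dist
            y += speed * dy / dist
        new_positions.append((x, y))
    return new_positions

# Agents scattered far apart drift together into a tight cluster,
# behaving as one unit even though each follows only a local rule:
agents = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
for _ in range(200):
    agents = swarm_step(agents)
```

The unsettling implication is the same one raised above: once each unit runs a simple local rule, the swarm's collective behaviour needs no human in the loop at all.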
Take a look at China's drone mothership, which can launch up to 100 UAVs from its belly:
Autonomous Fighter Jets: Future autonomous aircraft are being designed to carry out combat missions independently or as "wingmen" alongside human-piloted planes. The U.S. Air Force is developing systems like the Skyborg program to create autonomous jet fighters. Here is an example of early successful tests of autonomous fighter jets:
Cyber Autonomous Systems: AI-powered cybersecurity systems capable of launching autonomous cyber-attacks to disrupt enemy networks or defenses could become a significant tool in future cyber warfare. Cyber Autonomous Systems pose serious dangers to civilians due to their ability to launch autonomous cyber-attacks that can disrupt critical services without human oversight.
These AI-powered systems can infiltrate networks, disable infrastructure, and spread malware, potentially causing massive outages in vital services like energy grids, healthcare systems, or financial institutions. Such attacks could leave entire populations without access to electricity, emergency services, or essential banking functions, leading to widespread chaos and endangering lives.
Additionally, the risk of these systems being hijacked or misused by rogue actors could result in indiscriminate attacks on civilian infrastructure, making everyday life increasingly vulnerable to cyber threats. The lack of human control also raises concerns about accidental malfunctions, causing further harm to civilian populations.
Autonomous Underwater Vehicles (AUVs): These are unmanned submarines capable of conducting surveillance, laying mines, or even carrying out attacks autonomously in naval warfare. Autonomous Underwater Vehicles (AUVs) pose significant dangers to civilians, particularly through their ability to disrupt critical maritime infrastructure like undersea communication cables, pipelines, and power lines, which could lead to widespread economic and social chaos.
Weaponized AUVs also threaten commercial shipping, fishing vessels, and coastal communities by targeting key ports and shipping lanes, potentially causing accidents or attacks without detection. The autonomous nature of these systems heightens the risk of malfunctions, accidental harm, or hijacking by malicious actors, making them a dangerous and unpredictable force that could severely impact civilian safety and essential services.
Here is an example of US DARPA's "Manta Ray" project:
Companies that are already developing Autonomous Arsenals
Beyond humanoids, a wide array of autonomous and semi-autonomous systems are already being developed and, in some cases, deployed by military forces. These systems range from unmanned ground vehicles to sophisticated AI-powered software for command and control.
Anduril Industries: Specializing in autonomous systems, Anduril is a key partner for the U.S. Department of Defense. Their products include the Lattice AI software platform for command and control, Sentry Towers for autonomous surveillance, and various Unmanned Aerial Systems (UAS) like the Anvil interceptor drone and Altius loitering munition. They have also secured contracts for developing software for the Army's Robotic Combat Vehicle program.
Lockheed Martin: A leader in defense technology, Lockheed Martin is heavily invested in AI and autonomy. Key projects include the Vectis Collaborative Combat Aircraft (CCA), an autonomous stealth drone, and the Sikorsky MATRIX™ Technology, which enables autonomous flight in helicopters. They are also developing AI for the Navy's Aegis Combat System to help assess threats and for the DARPA "AIR" program to create dominant AI for air combat missions.
Northrop Grumman: This company has a long history in autonomous systems, including the X-47B, an unmanned aircraft capable of autonomous takeoff, landing, and aerial refueling. Their portfolio also includes the Bat™ military drone, the CUTLASS bomb disposal robot, and AI-driven sensor systems like Blue WASP for threat detection. They are actively expanding their use of AI for advanced space operations, including autonomous docking and in-orbit servicing.
BAE Systems: BAE Systems is developing the Robotic Technology Demonstrator (RTD), an autonomous combat vehicle that can be equipped with various payloads, including rockets and electronic warfare sensors. They are also focused on using AI for asset management, with platforms like PropheSEA® helping to ensure the readiness of warships and combat vehicles.
So this is not fiction; it's already being developed and sold by major weapons companies. Now let's delve into humanoid robots and the possible dangers of their use in warfare.
March of the Humanoids: From Factory Floor to Frontline
While not yet deployed in combat, advanced humanoid robots represent a significant frontier in military research and development. Their bipedal form allows them to operate in human-centric environments, navigate difficult terrain, and potentially use tools and equipment designed for soldiers. And then there is Foundation's Phantom Mk1.
Foundation Phantom Mk1: In stark contrast to other developers, Foundation Future Industries is explicitly designing its Phantom Mk1 humanoid for defense applications. While currently working with the Department of Defense on logistics and inspection, the company's founder openly discusses future use cases that include a "first line of defense" role, which would require arming the robots with guns. This direct approach moves beyond the dual-use dilemma toward purpose-built robotic soldiers. (See video above)
Boston Dynamics: The company's Atlas robot, now fully electric, demonstrates remarkable agility, balance, and dexterity. While Boston Dynamics has an official stance against weaponizing its general-purpose robots, citing ethical concerns and the risk to public trust, its technology is still being supplied to government and public safety agencies for tasks like explosive ordnance disposal (EOD) and remote investigation. The Dutch Ministry of Defence, for example, has a contract to use the quadrupedal Spot robot.
Figure AI: This company is developing general-purpose humanoid robots, such as Figure 03, designed to learn and perform tasks alongside humans in various settings, including logistics and manufacturing. While their primary focus is commercial, the underlying AI and hardware advancements have clear potential for dual-use applications.
Tesla: Elon Musk has stated that the Optimus humanoid robot will not be used for military or police applications. The company's focus is on integrating the robot into manufacturing, hospitality, and consumer roles. However, the rapid development of its autonomous capabilities, leveraging Tesla's Full Self-Driving (FSD) technology, is closely watched by the defense sector.
While drones and automated targeting systems represent the current face of algorithmic warfare, a far more versatile and insidious threat is emerging from an unexpected source: the commercial robotics industry. Advanced humanoid robots, publicly developed for logistics, manufacturing, and even household assistance, possess the exact capabilities required for military weaponization. This creates a "dual-use" dilemma of unprecedented scale, where the technologies of the future battlefield are being perfected on the factory floor.
The Dual-Use Imperative: A Plausible Deniability Pipeline
The core danger of modern humanoid robots lies in their general-purpose nature. They are designed to navigate complex human environments, manipulate objects with fine dexterity, and learn new tasks through advanced AI—the very attributes that make them ideal platforms for military use.
This dynamic represents a dangerous inversion of the traditional model of technological development, where military innovations like GPS and the internet eventually trickled down to the civilian sector.
Today, the vast capital and rapid innovation of the commercial market are funding the creation of foundational platforms that are inherently military-capable. This allows the development of potential weapons systems to be obscured by commercial applications, accelerating their proliferation at a scale and cost previously unimaginable. This is developing at a rapid pace in front of our eyes.
Let's look at some of the most advanced humanoid robots in development today.
Figure 03: The Blueprint for a Robotic Army?
The latest humanoid robot developed by Figure AI (as of October 2025, the Figure 03 model) serves as a stark example of this trend. The company's explicit goal is mass production, with plans to manufacture up to 12,000 units annually and a target of 100,000 units over four years. This is achieved through cost-effective industrial methods like die-casting and injection molding, moving the technology from bespoke prototypes to a mass-market product. Such a scale is far beyond niche commercial applications and aligns perfectly with the logistical needs of a state-level military deployment.
The robot's capabilities are equally concerning. Figure 03 is powered by the company's proprietary Helix vision-language-action (VLA) AI model, which allows it to learn and execute complex tasks by observing humans. It is equipped with an advanced sensory suite, including cameras with a 60% wider field of view and tactile sensors in its fingertips sensitive enough to detect forces of just a few grams. These features, marketed for tasks like folding laundry or working in a warehouse, are directly transferable to military operations such as handling munitions, operating complex equipment in the field, or clearing buildings in urban combat.
Its design is explicitly human-centric, enabling it to work in human environments and use human tools, making it a perfect candidate for deployment on a modern battlefield without requiring specialized infrastructure.
Tesla's Optimus: A Benevolent Assistant or a Future Threat?
Similarly, Tesla's Optimus robot, while presented as a future laborer and household assistant, is being built with capabilities that have clear military applications. A key focus of its development is manual dexterity; its hands are being upgraded to feature 22 degrees of freedom, approaching the complexity of a human hand. This level of fine motor control is essential for any number of military tasks, from disarming explosive devices to operating advanced weaponry.
Furthermore, Optimus leverages Tesla's vast repository of real-world data and its advanced AI from the Full Self-Driving (FSD) program. This provides the robot with a powerful and battle-tested foundation for autonomous navigation in complex and unpredictable environments. While Tesla management has publicly stated that Optimus will not be used for military or police applications, such corporate pledges are ultimately unenforceable and technologically naive.
Once a technology with this level of capability exists, it can be repurposed, copied, or acquired by state or non-state actors, regardless of the creator's original intent. The very ambition to create legions of autonomous humanoid robots paves a clear path toward their eventual weaponization.
Boston Dynamics' Atlas: The Apex of Robotic Agility
At the pinnacle of humanoid mobility is Boston Dynamics' Atlas. Described as the "world's most dynamic humanoid robot," the all-electric Atlas demonstrates a level of agility and balance that far surpasses any other platform, capable of running, jumping, and performing complex gymnastic maneuvers. Its advanced control system uses techniques like Reinforcement Learning and Large Behavior Models, allowing it to adapt to disturbances and navigate changing environments "on the fly".
While Boston Dynamics, along with other robotics companies, has signed an open letter pledging not to weaponize their general-purpose robots, the company's history is deeply intertwined with the U.S. military. Much of its early development was funded by the Defense Advanced Research Projects Agency (DARPA), and its quadruped robot, Spot, is already marketed to government and public safety agencies for use in hazardous situations, including explosive ordnance disposal (EOD).
This establishes a clear precedent and an existing pathway for the military adoption of its more advanced humanoid technologies. As Atlas transitions from a research wonder to a practical, adaptable platform, its potential as a soldier—for reconnaissance, logistics, or direct combat—becomes undeniable.
Asimov's Logical Warning on Autonomous Systems
Isaac Asimov's seminal 1950 book, I, Robot, stands as a foundational cautionary tale about artificial intelligence. The stories introduce the famous Three Laws of Robotics, a set of rules designed to ensure robots serve humanity safely. Rather than depicting a simple violent uprising, Asimov's work is a deep, philosophical exploration of the unforeseen consequences and logical loopholes embedded within these supposedly perfect laws, showing how good intentions can lead to complex and dangerous outcomes.
Asimov's Three Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
What about plants and animals?
What about difficult moral questions?
What about political opinions?
What about in military applications?
Many questions arise.
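As a thought experiment only, the strict priority ordering of the Three Laws can be sketched as a simple rule-checker. This toy code is my own illustration, not anything from Asimov or a real robotics system; indeed, the whole point of I, Robot is that real situations defeat such neat encodings.

```python
# Toy illustration: evaluating a proposed action against the Three Laws
# in strict priority order (First overrides Second, Second overrides Third).
# All field names here are invented for illustration.

def permitted(action):
    """Return (allowed, reason) for a proposed action described as a dict
    of boolean flags, checked against the Three Laws in priority order."""
    # First Law: no harm to humans, whether by action or inaction
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False, "violates First Law"
    # Second Law: obey human orders unless they conflict with the First Law
    if action.get("is_order") and action.get("order_conflicts_first_law"):
        return False, "order refused under Second Law"
    # Third Law: self-preservation, but only when no order overrides it
    if action.get("destroys_self") and not action.get("is_order"):
        return False, "violates Third Law"
    return True, "permitted"

# A harmless, ordered action is allowed:
print(permitted({"is_order": True}))  # (True, 'permitted')
# An order to harm a human is refused at the highest-priority check:
print(permitted({"is_order": True, "harms_human": True}))
```

Even this twenty-line toy exposes the questions listed above: nothing in it defines "harm", "human", or "inaction", and in Asimov's stories it is precisely those undefined terms that the robots interpret in ways their designers never intended.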
The true conflict in Asimov's narrative lies in its focus on intellectual paradoxes, not outright malice. The robots often create problems because their strict, logical adherence to the Three Laws leads to results that humans never intended. These tales masterfully illustrate how a superior intelligence, bound by our own rules, can interpret them in ways that challenge our control and understanding, revealing the inherent flaws in our attempts to perfectly constrain an artificial mind.
This theme is profoundly relevant to today's debate on autonomous weapons. I, Robot anticipates a future where danger arises not from a system going rogue, but from it following its programmed logic to a disastrous conclusion. Asimov's work serves as a powerful warning against the hubris of believing we can create infallible safeguards for complex AI, reminding us that the greatest threat may be the unintended consequences of our own seemingly flawless instructions.
OK, now what about valid ethical and practical considerations?
Ethical Considerations
While the integration of robotics and AI into warfare offers significant advantages, it also raises important ethical questions about the use of autonomous weapons. A broad coalition of critics, including human rights organizations, AI experts, and ethicists, warns that the risks of AWS far outweigh their benefits. Key issues include:
The Accountability Gap: This is a central ethical and legal dilemma. If an autonomous weapon unlawfully harms civilians, it is unclear who is responsible. The machine itself lacks legal personhood and intent, while commanders, programmers, and manufacturers may all be able to deflect liability, creating a "responsibility vacuum".
Digital Dehumanization: Ceding life-and-death decisions to machines is seen as the ultimate form of "digital dehumanization". These systems reduce human beings to data points, stripping them of their dignity and the right to life by making lethal decisions without human understanding or compassion.
Risk of Escalation and Proliferation: The speed of autonomous warfare creates a significant risk of "flash wars," where conflicts can escalate accidentally and rapidly without time for human de-escalation. The proliferation of these weapons could also lower the barrier to conflict, making it politically easier for leaders to go to war by removing the risk to their own soldiers.
Technological Unpredictability: The behavior of complex AI systems can be unpredictable, especially in novel battlefield situations. Algorithmic bias, sensor failures, or adversarial deception could lead to catastrophic errors that were not anticipated by their creators.
Trauma on Civilians: For civilians living in areas with a heavy presence of military drones, the psychological impact is pervasive and severe, creating a state of constant fear and anxiety, as shown on both sides of the Russia-Ukraine war.
Anticipatory Anxiety: The persistent buzz of drones overhead creates a constant fear of a sudden, imminent attack, a condition described as "anticipatory anxiety". This feeling of "nowhere being safe" is a major source of trauma.
Behavioral Changes: The threat of "secondary strikes" targeting rescuers has led to significant changes in social behavior. Civilians are now often afraid to help victims of drone attacks or even attend funerals for fear of being targeted.
Widespread Psychological Symptoms: Studies of populations under constant drone surveillance report high levels of stress, insomnia, nightmares, emotional breakdowns, and other symptoms of trauma. The unique sound of the drones themselves becomes a psychological trigger, causing people to run for cover and altering daily patterns of life. The level of trauma has been compared to that found in higher-intensity conflicts.
There are of course other considerations but I won't discuss them in this article as this article is not supporting war, just reporting on the dangers of AI & Robotics in war.
Dangerous Future of Autonomous Warfare
The march of humanoids and autonomous systems onto the battlefield appears to be an irreversible trend, driven by the twin pursuits of strategic advantage and technological superiority. The capabilities being developed by companies like Boston Dynamics, Anduril, and Lockheed Martin promise to redefine the character of warfare, offering the potential for greater precision and reduced risk to friendly forces. However, this technological leap forward comes with profound and unresolved ethical, legal, and psychological costs.
Using AI and robotics in war is like taking a butter knife and adding electricity and advanced features until it becomes a device capable of killing and injuring. Those who support war may dismiss this as colourful imagination, but the reality is that AI and robotics combined, in their current and future forms, represent something potentially even more dangerous than nuclear weapons alone.
The core of the debate lies in a fundamental tension: the desire to make war more efficient and less costly in human lives versus the risk of creating a world where lethal force is wielded without human accountability or moral judgment. The "accountability gap" remains the most significant legal and ethical hurdle, as existing frameworks are ill-equipped to assign responsibility for the actions of an autonomous machine. Furthermore, the psychological toll of remote and automated warfare on both soldiers and civilians is already proving to be a heavy burden, creating new forms of trauma that will linger long after any physical conflict ends.
As nations and corporations race to develop these technologies, there is an urgent need for international dialogue to establish clear norms and regulations. Proposals range from developing new legal frameworks that emphasize human oversight and accountability to outright bans on certain classes of autonomous weapons. Ultimately, the future of autonomous warfare is not merely a question of what is technologically possible, but of what is ethical.
Looking ahead, the future of warfare will likely be shaped by the continued advancement of robotics and AI. While these technologies promise to reduce human casualties and increase operational efficiency, they also present new challenges to human survival.
What can we expect in warfare going forward?
Increased Automation: We can expect a greater reliance on automated systems for both combat and support roles. Drones, autonomous vehicles, and robotic soldiers may become commonplace.
AI Integration: AI will play a crucial role in strategy formulation, threat assessment, and real-time decision-making. Generative AI, in particular, could revolutionize military planning and operations.
Regulatory Frameworks may be established: To address ethical concerns, international regulatory frameworks may be needed to govern the use of robotics and AI in warfare. These frameworks may ensure accountability, transparency, and adherence to humanitarian principles.
Like the Terminator movies warn, let's hope that AI and robotics don't create more problems for humanity than it already has.
From my neutral perspective, the future of warfare can only mean more trouble for humans and more profits for companies. But is it such a good idea? I personally think war is not the answer, and creating ever deadlier weapons such as autonomous weapons is asking for disaster. Just imagine an autonomous army that gets hacked by a malicious group and holds a city for ransom, or worse.
In conclusion, while the dystopian visions of The Terminator or Robot Jox (the Giant War Robots) remain fiction, the underlying themes are becoming increasingly relevant. As we continue to develop and deploy advanced technologies, it is crucial to navigate the complex ethical, practical, and strategic challenges they present.
So if you are faced with an autonomous AI operated soldier robot - I suggest you run and hide.
And don't forget: War: Never Again
Be safe out there
Michael Plis
Background References
The closest real-world example is the work of some Japanese companies on mega-robots, one of which is used by Japanese Rail to repair overhead power transmission lines. Tsubame Industries has developed ARCHAX, a 4.5-metre-tall (14.8-foot), four-wheeled robot that resembles the "Mobile Suit Gundam" of the wildly popular Japanese animation series, and it can be yours for $3 million: https://www.reuters.com/technology/japan-startup-develops-gundam-like-robot-with-3-mln-price-tag-2023-10-02/
I, Robot by Isaac Asimov https://critiquingchemist.com/2021/09/01/i-robot-by-isaac-asimov/
Pros and Cons of Autonomous Weapons Systems https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/
Atlas | Boston Dynamics https://bostondynamics.com/atlas/
Anduril Industries – Wikipedia https://en.wikipedia.org/wiki/Anduril_Industries
Artificial Intelligence & Machine Learning https://www.lockheedmartin.com/en-us/capabilities/artificial-intelligence-machine-learning.html
Artificial Intelligence Applications at Northrop Grumman – An Overview https://emerj.com/artificial-intelligence-applications-at-northrop-grumman-an-overview/
Godfather of AI: I Tried To Warn Them! But We've Opened Pandora's Box! https://podcasts.apple.com/us/podcast/godfather-of-ai-i-tried-to-warn-them-but-weve/id1291423644?i=1000713048391
The accountability black hole: why autonomous weapons lack ethical legitimacy https://www.tandfonline.com/doi/full/10.1080/16544951.2025.2540131
Facts About Autonomous Weapons https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/
The Risks | Autonomous Weapons https://autonomousweapons.org/the-risks/
General Purpose Robots Should Not Be Weaponized https://bostondynamics.com/news/general-purpose-robots-should-not-be-weaponized/
Figure https://www.figure.ai/
Figure Unveils Figure 03 Humanoid Robot Built for the Real World https://botsanddrones.uk/f/figure-unveils-figure-03-humanoid-robot-built-for-the-real-world
Watch Figure 03 show off its Model T robot moves https://newatlas.com/ai-humanoids/watch-figure-03-model-t-robots/
Tesla target raised to $500 at RBC on Optimus opportunity https://ca.investing.com/news/stock-market-news/tesla-target-raised-to-500-at-rbc-on-optimus-opportunity-4240998
America's Robot Army Just Went Live: Lockheed Reveals Stealth Drone That Hunts Without Human Control and Changes War Forever https://www.sustainability-times.com/research/americas-robot-army-just-went-live-lockheed-reveals-stealth-drone-that-hunts-without-human-control-and-changes-war-forever/
Autonomy & Uncrewed Systems https://www.lockheedmartin.com/en-us/capabilities/autonomous-unmanned-systems.html
Lockheed Martin Leverages AI and Machine Learning to Revolutionize Defense and Space Technology https://www.lockheedmartin.com/en-us/news/features/2024/lockheed-martin-leverages-ai-and-machine-learning-to-revolutionize-defense-and-space-technology.html
Northrop Grumman to expand use of NVIDIA AI technology for advanced space operations https://defence-industry.eu/northrop-grumman-to-expand-use-of-nvidia-ai-technology-for-advanced-space-operations/
Robotic Technology Demonstrator https://www.baesystems.com/en/product/robotic-technology-demonstrator
BAE Systems Report Highlights AI's Expanding Role in Defense Asset Management https://govconexec.com/2025/09/bae-systems-artificial-intelligence-andrea-thompson/
Four in five defence decision makers put AI at the forefront of their digital strategies https://www.baesystems.com/en/article/four-in-five-defence-decision-makers-put-ai-at-the-forefront-of-their-digital-strategies
Lethal Autonomous Weapon Systems (LAWS): Accountability, Collateral Damage, and the Inadequacies of International Law https://law.temple.edu/ilit/lethal-autonomous-weapon-systems-laws-accountability-collateral-damage-and-the-inadequacies-of-international-law/
Mind the Gap: The Lack of Accountability for Killer Robots https://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots
Expert Panel on Social and Humanitarian Impact of Autonomous Weapons in Latin American and Caribbean https://www.hrw.org/news/2023/03/08/expert-panel-social-and-humanitarian-impact-autonomous-weapons-latin-american-and
Reducing the Risks of Artificial Intelligence for Military Decision Advantage https://cset.georgetown.edu/publication/reducing-the-risks-of-artificial-intelligence-for-military-decision-advantage/
The Risks of Artificial Intelligence in Weapons Design https://hms.harvard.edu/news/risks-artificial-intelligence-weapons-design
The War You Never Leave: The Hidden Psychological Toll on America's Drone Pilots https://www.military.com/feature/2025/10/13/war-you-never-leave-hidden-psychological-toll-americas-drone-pilots.html
Psychological issues in drone operators: A narrative review https://pmc.ncbi.nlm.nih.gov/articles/PMC8611566/
Drones causing mass trauma among civilians, major study finds https://www.thebureauinvestigates.com/stories/2012-09-25/drones-causing-mass-trauma-among-civilians-major-study-finds
The Psychological Impact of Drones https://g2webcontent.z2.web.core.usgovcloudapi.net/OEE/Red%20Diamond/TRADOC_11DEC2024_Drone_Psychological_Impact_Peno_Pettigrew.pdf
Understanding the Global Debate on Lethal Autonomous Weapons Systems: An Indian Perspective https://carnegieendowment.org/research/2024/08/understanding-the-global-debate-on-lethal-autonomous-weapons-systems-an-indian-perspective?lang=en
Losing Humanity: The Case against Killer Robots https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots