Bold claim: a new robotic system from Israel's Ruby AI lets tank mechanics compress 48 hours of maintenance into just two, cleaning engines, locating faults, and handling tasks that once required a human presence. But here's where it gets controversial: does replacing humans with autonomous tools in dangerous military work truly enhance safety, or does it shift risk and accountability in ways we're not ready for?
A recent reveal from the IDF shows an AI-driven robot designed to work inside tanks. Resembling a high-tech octopus with multiple robotic arms, it can reach into engines to clean, scan, locate faults, and perform other operations that once required a technician in a hazardous environment. The robot operates autonomously, reducing soldiers' exposure to chemicals, heat, and other dangers.
Ruby AI, founded by Daniel Ben Dov in Bar Lev High‑Tech Park in northern Israel, has grown to about 20 employees, most of them engineers in software, hardware, aerospace, and physical AI. Ben Dov notes that the company is building both the arms and the brain of the system, including a physical-AI core capable of learning elements of the physical world, interpreting what it sees, and executing precise actions. He emphasizes that this isn't a sterile factory robot; it's designed to work in mud, dust, heat, and cold.
The technology emerged during wartime as a way to keep soldiers away from risky maintenance tasks, and Ben Dov says the system is now fully operational.
Beyond tank maintenance, Ruby AI serves global clients, with notable achievements including refueling robots deployed in the UAE. These are claimed to be the only robots in the world that can refuel in explosive environments without human contact while adhering to strict safety standards. Ben Dov suggests that if a robot can refuel, it can also handle hazardous materials, opening the door to broader military maintenance use cases.
Looking ahead, the company envisions robots that can replace heavy wheels on tanks and bulldozers (a physically demanding, injury‑prone task), lift and manipulate loads of more than 100 kilograms with precision, and work alongside soldiers on dangerous chores. Ben Dov stresses that the aim is not to produce cute domestic robots but to tackle difficult, repetitive physical work in places where humans should not be.
Ruby AI also highlights ambitious goals such as tunnel-clearing robots for terror tunnels, arguing that mission-focused design trumps appearance. A humanoid design, Ben Dov explains, isn't feasible in narrow, muddy tunnels; the robots are built to fit the mission profile.
The company even hints at future advances in military medicine, including five‑fingered bionic arms capable of performing therapeutic procedures or operating in environments where chemical or biological threats exist, enabling care without exposing personnel.
The IDF's stance is cautious but purposeful: these developments are aimed at reducing soldier risk and shouldering the burden of dangerous, repetitive work rather than replacing soldiers on the battlefield. The central question remains: as robots take on more dangerous maintenance and support roles, how should we balance safety, accountability, and the value of human skills in high-risk environments?