B Briant

The Remote Future of Conflict

Updated: Jun 26, 2023

Could improved technology reduce the terrible suffering of civilians in conflict areas, asks B Briant. (Note: this piece was commissioned and written before the start of the Ukraine War)

With algorithms and computers increasingly controlling our lives at home, several governments have looked into how they could be put to use in combat. This, however, has raised concerns among many people, who question whether such systems could tell who is a combatant and when to use lethal force.

Before I get started, it is important to take everything I say with several kilograms of salt. Many far more experienced and knowledgeable people have tried and failed to predict the future of warfare, and no doubt others are making predictions about the same subjects discussed in this article.

"Automatous Weapon System could reduce war crimes and civilian suffering"

Many ideas were meant to change the way wars were fought, but few succeeded. The notable successes include tanks, machine guns and aircraft. Each of these succeeded because it could be adapted to overcome countermeasures: tanks gained more or better armour to stop new shells; machine guns were made more portable so they could not simply be avoided; aircraft flew higher and faster to avoid anti-aircraft (AA) fire, and flares, chaff and electronic countermeasures (ECM) were then developed to confuse and disrupt AA missiles. Even in classical warfare, adaptability often allowed a general to choose favourable ground on which to initiate battle, giving them a decisive advantage.

Before we take a look at semi- and fully-autonomous weapons systems and the ethical minefield that surrounds them, I think it would be useful to look at their robot cousins - drones. A UAV is, according to the Oxford English Dictionary, “an unmanned aerial vehicle (an aircraft piloted by remote control or onboard computers)”, with a drone being “a remote-controlled pilotless aircraft or small flying device”. The word drone is often used to describe remotely-controlled vehicles of any type. If asked, most people will agree with the statement “there is no future in combat without at least the partial use of drones”. They offer many advantages over soldiers undertaking the same role. A UAV can pull tighter turns, be pushed right to the limit of its range and fly into areas with extremely strong air defence networks, all while not endangering a human pilot. The MQ-9 Reaper saw action as early as 2008, when it first hit an enemy target. Since then, Reapers have accumulated at least 2 million flight hours, with the actual figure probably closer to 5 million. In 2020, a Reaper conducted the assassination of two high-profile military commanders - Iran’s Qasem Soleimani and Iraq’s Abu Mahdi al-Muhandis.

Unmanned turrets, meanwhile, allow for higher crew survivability by removing the crew from the part of the vehicle most likely to be hit. They have drawn interest from both the USA and Russia, with Russia planning to fit one to its next-generation tank and the Americans having already put one on their wheeled M1128 Mobile Gun System (which is being retired for unrelated reasons).

The only fully autonomous weapons systems currently in service (that I know of) are the CIWS (Close-In Weapons System) guns commonly mounted on ships. They carry one radar for searching the air for missiles and aircraft, and another for locking onto a target and guiding the autocannon. The idea is that they can destroy aircraft and missiles that make it through the AA network before they can damage the ship. In the debate surrounding “killer robots”, these seem to have flown under the radar despite their widespread use.

The main advantage of fully autonomous weapon systems is their inability to “lose their cool”. They cannot be overloaded with information to the point that they make an error and endanger their squadmates. This is what makes these robots so powerful: they can take in all of the information available and make decisions based on all of it. Robots can hear things that a human brain would filter out as background noise, forewarning them of an incoming threat.

They could also make war crimes less likely, for three reasons. Firstly, they can be programmed so that it is impossible for them to commit a war crime. Secondly, they lack the motives for committing one, as they cannot experience hate or stress. Finally, whilst human soldiers may close ranks and testify for a friend who has committed a crime, robots would feel no such obligation and would hold photographic evidence of events, making people much less likely to commit a war crime in front of them.

The last major advantage of robots comes from their fire control systems: they can predict, with great accuracy, where to shoot to hit a moving target. Fire control computers are already used on armoured vehicles, and were used on later battleships such as USS Iowa. This accuracy advantage will only become more pronounced as higher-velocity projectiles - railguns, electrothermal-chemical guns and lasers - come into use, as they reduce the time the target has to escape the sights.

The concern that keeps coming up from opponents of autonomous weaponry is what happens when they go wrong. Durability and lethality are what set these systems apart from their human counterparts, but while being impervious to incoming fire is a great asset for their allies, when one “goes rogue” it becomes the main problem for forces trying to protect the civilians it might target. It is feared that such a situation could cost dozens of lives before it is resolved.
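The lead prediction a fire control computer performs can be sketched as a small intercept calculation: find the time at which a constant-speed projectile and a constant-velocity target can meet, then aim at where the target will be at that time. This is a simplified illustration of the idea (flat 2D geometry, no gravity, drag or sensor noise), not the algorithm of any real system.

```python
import math

def intercept_point(px, py, vx, vy, s):
    """Aim-point for a projectile of speed s fired from the origin at a
    target currently at (px, py) moving with constant velocity (vx, vy).
    Returns the point to shoot at, or None if no intercept is possible."""
    # The projectile and target meet when |p + v*t| = s*t, a quadratic in t:
    a = vx * vx + vy * vy - s * s
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-12:                 # target speed equals projectile speed
        if abs(b) < 1e-12:
            return None
        roots = [-c / b]
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None                # no real solution: target cannot be hit
        roots = [(-b - math.sqrt(disc)) / (2.0 * a),
                 (-b + math.sqrt(disc)) / (2.0 * a)]
    times = [t for t in roots if t > 0]
    if not times:
        return None                    # intercept would be in the past
    t = min(times)                     # earliest future intercept
    return (px + vx * t, py + vy * t)

# Target 1000 m east, crossing north at 50 m/s; shell flies at 1000 m/s.
print(intercept_point(1000, 0, 0, 50, 1000))
```

A human gunner estimates this lead by eye; the computer solves it exactly in a fraction of a second, which is exactly the edge described above.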
Doubt is also cast on the identification systems mounted on the robots: can they tell the difference between an armed insurgent and a civilian? As this is a struggle even for experienced soldiers, most would agree that this is a very real problem and an accident waiting to happen. Two solutions spring to my mind. Firstly, robots could be made to ask for permission to engage a suspicious contact, or only attack targets “lit up” by a laser designator. Secondly, autonomous weapons systems could be deployed only against combatants carrying weapons openly (“privileged” combatants), e.g. in campaigns against regular opposition forces. These solutions would eliminate almost all of the foreseen problems and hold people accountable if errors do creep through.

I think that drones and “killer robots” absolutely can be made so that civilians are safe around them. I think they will enter service and prove an invaluable tool in any general’s toolkit, providing: durable, flexible and precise fire support for infantry squads; tactical reconnaissance for armour; safer breaching methods for special operations forces; more manoeuvrable and efficient aircraft; and effective missile defence systems such as Israel’s Iron Dome network, which has been used to great effect against rocket strikes.
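The permission-to-engage safeguard amounts to a simple decision gate: the system fires on its own only when every safeguard is satisfied, escalates suspicious contacts to a human operator, and otherwise holds fire. The sketch below is purely illustrative - the field names, confidence thresholds and rules are my own assumptions for this article, not any real weapons-control interface.

```python
from dataclasses import dataclass

# Illustrative sketch only: the fields and thresholds below are invented
# for this article, not drawn from any real weapons-control system.

@dataclass
class Contact:
    armed_openly: bool       # carrying weapons openly ("privileged" combatant)
    laser_designated: bool   # "lit up" by a friendly laser designator
    confidence: float        # classifier confidence, 0.0 - 1.0

def engagement_decision(c: Contact, human_approved: bool = False) -> str:
    """Return 'engage', 'ask_operator' or 'hold'."""
    if c.laser_designated and c.armed_openly and c.confidence >= 0.95:
        return "engage"                              # both safeguards satisfied
    if c.armed_openly and c.confidence >= 0.75:
        # Suspicious but not designated: require a human in the loop.
        return "engage" if human_approved else "ask_operator"
    return "hold"                                    # default to not firing

print(engagement_decision(Contact(True, True, 0.99)))    # engage
print(engagement_decision(Contact(True, False, 0.80)))   # ask_operator
print(engagement_decision(Contact(False, False, 0.99)))  # hold
```

The key design choice is that the default branch is “hold”: any contact the system cannot positively classify is never fired upon, which is what keeps accountability with the human operator rather than the machine.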

It is my opinion that these advances in technology will save not just soldiers but civilians too, by providing more accurate fire and very fast reaction times.


