Daily Newsletter

20 October 2023

Autonomous systems learn faster in low-fidelity simulations, says DARPA

The US Defense Advanced Research Projects Agency (DARPA) will use low-fidelity simulations to train flexible, “generalisable autonomy”.

John Hill, 19 October 2023

A US Defense Advanced Research Projects Agency (DARPA) investigation has put forward a new theory for enabling autonomous military systems to learn faster and more efficiently.

Autonomous systems typically learn by trial and error in simulated environments, where failure carries no real-world risk. However, this approach can take months or even years to fine-tune, and it leaves autonomous systems vulnerable when they encounter unfamiliar situations or observations in the real world.

DARPA experts have taken a step back to consider the purpose of high- and low-fidelity simulations. They found that while high-fidelity simulations train autonomous systems through a memorisation approach to learning, low-fidelity environments train systems in a more general way, making them more adaptable to differences between environments.

“Modelling everything in high fidelity makes it so the AI [artificial intelligence] agent overfits to the dynamics of the simulation,” said Dr Alvaro Velasquez, DARPA’s programme manager for the effort. “When you go to the real world, nothing looks exactly like what you modelled/simulated. We want generalisable autonomy across a variety of platforms and domains.”

Moreover, the agency theorises that learning and transferring autonomy across diverse, low-fidelity simulations can lead to a more rapid transfer of autonomy from simulation to reality – “perhaps even as early as the same-day versus weeks or months with traditional approaches.”
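The idea DARPA describes here is closely related to what the machine learning literature calls domain randomisation: rather than tuning an agent against one detailed simulation, each training episode samples a different crude environment, so the policy cannot memorise any single set of dynamics. The sketch below is a hypothetical illustration of that principle, not DARPA's actual training pipeline; the parameter names and ranges are invented for the example.

```python
import random

def sample_low_fidelity_env():
    """Sample a deliberately crude environment model.

    Each training episode draws different dynamics, so an agent
    trained across many draws cannot overfit to one simulation.
    (Parameters and ranges are illustrative only.)
    """
    return {
        "mass": random.uniform(0.5, 2.0),          # coarse vehicle mass range
        "drag": random.uniform(0.0, 0.3),          # crude drag coefficient
        "sensor_noise": random.uniform(0.0, 0.1),  # observation noise level
    }

def train(policy_update, episodes=1000):
    """Run training episodes, each in a freshly randomised environment."""
    for _ in range(episodes):
        env = sample_low_fidelity_env()
        policy_update(env)  # the agent sees a different crude world each time

# By contrast, a high-fidelity pipeline would fix a single, carefully
# modelled `env` for every episode, which is exactly the setting in
# which an agent can memorise the simulator's dynamics.
```

The contrast with the quote above is the point of the sketch: a single high-fidelity `env` invites overfitting to its exact dynamics, while the randomised low-fidelity draws force the policy to generalise.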

Testing autonomous systems

AI applications in defence cover a range of systems from unmanned autonomous vehicles to intelligence, surveillance and reconnaissance sensor systems.

GlobalData estimates the total AI market will be worth $383.3bn in 2030, implying a 21% compound annual growth rate between 2022 and 2030.

DARPA’s effort to determine appropriate simulation conditions to train autonomous systems for a specific purpose may open a new market venture in the machine learning sector.

DARPA’s autonomous system training programme will feature a competition at the end of each of its two phases, with the results of the first competition used to down-select from six performers to three.

Phase 1, lasting 18 months, will develop sim-to-sim autonomy transfer techniques along with novel methods for automatically developing or refining the low-fidelity models and simulations used for transfer.

Phase 2, also 18 months, will develop sim-to-real autonomy transfer techniques, again paired with methods for automatically developing or refining low-fidelity models and simulations.

Despite ethical challenges, AI remains a key battleground technology in the defence sector

AI is the latest battleground technology for major military superpowers such as the US, China and Russia. It promises to automate and enhance all aspects of modern warfare, including training and simulation; command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR); electronic warfare; and frontline service. AI integration presents many ethical challenges, though the prospect of falling behind may put those who do not recognise AI's potential at a clear disadvantage.
