Army pushes two new strategies to safeguard troops under 500-day AI implementation plan
The U.S. Army this week announced steps it is taking to safeguard its troops as it looks to bolster its ability to successfully implement artificial intelligence under a 500-day plan.
The Army’s acquisition, logistics and technology (ALT) office on Wednesday announced two new initiatives, "Break AI" and "Counter AI," which will test rapidly evolving AI technologies for reliable field use and guard against adversaries’ use of AI against the U.S., the Federal News Network reported this week.
The Army is looking not only at how to implement AI safely across the military branch but also at how to develop it securely in coordination with outside parties.
"One of the obstacles for the adoption is how do we look at risk around AI? We have to look at issues around poisoned datasets, adversarial attacks, trojans and those types of things," Young Bang, principal deputy to the assistant secretary of the Army’s ALT, reportedly said during a tech conference in Georgia Wednesday.
"That’s easier to do if you’ve developed it in a controlled, trusted environment that [the Department of Defense] or the Army owns, and we’re going to do all that," he added. "But this really looks at how we can adopt third-party or commercial vendors’ algorithms right into our programs, so that we don’t have to compete with them.
"We want to adopt them."
Bang’s announcement came as the Army wrapped up a 100-day sprint that looked at how to incorporate AI into its acquisitions process.
The goal was to examine ways the Army could develop its own AI algorithms while also working alongside trustworthy third parties to develop the technology as securely as possible, the Federal News Network reported.
The Army is now using what it learned over the 100-day sprint to test and secure AI implementation across the board, develop systems for Army use and bolster its defenses against adversarial use of AI.
The "Break AI" initiative will focus on how AI could evolve under a field known as artificial general intelligence (AGI): software that aims to match or surpass human cognitive abilities, with the potential for sophisticated decision-making and learning capabilities.
This technology, which has not yet been fully realized, aims to improve on current AI software, which for now can only generate predictions from the data it is supplied.
But this next phase means not only developing but also defending against a technology whose behavior is not yet well defined, meaning the Army has its work cut out for it.
"It’s about the notion of how we actually test and evaluate artificial intelligence," Bang reportedly said. "As we move towards AGI, how do we actually test something that we don’t know what the outcome of or what the behaviors are going to be?
"You can’t test it the way that we test deterministic models, and we need industry’s help here."
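Bang’s distinction between deterministic and non-deterministic testing can be illustrated with a toy sketch. The "models" below are hypothetical stand-ins, not anything the Army or industry actually uses: a deterministic function can be checked with a single exact assertion, while a system with variable outputs has to be tested statistically, over many runs, against a tolerance band.

```python
import random
import statistics

# Deterministic model: the same input always yields the same output,
# so one exact assertion is a complete test.
def deterministic_model(x):
    return 2 * x + 1

assert deterministic_model(3) == 7

# Non-deterministic model (a stand-in for an AI system): outputs vary
# run to run, so we test statistical properties of many outputs
# rather than a single exact value.
def stochastic_model(x, rng):
    return 2 * x + 1 + rng.gauss(0, 0.1)

rng = random.Random(42)
outputs = [stochastic_model(3, rng) for _ in range(1000)]

# Assert the behavior stays within a tolerance band, not at an exact value.
assert abs(statistics.mean(outputs) - 7) < 0.05
assert statistics.stdev(outputs) < 0.2
```

Even this sketch only covers known output distributions; the harder problem Bang describes is evaluating systems whose behaviors cannot be enumerated in advance.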
The second part of the Army’s 500-day plan is a bit more straightforward, explained Jennifer Swanson, deputy assistant secretary of the Army’s office of Data, Engineering and Software.
"We want to make sure our platforms, our algorithms and our capabilities are secure from attack and from threat, but it’s also about how we counter what the adversary has," she reportedly said. "We know we’re not the only ones investing in this. There’s lots of investment happening in countries that are big adversarial threats to the United States."
The Army officials remained tight-lipped about the specific AI capabilities the branch will pursue, citing the operational security sensitivities of the initiatives.
Swanson added, however: "As we start to learn and figure out what we’re going to do, there’s going to be things we share."