Angus Royal
Futures Portfolio Support Officer
Introduction
2023 has been a landmark year in the development of Artificial Intelligence (AI), with major developments such as GPT-4 entering the global market. AI's impact is clear across business, government, academia, and the lives of everyday individuals.
AI has also caught the attention of high-ranking leaders within the Australian Defence Force (ADF), who see it as an emerging and disruptive technology that can give the ADF an edge in analysing data on adversaries[1]. This capability would enable policymakers, senior ADF personnel and ministers to make decisions before a potential event occurs, and to gain an advantage over any state or non-state actor seeking to harm Australia.
Using AI to build predictive models of the likely actions of state or non-state actors is a powerful capability for the ADF. However, there are risks that must be considered when using AI prediction models in decision making. Before examining those risks, it is important to understand what an AI prediction model is.
What is an AI Prediction Model and how would it be used by the ADF?
An AI prediction model is designed to analyse varied data sets in order to assess the likelihood of potential events or outcomes[2]. The types of data such a model could draw on to inform military decision making include human intelligence (HUMINT), signals intelligence (SIGINT), imagery intelligence (IMINT), measurement and signature intelligence (MASINT) and open-source intelligence (OSINT).
The model analyses the input data based on the user's query. For example, an ADF officer may ask the AI to assess the likelihood that Russia will employ a new aircraft capability in a given battle scenario.
The model would then work through all of the intelligence it can access and return a probability that Russia will use that capability. This would inform the officer's understanding of the nation-state's threat profile, helping them to develop accurate reports and countermeasures to the potential threat. However, while this would be a powerful tool for the ADF, there are risks that need to be considered before adopting AI prediction models.
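To make this concrete, the sketch below shows, in deliberately simplified form, how a prediction model of this kind turns intelligence-derived features into a probability. It is a toy illustration only: the feature names, training data and model choice are all hypothetical and bear no relation to any actual ADF system.

```python
# Toy sketch of an AI prediction model: intelligence-derived features in,
# probability of an event out. All data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per historical scenario:
# [SIGINT activity level, IMINT sightings of the platform, OSINT mentions]
X_train = np.array([
    [0.2, 0, 1],   # low activity, platform not sighted
    [0.8, 3, 9],   # high activity, multiple sightings
    [0.5, 1, 4],
    [0.9, 4, 12],
    [0.1, 0, 0],
    [0.7, 2, 6],
])
# 1 = the capability was employed in that scenario, 0 = it was not
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Current scenario, built from the latest fused intelligence picture
current = np.array([[0.85, 3, 10]])
likelihood = model.predict_proba(current)[0, 1]
print(f"Assessed likelihood the capability is employed: {likelihood:.0%}")
```

In practice any such model would be far larger and continuously retrained, but the shape is the same: structured intelligence in, a probability out.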
Risks of Using AI Prediction Models
The use of AI prediction models to inform decision making can serve as a valuable tool for the ADF, with the potential to enhance strategic and tactical planning and give the ADF an edge over potential adversaries. However, its use carries significant risks if it is not implemented and managed properly.
A prominent risk of relying on AI prediction models for military decision making is an incorrect assessment of a situation and an unnecessary recommendation of an escalated response. This can leave two nation-states caught in a feedback loop, in which each nation's AI model interprets the actions of the other as aggressive or threatening and recommends a military response, driving continuous escalation based solely on the analysis of the models.
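This dynamic can be illustrated with a deliberately simple simulation: two states each run a model that reads the other's force posture, and each model recommends raising posture whenever the other side looks elevated. The thresholds and update rule below are invented purely to show the feedback loop, not drawn from any real system.

```python
# Toy simulation of an escalation feedback loop: each state's model treats
# the other's raised posture as a threat. All numbers are invented.
def recommend(own_posture: float, observed_posture: float) -> float:
    """Raise posture whenever the other side appears elevated."""
    if observed_posture > 0.3:              # hypothetical threat threshold
        return min(1.0, own_posture + 0.2)  # recommend escalation
    return own_posture                      # otherwise hold steady

a, b = 0.2, 0.4  # initial postures: state B starts slightly elevated
for step in range(5):
    # Both models update simultaneously from the other's last posture
    a, b = recommend(a, b), recommend(b, a)
    print(f"step {step}: A={a:.1f}, B={b:.1f}")
# Postures climb to the maximum: each model's response becomes the other's
# evidence of aggression, with no human judgement anywhere in the loop.
```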
An example of this is discussed in AI and the Bomb by Dr James Johnson, a lecturer in strategic studies in the Department of Politics and International Relations at the University of Aberdeen, who explores a scenario in which the US and China rely heavily on their AI prediction models to make crucial military decisions.[3] The scenario illustrates the danger of over-reliance on AI prediction models in the military decision process, especially between two superpowers with nuclear capabilities, where unchecked escalation could result in the deaths of millions of people. It highlights the importance of human input at every level of the decision-making process when acting on the recommendations an AI prediction model makes.
Another risk is the complete replacement of humans in the decision-making process, including the analysis of intelligence. As AI is still under active development, it remains prone to failures, glitches and outages.
Relying completely on AI to develop threat assessments, intelligence reports and recommendations, without personnel who can carry out the same tasks, may put a nation-state's military at risk: the flow of intelligence to all areas of government could be cut off if the AI models fail. This would directly affect military decision making, leaving no clear oversight of the threat environment and no information with which to choose an informed and correct course of action. Because relying entirely on an AI prediction model for intelligence is extremely risky, it should be used only as one tool within the intelligence gathering, analysis and reporting process. Used in conjunction with human input, AI prediction models can improve the effectiveness and efficiency of military decision making.
Considerations For the ADF
To mitigate the risks of implementing an AI prediction model into the ADF's decision-making process, four key risk reduction functions from the NIST AI Risk Management Framework (NIST, n.d.) should be considered:
Governance
The first key risk reduction function is governance: policies, procedures and other governance mechanisms should be implemented to govern the use of AI prediction models within the ADF's decision-making process. These policies should set out how AI prediction models are used and enable checks and balances on the decisions they produce[4]. Procedures should also establish how information is vetted by a human before it is used to decide how a threat should be countered. Other governance mechanisms the ADF should consider include a decision-making process that removes bias from decisions, reducing the likelihood that outcomes are unduly influenced by information received from an AI.
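One simple way to picture such a check and balance is a gate that refuses to release a model's recommendation until the required human reviewers have signed off. The sketch below is a hypothetical illustration of the idea, not a real ADF procedure; the roles and approval count are invented.

```python
# Hypothetical governance gate: no AI recommendation is actioned until
# the required number of human reviewers have vetted and approved it.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    summary: str
    probability: float
    approved_by: list = field(default_factory=list)

def release(rec: Recommendation, required_approvals: int = 2) -> Recommendation:
    """Release a recommendation only once enough reviewers have signed off."""
    if len(rec.approved_by) < required_approvals:
        raise PermissionError(
            f"{len(rec.approved_by)}/{required_approvals} approvals recorded; "
            "recommendation withheld pending human review."
        )
    return rec

rec = Recommendation("Elevated likelihood of capability X being employed", 0.78)
rec.approved_by.append("duty intelligence officer")  # first human sign-off
rec.approved_by.append("senior analyst")             # second human sign-off
released = release(rec)  # raises PermissionError if vetting is incomplete
print("Released:", released.summary)
```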
Mapping Technical and Context-Based Requirements
The second key risk reduction function is the mapping of the technical and context-based requirements an AI prediction model must meet before it is implemented into the decision-making process. This includes the technical standards the AI must abide by to function safely and efficiently and to produce information an ADF officer can use to make a decision. Mapping out the model's decision-making process and how it evaluates information is crucial to ensure the AI operates within efficient and safe parameters[5]. Finally, an AI prediction model used in military decision making must be imparted with the norms and values the ADF abides by, so that the information it produces aligns with ethical and operational standards, remains impartial, and accounts for the consequences of any decision taken on the information available to it.
Measuring Risk at the Organisational Level
The third key risk reduction function is the evaluation and measurement of the trustworthiness of the AI prediction model at the organisational level. The key considerations here are to evaluate the performance of the AI, assess the trustworthiness of the information it produces, and monitor it for biases and errors throughout the decision-making process[6]. To implement these considerations at the organisational level, the ADF should test and evaluate the AI through war games, allowing any necessary changes to be made before it becomes fully operational.
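One concrete way to measure that trustworthiness is to score the model's stated probabilities against what actually happened in war-game runs, for instance with a Brier score, where lower means better calibrated. The figures below are invented purely for illustration.

```python
# Scoring war-game predictions with a Brier score: the mean squared gap
# between stated probability and observed outcome. Figures are invented.
predicted = [0.9, 0.2, 0.7, 0.4, 0.8]  # model's stated probabilities
observed  = [1,   0,   1,   1,   0]    # what actually happened (1 = event)

brier = sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
print(f"Brier score across war-game runs: {brier:.3f}")  # 0.0 is perfect

# A score that drifts upward across successive exercises is a signal to
# recalibrate or retrain before the model is trusted in live decisions.
```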
Managing the Risk of The AI Prediction Model
The fourth key risk reduction function is the development of organisational guidance on how to manage the risk of the AI prediction model. This includes a risk management strategy, backed by policies, that enables the ADF to manage and mitigate the risks of using AI in the decision-making process. Part of this strategy is ongoing monitoring of the AI: how it takes in intelligence, how it analyses information in conjunction with other parameters, and the recommendations it provides at the end of the process[7]. Finally, the ADF needs to identify any risks or errors in the decision-making process in order to develop and implement corrective actions for the AI.
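Ongoing monitoring of this kind can be as simple as tracking the model's rolling accuracy and flagging it for corrective action when it falls below an agreed floor. The window size and threshold in the sketch below are hypothetical.

```python
# Hypothetical ongoing-monitoring sketch: track the rolling accuracy of
# the model's calls and flag it for corrective action below a set floor.
from collections import deque

WINDOW, FLOOR = 20, 0.75  # hypothetical review window and accuracy floor
recent_hits = deque(maxlen=WINDOW)

def record_outcome(prediction_correct: bool) -> None:
    """Log whether the model's latest recommendation proved correct."""
    recent_hits.append(int(prediction_correct))
    if len(recent_hits) == WINDOW:
        accuracy = sum(recent_hits) / WINDOW
        if accuracy < FLOOR:
            print(f"ALERT: rolling accuracy {accuracy:.0%} is below the "
                  f"{FLOOR:.0%} floor; escalate for corrective action.")
```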
Conclusion
The rapid development of AI in the past few years, and its continued advancement through 2023, show that its use will keep growing across different areas of society, including the Australian Defence Force. However, we must acknowledge the risks associated with heavy reliance on AI, which could lead to disastrous results. It is therefore important to implement an AI risk management framework to minimise those risks.
Overall, while there are risks in using an AI prediction model in military decision making, it can be an effective and efficient tool if the ADF employs proper risk reduction functions.
Meikai Group
Meikai is a Professional Services Consultancy dedicated to facilitating and solving capability problems and challenges for our clients. Meikai specialises in the provision of engineering, project management and program delivery services to support the implementation of emerging and disruptive technology within the ICT, simulation, and training domains.
About the Author
Angus Royal – Futures Portfolio Support Officer
As a Graduate Project Manager, Angus is committed to supporting projects within our Professional Services Portfolio, thereby assisting the government with the implementation of emerging and disruptive technology. Initially, he will be developing his skills as the Futures Portfolio Support Officer to ensure Meikai delivers various internal projects.
References
[1] Glenn Moy, Slava Shekh, Martin Oxenham & Simon Ellis-Steinborner, Recent Advances in Artificial Intelligence and their Impact on Defence (Canberra: DST, 2020), 12.
[2] Philip Kerbusch, Bas Keijser & Selmar Smit, Roles of AI and Simulation for Military Decision Making (n.p.: NATO, n.d.), 2.
[3] James Johnson, AI and the Bomb: Nuclear Strategy and Risk in the Digital Age (Oxford: Oxford University Press, 2023), 3.
[4] Wyatt Hoffman & Heeu M. Kim, Reducing the Risks of Artificial Intelligence for Military Decision Advantage (n.p.: CSET, n.d.), 22.
[5] Hoffman & Kim, Reducing the Risks of Artificial Intelligence, 24.
[6] Hoffman & Kim, Reducing the Risks of Artificial Intelligence, 25.
[7] Artificial Intelligence Risk Management Framework (n.p.: NIST, n.d.), 8.