On 28 June 2023 in Canberra, Sarah Kreps presented a lecture on “Weaponizing ChatGPT? National Security and the Perils of AI-Generated Texts in Democratic Societies”. This new public lecture series grew out of a two-year (2023-2025) research project, Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making. The series is generously funded by the Australian Department of Defence and is led by Professor Toni Erskine from the Coral Bell School of Asia Pacific Affairs.
Sarah Kreps is the John L. Wetherill Professor of Government, Adjunct Professor of Law, and Director of the Tech Policy Institute at Cornell University. She is also a Non-Resident Senior Fellow at the Brookings Institution and a life member of the Council on Foreign Relations.
In her lecture, Sarah Kreps explored the growing use of AI and automated systems in warfare and its potential implications for future conflicts. Kreps noted that automated systems like drones are already being used on the battlefield, and that ChatGPT and other text-generating AI technologies are starting to creep into the political arena. As AI technology advances, we are likely to see more sophisticated use of these systems in the context of warfare. Kreps suggested that generative AI systems have the potential to streamline the creation and communication of operational orders, day-to-day tasking, and similar routine activities, but that considerable caution is needed before integrating systems that communicate approvals to initiate or respond to the application of force. Deciding how much, where, and when supplies are to be delivered for a logistics resupply is one thing; deciding when or where to drop nuclear weapons is quite another. The use of AI in critical decision-making introduces ethical questions about accountability and transparency.
The potential for adversaries, both domestic and foreign, to generate large quantities of misinformation and propaganda will be challenging to respond to and is likely to leave the public deeply sceptical of any information. Moreover, previous research by Kreps has shown that people cannot discern between news stories written by AI and those written by mainstream media outlets such as the New York Times. The public, officials, and elites in power therefore all remain susceptible to persuasion and influence by the information these systems create. Overall, Kreps stressed the importance of weighing both the benefits and risks of automated systems in military operations as we move towards an increasingly AI-enabled future of warfare.
We thank Prof. Toni Erskine and her team and look forward to the next instalment in the series of “AI, Automated Systems, and the Future of War”.
Meikai Group
Meikai is a Professional Services Consultancy dedicated to solving capability problems and challenges for our clients. Meikai specialises in the provision of engineering, project management, and program delivery services to support the implementation of emerging and disruptive technology within the ICT, simulation, and training domains.
Meikai maintains an R&D/Futures branch that explores emerging technology. This ensures we foster cutting-edge thinking, skills, and competence in our workforce, so that we can continue providing value and quality to our clients. Meikai knows that research into Artificial Intelligence, Machine Learning, and Autonomous Systems is essential to building an innovative future.
About the Author
Philip Sammons – Requirements Engineer
Philip is a highly motivated Requirements Engineer with over 10 years’ experience in the Simulation, Engineering, and ICT domains. He has a strong computing and networking background, with a Bachelor of Engineering (Mechatronic) and a Master of Engineering from the University of New South Wales, Sydney. Philip has managed small, large, and diverse systems and has extensive experience in the building security and process engineering domains.