Please use this identifier to cite or link to this item: https://dair.nps.edu/handle/123456789/5034
Full metadata record
DC Field | Value | Language
dc.contributor.author | Steven Hedgepeth, Ryan Mark Tagatac | -
dc.date.accessioned | 2023-12-27T18:16:32Z | -
dc.date.available | 2023-12-27T18:16:32Z | -
dc.date.issued | 2023-12-27 | -
dc.identifier.citation | Published--Unlimited Distribution | en_US
dc.identifier.uri | https://dair.nps.edu/handle/123456789/5034 | -
dc.description | Acquisition Management / Graduate Student Research | en_US
dc.description.abstract | Artificial intelligence (AI)/Large Language Models (LLMs) have shown promise in various tasks, but their use in authoring source selection evaluation factors in the Department of Defense (DOD) is not well studied. Understanding the effectiveness of AI-authored evaluation factors is crucial for reliable decision-making. The integration of LLM technology in the DOD aligns with the rise of AI. This exploratory analysis investigated DOD acquisition professionals’ confidence in and bias toward AI-authored evaluation factors. Surveys at George Mason University (GMU) and Naval Postgraduate School presented professionals with requirements documentation and human- or AI-generated evaluation factors. Due to statistically significant differences between the surveys, only the GMU data was relied on. Statistical and qualitative analyses evaluated variations in confidence ratings across different participant groupings and authorship disclosure. Results reveal reduced confidence in, and slight algorithm aversion to, AI-authored factors versus human-authored ones, especially among older professionals. Despite limitations, including sampling constraints, notable discrepancies emerge in perceptions of AI versus human outputs. Recommendations include the development of an AI guide to aid responsible use of AI in acquisitions. Further research with larger, varied samples and various AI tools is needed. This initial work advances AI integration policy discussions and public trust in defense acquisitions. | en_US
dc.description.sponsorship | Acquisition Research Program | en_US
dc.language.iso | en_US | en_US
dc.publisher | Acquisition Research Program | en_US
dc.relation.ispartofseries | Acquisition Management; NPS-AM-24-015 | -
dc.subject | Artificial Intelligence | en_US
dc.subject | Source Selection | en_US
dc.subject | FAR | en_US
dc.subject | DFARS | en_US
dc.subject | ChatGPT | en_US
dc.title | Assessing DoD Confidence and Bias in AI/LLM Authored Evaluation Factors | en_US
dc.type | Thesis | en_US
Appears in Collections: NPS Graduate Student Theses & Reports

Files in This Item:
File | Description | Size | Format
NPS-AM-24-015.pdf | Student Thesis | 32.76 MB | Adobe PDF
Student Poster.pdf | Student Poster | 662.85 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.