Please use this identifier to cite or link to this item:
https://dair.nps.edu/handle/123456789/5034
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Steven Hedgepeth, Ryan Mark Tagatac | - |
dc.date.accessioned | 2023-12-27T18:16:32Z | - |
dc.date.available | 2023-12-27T18:16:32Z | - |
dc.date.issued | 2023-12-27 | - |
dc.identifier.citation | Published--Unlimited Distribution | en_US |
dc.identifier.uri | https://dair.nps.edu/handle/123456789/5034 | - |
dc.description | Acquisition Management / Graduate Student Research | en_US |
dc.description.abstract | Artificial intelligence (AI) large language models (LLMs) have shown promise in various tasks, but their use in authoring source selection evaluation factors in the Department of Defense (DOD) is not well studied. Understanding the effectiveness of AI-authored evaluation factors is crucial for reliable decision-making. This exploratory analysis investigated DOD acquisition professionals’ confidence in and bias toward AI-authored evaluation factors. Surveys at George Mason University (GMU) and the Naval Postgraduate School presented professionals with requirements documentation and either human- or AI-generated evaluation factors. Because of statistically significant differences between the two surveys, only the GMU data were analyzed. Statistical and qualitative analyses evaluated variations in confidence ratings across participant groupings and authorship disclosure. Results reveal reduced confidence in, and slight algorithm aversion toward, AI-authored factors relative to human-authored ones, especially among older professionals. Despite limitations, including sampling constraints, notable discrepancies emerge in perceptions of AI versus human outputs. Recommendations include developing an AI guide to support responsible use of AI in acquisitions. Further research with larger, more varied samples and a range of AI tools is needed. This initial work advances AI integration policy discussions and public trust in defense acquisitions. | en_US |
dc.description.sponsorship | Acquisition Research Program | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Acquisition Research Program | en_US |
dc.relation.ispartofseries | Acquisition Management;NPS-AM-24-015 | - |
dc.subject | Artificial Intelligence | en_US |
dc.subject | Source Selection | en_US |
dc.subject | FAR | en_US |
dc.subject | DFARS | en_US |
dc.subject | ChatGPT | en_US |
dc.title | Assessing DoD Confidence and Bias in AI/LLM Authored Evaluation Factors | en_US |
dc.type | Thesis | en_US |
Appears in Collections: NPS Graduate Student Theses & Reports
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
NPS-AM-24-015.pdf | Student Thesis | 32.76 MB | Adobe PDF |
Student Poster.pdf | Student Poster | 662.85 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.