Join our research team! We are recruiting undergraduate students for volunteer and paid positions.
Our research aims to develop a usable and explainable language model that enhances tactical decision-making in critical domains such as military operations and emergency response. The key research questions driving our work include: (1) How can open-source large language models (LLMs) assist in tactical decision-making? (2) How effectively can zero-shot prompts with fixed (untuned) LLMs assess threats, such as approaching objects? (3) Can response quality be improved through retrieval-augmented generation (RAG) with domain-specific documents?
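To make the second question concrete, a zero-shot threat-assessment query is simply a single structured prompt sent to the model with no worked examples. The template, field names, and scenario below are illustrative assumptions, not the study's actual prompt:

```python
# Illustrative zero-shot prompt builder for threat assessment.
# The wording, fields, and rating scale are assumptions for this sketch;
# the real prompts used in the study may differ.
def threat_prompt(obj_type: str, speed_mps: float, distance_m: float) -> str:
    return (
        "You are a tactical decision aid. Without any examples, assess the "
        "threat posed by the following object. Answer LOW, MEDIUM, or HIGH "
        "with one sentence of justification.\n"
        f"Object: {obj_type}\n"
        f"Speed: {speed_mps} m/s\n"
        f"Distance: {distance_m} m"
    )

# Example: an approaching object scenario.
prompt = threat_prompt("quadcopter drone", 12, 150)
print(prompt)
```

Because the prompt contains no demonstrations, the model must rely entirely on its pretrained knowledge, which is exactly what question (2) probes.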
Existing work on LLMs highlights their potential in reasoning and decision support, but challenges remain in ensuring contextual accuracy and reliability. RAG has emerged as a promising approach to enhance language models by incorporating external knowledge retrieval, which we are actively investigating.
Our current research milestone focuses on optimizing a RAG pipeline tailored for tactical decision-making. This involves refining retrieval mechanisms, curating domain-relevant corpora, and systematically evaluating how retrieved knowledge improves response accuracy and contextual relevance compared to previous experiments without RAG.
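The pipeline described above can be sketched end to end: retrieve the most relevant domain documents for a query, then prepend them as context before generation. The toy corpus, term-overlap retriever, and function names below are assumptions for illustration; the actual pipeline uses curated tactical corpora and a learned retriever:

```python
# Minimal RAG sketch: keyword-overlap retrieval over a toy corpus, then
# prompt augmentation. Corpus snippets, scoring, and names are illustrative
# assumptions, not the project's actual components.
import math
from collections import Counter

CORPUS = [
    "Unidentified aircraft approaching restricted airspace require immediate escalation.",
    "Small slow-moving objects near the perimeter are often birds or hobby drones.",
    "Emergency responders should establish a triage area upwind of the incident.",
]

def tokenize(text: str) -> list[str]:
    return [w.strip(".,").lower() for w in text.split()]

def score(query: str, doc: str) -> float:
    # Term-overlap score weighted by inverse document frequency,
    # so rare domain terms count more than common words.
    q_terms = Counter(tokenize(query))
    d_terms = Counter(tokenize(doc))
    total = 0.0
    for term in q_terms:
        df = sum(1 for d in CORPUS if term in tokenize(d))
        if df:
            total += d_terms[term] * math.log((len(CORPUS) + 1) / df)
    return total

def retrieve(query: str, k: int = 1) -> list[str]:
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Augment the query with retrieved context before it reaches the LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How should an approaching unidentified aircraft be handled?"))
```

Evaluating the milestone then amounts to comparing model answers with and without the retrieved context on the same queries, which is the RAG-versus-no-RAG comparison the paragraph describes.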
Ultimately, our research will result in an LLM that integrates RAG to provide more reliable, context-aware insights. The final model will be tested in real-world scenarios with human operators, contributing to the broader adoption of AI-driven decision support in high-stress environments.
2024-Present