A Multi-AI Agent Framework for Interactive Neurosurgical Education and Evaluation: From Vignettes to Virtual Conversations

ABSTRACT

Background and Objectives Traditional medical board examinations present clinical information in static vignettes with multiple-choice answers, a format fundamentally different from how physicians gather and integrate data in practice. Recent advances in Large Language Models (LLMs) offer promising approaches to creating more realistic interactive clinical conversations. However, these approaches are limited in neurosurgery, where patients' communication capacity varies significantly and diagnosis relies heavily on objective data such as imaging and neurological examinations. We aimed to develop and evaluate a multi-AI agent conversation framework for neurosurgical case assessment that enables realistic clinical interactions through simulated patients and structured access to objective clinical data.

Methods We developed a framework to convert 608 Self-Assessment in Neurological Surgery (SANS) first-order diagnosis questions into conversation sessions using three specialized AI agents: Patient AI for subjective information, System AI for objective data, and Clinical AI for diagnostic reasoning. We evaluated GPT-4o’s diagnostic accuracy across traditional vignettes, patient-only conversations, and patient+system AI interactions, with human benchmark testing from ten neurosurgery residents.
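The three-agent division of labor described above can be illustrated with a minimal sketch. All class names (`PatientAgent`, `SystemAgent`, `ClinicalAgent`), the canned responses, and the toy diagnostic heuristic are hypothetical stand-ins for the LLM calls in the actual framework, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PatientAgent:
    """Hypothetical agent supplying subjective information (history, symptoms)."""
    subjective: dict

    def respond(self, query: str) -> str:
        # A real implementation would prompt an LLM with the case vignette.
        return self.subjective.get(query, "I'm not sure about that.")

@dataclass
class SystemAgent:
    """Hypothetical agent returning objective data (imaging, exam findings)."""
    objective: dict

    def respond(self, query: str) -> str:
        return self.objective.get(query, "No such study is available.")

@dataclass
class ClinicalAgent:
    """Hypothetical diagnostic agent: gathers information, then commits to a diagnosis."""
    transcript: list = field(default_factory=list)

    def ask(self, agent, query: str) -> str:
        answer = agent.respond(query)
        self.transcript.append((query, answer))
        return answer

    def diagnose(self) -> str:
        # Toy keyword heuristic standing in for LLM diagnostic reasoning.
        text = " ".join(answer for _, answer in self.transcript).lower()
        if "thunderclap" in text and "subarachnoid" in text:
            return "aneurysmal subarachnoid hemorrhage"
        return "undetermined"

def run_session() -> str:
    """One conversation session: subjective query, objective query, diagnosis."""
    patient = PatientAgent(
        {"chief complaint": "A sudden thunderclap headache an hour ago."})
    system = SystemAgent(
        {"head CT": "Hyperdensity in the basal cisterns consistent with "
                    "subarachnoid hemorrhage."})
    clinician = ClinicalAgent()
    clinician.ask(patient, "chief complaint")   # Patient AI: subjective data
    clinician.ask(system, "head CT")            # System AI: objective data
    return clinician.diagnose()
```

The sketch shows the key design choice under evaluation: the diagnosing agent sees only what it asks for, unlike a vignette where all findings are presented up front.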

Results GPT-4o showed significant performance drops from traditional vignettes to conversational formats in both multiple-choice (89.0% to 60.9%, p<0.0001) and free-response scenarios (78.4% to 30.3%, p<0.0001). Adding access to objective data through the System AI improved performance (to 67.4%, p=0.0015 and 61.8%, p<0.0001, respectively). Questions requiring image interpretation showed similar patterns but lower accuracy. Residents outperformed GPT-4o in free-response conversations (70.0% vs 28.3%, p=0.0030) while using fewer interactions, and they reported high educational value for the interactive format.

Conclusions This multi-AI agent framework provides both a more challenging evaluation method for LLMs and an engaging educational tool for neurosurgical training. The significant performance drops in conversational formats suggest that traditional multiple-choice testing may overestimate LLMs’ clinical reasoning capabilities, while the framework’s interactive nature offers promising applications for enhancing medical education.

Competing Interest Statement

The authors have declared no competing interest.

Funding Statement

This study did not receive any funding.

Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

Yes

The details of the IRB/oversight body that provided approval or exemption for the research described are given below:

This project was IRB-approved (i23-00510) and reviewed by Congress of Neurological Surgeons (CNS) leadership. Patient consent was not required as no patients were involved.

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).

Yes

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.

Yes
