GenAI & Healthcare APIs: Securing LLM Access to Sensitive Medical Records
DOI: https://doi.org/10.15680/IJCTECE.2026.0901007

Keywords: Generative AI, Large Language Models, Healthcare APIs, HL7, FHIR, Interoperability, Patient Data Privacy, Prompt Injection, Electronic Health Records, AI Security, Zero-Trust Architecture

Abstract
The rapid adoption of generative artificial intelligence (GenAI) in healthcare has introduced transformative opportunities for patient engagement, clinical decision support, and administrative efficiency. Large language models (LLMs), when integrated with electronic health records (EHRs) and ancillary systems via secure healthcare APIs, can enable advanced conversational interfaces for patients, clinicians, and researchers. However, these integrations pose critical challenges related to data privacy, interoperability, and security. Sensitive patient records governed by HIPAA, GDPR, and other regulatory frameworks must be protected from unauthorized disclosure, particularly in scenarios where LLMs risk overexposing data or being manipulated through prompt injection attacks. This research explores a layered architecture for securing LLM access to healthcare APIs, focusing on three core areas: (1) limiting LLM data visibility through controlled API responses, (2) implementing robust defenses against prompt injection and adversarial queries, and (3) ensuring interoperability across heterogeneous systems via HL7–FHIR transformations. The proposed framework emphasizes zero-trust access models, de-identification techniques, and standardized data governance, and highlights practical use cases such as AI-enabled patient portals and clinical chatbots. Through conceptual modeling, threat analysis, and system design, this paper outlines best practices for balancing usability, interoperability, and compliance in GenAI-driven healthcare ecosystems.

