Embark on a healthcare revolution! Witness the unique synergy of LLMs and GraphQL reshaping how doctors access patient data. Join me to explore the innovation behind the ‘chatting with my medical database’ feature: a game-changer in data retrieval.
In the ever-evolving landscape of healthcare, doctors face an ongoing challenge: how to swiftly access vital medical information about their patients that lies buried deep within databases. Traditional methods have proven time-consuming and often fall short of providing the comprehensive answers doctors need. But what if I told you that AI, SQL, and GraphQL have walked into fertility clinics, offering a groundbreaking solution?
In my presentation, I explore the innovative use of Large Language Models (LLMs) in medical feature development. I delve into a novel approach that leverages LLMs to translate doctors’ intricate questions into SQL and GraphQL queries, enabling prompt and accurate retrieval of patient data. The result? A revolution in the way doctors access and utilize critical information to make informed decisions.
Join me at the coding table as we uncover the objectives behind crafting the “chatting with my medical database” feature. Together, we’ll unravel how LLM-based Python chains became integral to this feature and how GraphQL emerged as the superhero, leaving SQL in the dust. We will delve deep into the key development considerations that influenced our choices, encompassing security, cloud integration, flexibility to handle diverse inputs, and reliability in providing doctors with answers to their questions. Witness how concise and targeted Python code can efficiently achieve these objectives.
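To make the idea concrete before the outline, here is a minimal sketch of the kind of LLM-based chain involved: a prompt that embeds a GraphQL schema fragment and asks the model to translate a doctor’s question into a GraphQL query. The schema, field names, model, and endpoint below are illustrative assumptions of mine, not the clinic’s actual API, and the talk’s real chains may use different tooling.

```python
# Minimal sketch of one LLM-based chain: translate a doctor's question into a
# GraphQL query. The schema fragment, model name, and endpoint are illustrative
# assumptions, not the actual clinic system from the talk.
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA_FRAGMENT = """
type Patient {
  id: ID!
  age: Int
  cycles: [TreatmentCycle]
}
type TreatmentCycle {
  startDate: String
  outcome: String
}
type Query {
  patient(id: ID!): Patient
  patients(minAge: Int): [Patient]
}
"""

def question_to_graphql(question: str) -> str:
    """Ask the LLM to express a natural-language question as a GraphQL query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # deterministic output makes the chain easier to test
        messages=[
            {
                "role": "system",
                "content": (
                    "Translate the user's question into a single GraphQL query "
                    "against this schema. Return only the query.\n" + SCHEMA_FRAGMENT
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def run_query(query: str) -> dict:
    """Execute the generated query against a (hypothetical) GraphQL endpoint."""
    resp = requests.post("https://clinic.example/graphql", json={"query": query})
    resp.raise_for_status()
    return resp.json()
```

One design note this sketch hints at: because the generated query is constrained by a typed schema rather than free-form SQL over raw tables, it is easier to validate before execution, which matters for the security and reliability concerns discussed in the session.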
Session outline:
Self-introduction (40 seconds)
Main challenges doctors face when working with patients’ medical data in databases (2 minutes)
Outline of the talk (1 minute)
Goals of the “chatting with my data” feature for fertility clinics (2 minutes)
Walking with the audience, step by step, through the architecture flow of this kind of feature: what each step does, its input and output, and our main concerns at every step (7 minutes)
Sharing the solutions our team discussed for achieving the feature goals within this flow, and presenting the pros and cons of each with respect to security, flexibility of input and output, development time and cost, and the explainability and reliability of the answers returned to the doctor. In this part I share Python code with an LLM-based chain for each solution and show how it lets us seamlessly test and compare them; a sketch of that comparison harness follows this outline. The three solutions are LLM+SQL, LLM+GraphQL, and LLM+RestAPI. (9 minutes: three solutions, 3 minutes each)
Discussing why we chose the “GraphQL solution”, how we implemented it, and the additional challenges that surfaced in this part of the development (leaving some of them unsolved for now… :)) (5 minutes)
Summary (2 minutes)
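To give a flavor of the comparison code mentioned in the outline, here is a hypothetical sketch of the test harness idea: each candidate solution is a chain with the shape prompt template -> LLM -> executable request, and one loop runs the same doctor’s question through all three. Every name, prompt, and stub here is a placeholder I made up for illustration, not the production code shown in the session.

```python
# Hypothetical harness for comparing the three candidate chains on the same
# questions. Each chain shares one shape (prompt template -> LLM -> executable
# request); only the target query language changes. All names are placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Chain:
    name: str
    prompt: str  # tells the LLM which query language to emit
    execute: Callable[[str], dict]  # runs the generated query, returns data

def ask_llm(prompt: str, question: str) -> str:
    # Stub: plug in a real LLM client here (see the earlier GraphQL sketch).
    return f"generated query for: {question!r}"

def compare(chains: list[Chain], questions: list[str]) -> None:
    """Run every question through every chain and print the answers side by
    side, so reliability and flexibility can be judged on equal footing."""
    for question in questions:
        print(f"\nQ: {question}")
        for chain in chains:
            generated = ask_llm(chain.prompt, question)
            try:
                print(f"  [{chain.name}] {chain.execute(generated)}")
            except Exception as err:  # e.g. the LLM emitted an invalid query
                print(f"  [{chain.name}] failed: {err}")

chains = [
    Chain("LLM+SQL", "Emit one read-only SQL query.", execute=lambda q: {"rows": []}),
    Chain("LLM+GraphQL", "Emit one GraphQL query.", execute=lambda q: {"data": {}}),
    Chain("LLM+RestAPI", "Pick a REST endpoint and params.", execute=lambda q: {}),
]

compare(chains, ["How many patients over 40 started a treatment cycle this year?"])
```

Swapping the stubbed execute functions for real database, GraphQL, and REST calls is what would let one harness score all three solutions against the criteria listed in the outline.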