Seems to make sense for maximum privacy. Put together a large enough model to answer health queries, add OCR and image recognition to read exam results, give it web access to search for medication details and, of course, gather raw data from any devices you use to measure weight, heart rate, etc.
But that’s just in theory, and we all know these things are hard to put together in practice. Have you had any experience getting anything like this working locally?


For sure, context rot is a problem, but it’s also the easiest thing to control for in this case. If sensor data matters to you, writing some code to process and reduce it to a dashboard you can read is always a good idea, regardless of whether an LLM ever enters the loop.
This gets more complicated with data you can’t easily interpret yourself, like blood test results. But maybe you just don’t summarize any of that and pass it through as-is.
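The “process and reduce” step for sensor data can be sketched in a few lines. Everything here is a placeholder assumption — the sample readings, the field names, and the choice of min/mean/max per day — stand-ins for whatever your devices actually export:

```python
from datetime import datetime
from statistics import mean

# Hypothetical raw samples: (ISO timestamp, heart rate in bpm),
# standing in for a wearable's real export format.
samples = [
    ("2024-05-01T08:00", 62),
    ("2024-05-01T12:30", 95),
    ("2024-05-01T22:15", 58),
    ("2024-05-02T07:45", 60),
    ("2024-05-02T13:00", 110),
]

def daily_summary(samples):
    """Reduce raw readings to one compact record per day."""
    by_day = {}
    for ts, bpm in samples:
        day = datetime.fromisoformat(ts).date().isoformat()
        by_day.setdefault(day, []).append(bpm)
    return {
        day: {"min": min(v), "mean": round(mean(v), 1), "max": max(v)}
        for day, v in by_day.items()
    }

# A season of raw samples collapses into a handful of lines —
# small enough to paste into a prompt without rotting the context.
for day, stats in daily_summary(samples).items():
    print(f"{day}: {stats['min']}-{stats['max']} bpm, avg {stats['mean']}")
```

The point isn’t the specific stats; it’s that a deterministic reduction like this keeps the LLM’s input tiny and auditable, and the same output doubles as your human-readable dashboard.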