Love the idea! This could probably be improved by citing real evidence and not just ~vibes~.
E.g. for "Medicaid Work Requirements", the 'Patients' section said "Some patients find employment and coverage stability, but many remain uninsured with delayed care needs."
This just ignores the evidence from states like Arkansas that implemented Medicaid work requirements: 90%+ of the people who lost coverage actually qualified, but were overburdened by the paperwork or didn't realize they needed attestations. Unemployment in Arkansas actually went up in that population, as some people with jobs got cut from Medicaid, couldn't get medical care, and then lost their jobs because of illness.
First, thanks for reading and sharing the feedback! That input is super valuable for me to make this into a good tool - and not just an illustrative concept.
I think you're right about real evidence - this is definitely an LLM-driven approach (aka internet vibes) built on my categorizations (aka my vibes). But I'm also balancing against a degree of subjectivity that I think will get baked into the modeling regardless of the data quality.
Second, Medicaid Work Requirements is a topic near and dear to my heart right now... I've been working with a Medicaid managed care plan on some of those upcoming policy changes. You are spot on - I found that to be a limitation as well. The data shows a lot of disenrollment and churn, but the LLM seems to prioritize things that may be separate from health outcomes. Maybe some model tuning is something I can do, but for the most part this thing was relatively vibe-coded, so I'll have to think about the knobs and levers to twist and pull.
Yes, love it! Something that would probably work well in this architecture is having an underlying rubric for each category, e.g. LLM instructions along the lines of [very handwavey/non-scientific, just for illustrative purposes]:
"For 'Patients', evaluate the following facets and provide each a rating based on the provided scale. When possible, try to evaluate evidence based on published data:
- Acute health outcomes: 1-5
- Chronic health outcomes: 1-5
- Financial costs for patients: 1-3
- Autonomy and dignity: 1-2
Sum up the total ratings and provide a rating where:
1-5 = Bad for patients
6-9 = Neutral
10-15 = Good for patients"
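The deterministic part of that rubric (summing facet ratings and mapping the total to a label) could be sketched roughly like this. The facet names, ranges, and thresholds are just the illustrative ones from the rubric above, not anything from the actual tool:

```python
# Sketch of the rubric scoring described above: each facet gets a rating
# within its allowed range, ratings are summed, and the total maps to a
# label. Facets and thresholds are the illustrative values from the rubric.

FACETS = {
    "acute_health_outcomes": (1, 5),
    "chronic_health_outcomes": (1, 5),
    "financial_costs": (1, 3),
    "autonomy_and_dignity": (1, 2),
}

def score_patients(ratings: dict) -> str:
    total = 0
    for facet, (lo, hi) in FACETS.items():
        value = ratings[facet]
        if not lo <= value <= hi:
            raise ValueError(f"{facet} must be between {lo} and {hi}")
        total += value
    if total <= 5:
        return "Bad for patients"
    if total <= 9:
        return "Neutral"
    return "Good for patients"
```

One nice property of this split is that the LLM only has to produce the per-facet ratings; the arithmetic and bucketing stay in plain code, which keeps that part fully deterministic.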
Again, very hand-wavey in content, but I've found in my LLM experimentation that this type of framing can improve reliability and induce more-deterministic outputs. I've also dealt with context window problems when trying to do too much in one call, which for me were solved by using multiple LLM calls for different pieces (e.g. in this use case, one call for Patients, one for Gov't, etc.).
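That one-call-per-stakeholder decomposition might look something like the sketch below. The stakeholder list and prompt template are made up for illustration, and `call_llm` is a placeholder for whatever model client the project actually uses:

```python
# Sketch of splitting one big evaluation into one small LLM call per
# stakeholder, so each call's context stays short. The stakeholder list
# and prompt template are hypothetical; call_llm stands in for a real
# API client.

STAKEHOLDERS = ["Patients", "Government", "Providers", "Payers"]

PROMPT_TEMPLATE = (
    "For '{stakeholder}', evaluate the policy below against the rubric "
    "and return a rating.\n\nPolicy: {policy}"
)

def evaluate_policy(policy: str, call_llm) -> dict:
    """Run one small LLM call per stakeholder instead of one giant call."""
    results = {}
    for stakeholder in STAKEHOLDERS:
        prompt = PROMPT_TEMPLATE.format(stakeholder=stakeholder, policy=policy)
        results[stakeholder] = call_llm(prompt)
    return results

# Usage with a stub in place of a real model client:
fake_llm = lambda prompt: "Neutral"
ratings = evaluate_policy("Medicaid work requirements", fake_llm)
```

Because each call is independent, they could also run in parallel, and a failure in one stakeholder's call wouldn't sink the whole evaluation.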
Don't know why I was so late to reply to this - I really appreciate the thoughtful design. I've been a bit busy, but starting to put more heads-down time onto this project.
Still vibe coding on that, but I'm also trying to set aside time for real development of this tool so it can be more practical for enterprise and policy uses. I'm wireframing the model for a possible database back-end for scenarios, real-world policies, and other weights so that it will be more deterministic and less "vibes". I'll probably put together some real code for that.
I think the weighting will be critical - as will some predetermined measurements of what incentives are driving stakeholders. The difference from what something like McKinsey or BCG could put out is the element I'm trying to frame up around competing incentives (mainly thinking about societal good on top of financial good).
My vision is to have a tool like this that institutions can use to see how they should respond to upcoming policy, strategy, and technology changes - so they take action that isn't just beneficial to them, but really puts the whole stakeholder ecosystem in front of them.
Again, really appreciate the thoughtful comment, and if you're interested, would welcome your input on any design elements!
I'm a surgeon and a visual learner, so I greatly appreciate the graphic model of the seesaw. But I believe there's one huge ethereal elephant missing from the room--Cost. Actual cost.
Years ago, I attempted a cost analysis study of one tiny facet of healthcare: the cost of taking care of babies with congenital hip deformities. Thousands of data points and three different health systems later, I learned that no one, not in billing, accounting, or any other department, could quote the cost of even the simplest intervention (e.g., a hip X-ray, or a day in the hospital).
Using the same data in three different healthcare systems (private, university, and charity hospitals), information on "cost" boiled down to answers like "This is what we charge," "This is our budget from last year. We just add 5%," and "Gosh, that's a good question." Price was the only quotable quote.
Until we look (maybe outside the U.S.) for a rough approximation of true cost in healthcare delivery, I don't think the models will serve us accurately. The prices we charge are based not on market forces (as you alluded to) but on an odd, third-party, risk-pooled orgy of consumption that leaves our patients at best woozy and at worst moribund.
This is a fantastic post!
Thanks for the feedback - I’m really glad you like it!