By 2030, we're going to have access to cheap, accurate, next-level medical diagnostics and better treatment outcomes.
The biggest obstacles at the moment, I think, are the costs of hypercomputation and medical supplies. Consider the impressive trajectory of machine learning - accurate detection of cancer, dementia, heart failure. A decade's worth of growth in machine learning is going to feel like a century's worth.
ML models can detect depression and anxiety in your voice. I'm sure it's already possible to build an app that could coach you through depression by persuading you to eat certain foods and exercise at certain key times. It could consciously or unconsciously guide your moods down a certain path. Or if you want to try something a bit less morbid than the onset of depression, it could keep you from hanger. Personally, whenever I become hangry it's too late - nobody wants to be around me and I end up racking up apology debt.
These detection applications can stack. With cheap hypercomputation and a proliferation of specialized models, you could get a holistic (alternative science people have really killed this word) gestalt view of your health. A clear, up-to-date overview (and list of recommendations) would make managing and maintaining your own health so much easier.
Now take into consideration the efficiency and affordability of modern international shipping and logistics. A global open source medical hardware market is not an insane idea, and even a single skilled maker per town would have enough economic incentive to assemble and sell cheap diagnostic equipment (EEG, ECG, pin-prick blood test readers, etc.). And even in strictly regulated markets, it's inevitable that we'll successfully create a range of non-vaporware, Theranos-esque tools.
If (when) hypercomputation becomes as cheap as a data plan, running these models on inexpensive, next-generation devices will be a game changer. Your confidence in GP visits would also soar, because your doctor would have instant access to millions of second opinions, provided future ML architectures have sufficient interoperability.
A million different ML models trained using independent datasets and a variety of approaches could be managed by a meta-learning model that forces the various specialized models to compete for an outcome. The winning theory (or theories) might be presented to the GP after some number crunching.
GPT-8 might even be able to summarize complex interpretations for the doctor. Or you could imagine general physicians being phased out entirely if the technology becomes good enough to fulfill the GP role. Even if our models remain black boxes, an AI doctor could still feasibly provide a high-probability diagnosis backed by a near-unanimous internal quorum (across the various models that independently analyzed the input data). A system that's accurate only 99% of the time - and defers to humans whenever its confidence falls below 0.999, or whatever threshold we choose - would still be orders of magnitude more efficient than our world today.
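The quorum idea above is simple enough to sketch. Here's a minimal, hypothetical illustration: many independently trained models each emit a diagnosis label, a meta-layer tallies the votes, and the system only presents a verdict when agreement clears a threshold, otherwise deferring to a human. The label names, the 0.999 threshold, and plain majority voting are all illustrative assumptions, not a real system.

```python
from collections import Counter

# Illustrative threshold from the text: below this agreement level,
# the system hands the case to a human doctor.
DEFER_THRESHOLD = 0.999

def quorum_diagnosis(predictions, threshold=DEFER_THRESHOLD):
    """predictions: one diagnosis label per specialized model.

    Returns (diagnosis, agreement) when the quorum is near-unanimous,
    or ("defer_to_human", agreement) otherwise.
    """
    votes = Counter(predictions)
    diagnosis, count = votes.most_common(1)[0]  # the "winning theory"
    agreement = count / len(predictions)
    if agreement >= threshold:
        return diagnosis, agreement
    return "defer_to_human", agreement

# 1,000 models that almost all agree -> the system commits:
print(quorum_diagnosis(["anemia"] * 999 + ["thyroid"]))
# A split verdict -> the human GP takes over:
print(quorum_diagnosis(["anemia"] * 600 + ["thyroid"] * 400))
```

A real meta-learner would weight models by track record rather than counting raw votes, but the deferral logic - commit only on near-unanimity, escalate everything else - is the part that makes the 99%-accurate system safe to deploy.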
You could argue that we're most of the way there already - universal healthcare with existing diagnostic tools isn't an unreasonable idea. If we're lucky, we just need to wait for the market to force down the costs of hypercomputation and medical hardware, just as it did for solar panels, and economic incentives will do the rest. If we get cosmologically lucky, the people who understand modernity and run countries will roll out smarter policies, because they understand that a healthy society is cheaper than a sick one. Either way, I think we'll be inconceivably closer to smart, universal healthcare in 2035 than we are today.