🎙️ Can AI Catch Cancer?
The Future of Cancer Diagnosis: Digital Pathology and AI
This episode of Inference is dedicated to Breast Cancer Awareness Month. I’m talking with Akash Parvatikar – AI scientist and product leader in digital pathology and computational biology. He leads PathologyMap™, HistoWiz’s digital pathology platform, which turns whole-slide images into searchable, analyzable data with AI tools – streamlining research and accelerating insights for cancer and precision medicine.
Subscribe to our YouTube channel, or listen to the interview on Spotify / Apple
Digital pathology is a very new field, but an important one, considering that the US is facing a large shortage of pathologists.
In this episode, we discuss:
What “digital pathology” actually is – and why scanning glass slides changes everything
Where AI already helps today and where it’s still just a very promising technology
Why explainability, failure modes, and data standards decide clinical adoption
What the real bottleneck is for using AI in pathology and diagnosis
How agentic workflows might enter the lab in pieces first
A practical timeline for digitization, FDA-type approvals, and hospital rollouts
The human role that stays
Big idea: Digitize first. Validate carefully. Then scale tools that clinicians trust. Telepathology expands access. Good AI here speaks the pathologist’s language. Remember – AI that can’t explain itself in clinical terms won’t ship.
This is a free edition. Upgrade if you want to receive our deep dives directly in your inbox. If you want to support us without getting a subscription – do it here.
This transcript was edited with GPT-5. Let me know what you think. And – it’s always better to watch the full video ⬇️
Ksenia Se:
Hi Akash, thank you for joining me today. This is a very special episode because we’re airing it in October in dedication to Breast Cancer Awareness Month. Let’s start with the big picture: when will AI meaningfully change the trajectory of cancer diagnostics?
Akash Parvatikar:
I believe right now we’re in the stage of digitizing pathology labs. Even today, clinicians diagnose cancer under a microscope. But there’s now a strong push to digitize the massive volume of glass slides, and that will open the door for intelligent tools like AI to help detect cancer features.
So we’re in a transition phase, moving from physical to digital, but I think of it as running on two parallel roads: one road is digitization, the other is building AI tools to assist clinicians in diagnosis. Very soon – maybe in the next couple of years – these roads will converge. You’ll have glass slides fully digitized and cutting-edge AI identifying features that might otherwise take much longer to find, or that might have been misdiagnosed in the early stages.

Prompt to Kontext.Flux: A massive slide-image of a cell printed on two tennis courts
Ksenia:
If you think in terms of patient outcomes – early detection, better precision – what breakthroughs are real right now, and what’s still aspirational?
Akash:
Right now the focus is adoption by clinicians, who are the domain experts in cancer diagnosis. AI is being built on the same literature doctors have relied on for years. Some features take a long time to identify, others are often missed. So at this stage, AI is mainly a diagnostic assistant – speeding up the process and catching the features doctors look for every day.
At the same time, there are moonshot projects using LLMs and deep learning to explore features that doctors might not have paid much attention to in the past, but which could be important prognostic markers. Those are long-term goals. For quick adoption, the priority is keeping it simple – building AI tools that fit seamlessly into a doctor’s daily practice.
Ksenia:
And when you say doctors have been using AI for some time, how long have you observed that, and what has changed from the early days to now?
Akash:
Some major hospital systems in the US – Mayo Clinic, Mount Sinai, NYU, MSKCC – were early adopters. Initially they used AI on digital images to look at high-level features like tumor, necrosis, and stroma. Now, because algorithms have matured, they’re also applying AI to low-level features: detecting specific cells, identifying cell types, and analyzing spatial heterogeneity between them.
The questions are becoming more complex. For example, can AI map the spatial tumor heterogeneity within a tissue sample? What prognostic markers can we extract from that? That’s a big step beyond the earlier use cases.
The challenge now is taking the tools some top hospitals already use and making them accessible to more hospitals – in the US and globally – at a lower cost.
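To make “spatial heterogeneity” a bit more concrete, here is a minimal sketch of one way to score how intermixed two detected cell populations are, using nearest-neighbor distances. The coordinates, cell types, and metric are all illustrative assumptions – this is not any hospital’s method:

```python
# Sketch: a simple spatial-mixing score between two detected cell types.
# Assumption: a detector has already produced (x, y) centroids per cell.
import numpy as np
from scipy.spatial import cKDTree

def mixing_score(tumor_xy: np.ndarray, immune_xy: np.ndarray) -> float:
    """Mean distance from each tumor cell to its nearest immune cell (µm).
    Lower values suggest immune cells infiltrate the tumor region."""
    tree = cKDTree(immune_xy)
    dists, _ = tree.query(tumor_xy, k=1)
    return float(dists.mean())

rng = np.random.default_rng(0)
tumor = rng.uniform(0, 1000, size=(200, 2))   # toy centroids in µm
immune = rng.uniform(0, 1000, size=(80, 2))
print(f"mean tumor→nearest-immune distance: {mixing_score(tumor, immune):.1f} µm")
```

Real pipelines pair cell detectors with many such spatial statistics; this only shows the shape of the computation.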
Ksenia:
You’re an AI scientist, so you understand the machinery of AI pathology. This isn’t just one model predicting cancer or not – it’s slides, images, research, and large volumes of diagnoses, truly multimodal inputs. What’s hardest about working with whole slide images? Can you go deep into the machinery – how AI works there specifically?
Akash:
Sure, that’s a very important question. When we deal with whole-slide images, one image can be anywhere from a few hundred megabytes to several gigabytes. To give you a sense of scale: if you print one image at 40x resolution, it would cover one to two tennis courts. That’s the volume we’re talking about – very different from typical computer vision tasks like cat vs. dog or scene detection, which are much lower resolution.
Doctors usually work at 40x magnification, so the detail is extremely high. From a computer vision perspective, it becomes a needle-in-a-haystack problem. On top of that, there are memory constraints. You can’t just load a multi-gigabyte image onto a server and run AI at scale without careful optimization.
The standard approach is tile or patch-based processing. You chop the image into many smaller patches, run AI on those patches, then group the outputs back together. That’s routine in the field. But beyond the technical side, it’s crucial to understand the data. You might be a great computer vision scientist, but pathology is different – you need to know which features matter and how they differ across categories.
That means doing a literature review, but also sitting with pathologists. At the end of the day, you’re building AI for them. They don’t need to code, but their expertise must guide the process from the beginning.
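For scale: at 40× (roughly 0.25 µm per pixel), a 20 mm × 15 mm tissue section is on the order of 80,000 × 60,000 pixels – several gigapixels, or over 10 GB uncompressed – which is why nobody loads the whole image at once. Below is a minimal sketch of the tile-and-aggregate pattern Akash describes, assuming the OpenSlide library; `score_patch` is a placeholder, not a real model:

```python
# Sketch of patch-based whole-slide image processing.
# Assumptions: the slide is readable with OpenSlide, and score_patch
# stands in for whatever model you actually run.
import numpy as np
import openslide

TILE = 512  # patch edge length in pixels at full resolution

def score_patch(patch: np.ndarray) -> float:
    """Placeholder for a real model; returns a tumor-probability-like score."""
    return float(patch.mean()) / 255.0  # stand-in heuristic, not a real model

def scan_slide(path: str):
    slide = openslide.OpenSlide(path)
    width, height = slide.dimensions  # level-0 (highest magnification) size
    scores = []
    for y in range(0, height - TILE + 1, TILE):
        for x in range(0, width - TILE + 1, TILE):
            # read_region returns an RGBA PIL image at the requested level
            tile = slide.read_region((x, y), 0, (TILE, TILE)).convert("RGB")
            arr = np.asarray(tile)
            # Skip mostly-white background tiles so compute goes to tissue
            if arr.mean() > 240:
                continue
            scores.append(((x, y), score_patch(arr)))
    # "Group the outputs back together": here, just the top-scoring regions
    return sorted(scores, key=lambda s: s[1], reverse=True)[:10]
```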
Ksenia:
That’s interesting because what you just described is computer vision, but now we also have generative AI. How does that play out?
Akash:
Right now, the adoption of generative AI for diagnostics is very low. The main reason is that AI still can’t reliably recognize its own failure modes. It doesn’t know when it’s giving a false positive or false negative – especially with large LLM-based systems.
One promising direction is few-shot learning on small, well-curated datasets that capture the diversity of cases. If you build AI this way, it can sometimes indicate where it performs well and where it struggles, based on the data. That was the core of my PhD research – an explainable AI framework that could not only show performance but also highlight failure points.
We’re taking baby steps toward adoption. You can’t introduce a GenAI model and say, “this replaces the pathologist.” Doctors won’t accept it, and neither will patients. For now, it has to evolve linearly, step by step, as an assistive tool.
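As a toy illustration of “knowing when not to answer” – not the explainable-AI framework from Akash’s PhD – a classifier can abstain below a confidence threshold and defer to the pathologist. The categories and threshold here are illustrative:

```python
# Sketch: a generic abstention pattern so the model reports where it
# struggles instead of always answering.
import numpy as np

def predict_with_abstention(probs: np.ndarray, classes: list[str],
                            threshold: float = 0.85):
    """probs: (n_samples, n_classes) softmax outputs.
    Returns a label when the model is confident, 'defer to pathologist'
    otherwise — a crude stand-in for knowing one's failure modes."""
    out = []
    for p in probs:
        i = int(np.argmax(p))
        out.append(classes[i] if p[i] >= threshold else "defer to pathologist")
    return out

classes = ["benign", "atypia", "DCIS"]          # illustrative categories
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25]])          # toy softmax outputs
print(predict_with_abstention(probs, classes))  # ['benign', 'defer to pathologist']
```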
Ksenia:
Let’s talk about agents. Would you say agentic workflows are already being implemented in medical practice, or is that still too far away?
Akash:
It’s still too far for pathology overall, but there are pieces of the workflow where agentic approaches can help. For example, after a biopsy you process the slide and digitize it. Then you need quality control – making sure the image is high quality, without pen marks or blood, and suitable for AI analysis. That’s an area where you can bring in an agentic workflow to standardize quality across an entire cohort.
At HistoWiz, we’re exploring exactly that – creating agent workflows to standardize pathology data. But we have to think in chunks, not the whole diagnosis pipeline. Full diagnostic agents are still far away.
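Here is a hedged sketch of what one automated QC step might look like on a slide thumbnail – a focus check plus a crude pen-ink heuristic. The thresholds are illustrative guesses, not HistoWiz’s pipeline:

```python
# Sketch: two rule-based QC checks an agent step might run on a slide
# thumbnail before AI analysis.
import cv2
import numpy as np

def qc_checks(thumbnail_bgr: np.ndarray) -> dict:
    gray = cv2.cvtColor(thumbnail_bgr, cv2.COLOR_BGR2GRAY)
    # Focus check: variance of the Laplacian is a common sharpness measure;
    # very low values suggest a blurry scan.
    focus = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Pen-mark check: H&E tissue is pink/purple, so a large fraction of
    # strongly green/blue pixels hints at marker ink (crude heuristic).
    hsv = cv2.cvtColor(thumbnail_bgr, cv2.COLOR_BGR2HSV)
    hue, sat = hsv[..., 0], hsv[..., 1]
    ink = ((hue > 35) & (hue < 130) & (sat > 80)).mean()
    return {"in_focus": focus > 100.0, "pen_marks_suspected": ink > 0.02}
```

An agent workflow would chain checks like these across a cohort and route failing slides back for rescanning.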
Ksenia:
That’s very interesting. From what I hear, the bottlenecks are explainability and what we might call hallucinations or false replies. Is that all, or are there other bottlenecks you see? And when will we deal with them?
Akash:
The biggest bottleneck I’ve seen, both in my PhD and in practice, is the lack of high-quality annotated datasets in pathology. First, patients have to consent for their samples to be used in research. Whether it’s breast, colon, liver, or any other organ, getting enough data is hard.
Second, there’s a high degree of disagreement among domain experts. In my PhD work, we had three expert pathologists annotate pre-invasive breast samples, and we used consensus or majority voting. Even then, the “ground truth” wasn’t perfect.
A sample diagnosed one way at NYU might be read differently at Stanford, or in Europe, or in India. Especially in pre-invasive cases, the confusion is highest. That means the ground truth itself may not be reliable. The field is still struggling to standardize diagnoses and annotations – and on top of that, there are data biases depending on which hospital system you trained with. Once trained, how well does the model adapt to patients outside that system or outside the US? That’s the challenge.
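For the three-annotator setup Akash describes, consensus building can be as simple as majority voting, with full-disagreement cases sent back for review. A minimal sketch with toy labels:

```python
# Sketch: consensus labels and a quick agreement check for a
# three-pathologist annotation setup. Labels are toy data.
from collections import Counter

annotations = [  # one row per sample: labels from three pathologists
    ["benign", "benign", "atypia"],
    ["DCIS",   "DCIS",   "DCIS"],
    ["atypia", "DCIS",   "benign"],   # full disagreement: no majority
]

def majority_vote(labels):
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None  # None = send back for review

consensus = [majority_vote(row) for row in annotations]
unanimous = sum(len(set(row)) == 1 for row in annotations) / len(annotations)
print(consensus)                                # ['benign', 'DCIS', None]
print(f"unanimous agreement: {unanimous:.0%}")  # 33%
```

Even with voting, as Akash notes, the resulting “ground truth” inherits whatever reading conventions those particular annotators were trained on.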
Ksenia:
So it can be that specific – and it can’t just be extracted into one universal truth?
Akash:
Exactly. If an AI model is trained on NYU data, it reflects how NYU trains its pathologists to read features. But that’s not a global standard. Take the same image to Stanford, you might get a different category. In pre-invasive cases, that’s where models can fail completely, because the “ground truth” is so variable. That’s the biggest problem right now.
Ksenia:
That’s very tricky. You’re building PathologyMap – is that something you’re addressing with the platform?
Akash:
Yes. At HistoWiz we’re building an image management platform that lets users manage, access, and analyze high-quality images. On top of that, we’ve built an AI orchestration layer. It lets users run apps from multiple companies on the same slide – for example, four different breast analysis tools at once – and see how they perform.
The idea is to put decision-making in the hands of the user. That’s why we made it open and non-exclusive, able to ingest multiple AI apps or even custom annotations created in Python, MATLAB, or R. We’re one of the first platforms globally to enable that.
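Custom annotations from Python often travel as GeoJSON, a common interchange format in digital pathology tools. Whether PathologyMap ingests exactly this format isn’t stated here – treat the sketch as illustrative:

```python
# Sketch: writing a polygon annotation from Python as GeoJSON.
import json

annotation = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        # Pixel coordinates at level 0; the ring repeats its first point.
        "coordinates": [[[1000, 1000], [4000, 1000], [4000, 3000],
                         [1000, 3000], [1000, 1000]]],
    },
    "properties": {"label": "tumor", "annotator": "model-v1"},
}

with open("annotation.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": [annotation]}, f)
```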
Ksenia:
So is this more for medical specialists, or for everyone?
Akash:
It’s mostly for preclinical and clinical pathology research. By preclinical, I mean drug discovery companies running clinical trials, testing how drugs work on human or mouse data. On the clinical side, it could be a hospital wanting to digitize a biopsy done years ago. Physical slides are cumbersome to access, but with a digital platform they’re easy to upload, share, and compare with clinical notes. That’s what PathologyMap is designed for.
Ksenia:
And what about regulation? How is that treating you, given how sensitive this field is?
Akash:
At HistoWiz, we’re GLP compliant. We focus on preclinical research, so we don’t run AI directly for live patient diagnostics – more for historical data and animal models. Regulations are a bit more relaxed there.
But for the field as a whole, very few AI tools have FDA approval. And when approval is granted, it’s usually for a very narrow task – like flagging important regions in an image – not for a full diagnosis like prostate cancer detection. That’s still a big hurdle. The industry is struggling to get approval for tools that would make the most practical sense in diagnostics.
Ksenia:
Let’s return to where we started – Breast Cancer Awareness Month. I want to ground this technical discussion in the human side. Have you seen moments where AI has already made a difference for a patient or a pathologist?
Akash:
Well, yes – to an extent – but most of it is still in the pilot phase. Digital pathology itself is very new. It only became possible in 2017 when the FDA approved digital scanners for primary diagnosis. Before that, radiology had already been digitized and AI tools were being used there, but pathology was still analog.
COVID gave the field a big push. Doctors didn’t want to come into hospitals for diagnosis, so more labs started digitizing. The digitization part is moving fast, but for diagnosis in real-world settings, we’re still some distance away. Hospitals have to ask: will patients accept being diagnosed in part by AI? Will doctors trust it enough to deploy it broadly? Those are open questions.
That said, AI is already helping with workflow optimization. The US faces a significant shortage of pathologists, and that gap will only widen. AI is helping existing pathologists streamline their work – whether it’s quality control or low-level features like mitosis or lymphocyte counts. AI does a very good job identifying and quantifying those. Pathologists use that information as part of their knowledge to make a diagnosis. In that way, yes, we are seeing real change.
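To give a flavor of “identifying and quantifying” low-level features, here is a toy nucleus counter built from classical image operations. Production tools use trained detectors; this only shows the shape of the task:

```python
# Sketch: counting dark nuclei in a grayscale patch with classical
# image ops — threshold, label connected components, filter by size.
import numpy as np
from skimage import filters, measure

def count_nuclei(gray: np.ndarray, min_area: int = 30) -> int:
    """gray: 2D array where nuclei are darker than background."""
    thresh = filters.threshold_otsu(gray)
    mask = gray < thresh              # nuclei are dark → below threshold
    labeled = measure.label(mask)     # connected components
    regions = measure.regionprops(labeled)
    return sum(1 for r in regions if r.area >= min_area)  # drop speckle
```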
Ksenia:
We should probably have started with this, but can you give me a one-sentence description of what digital pathology actually is, so everyone understands it?
Akash:
Absolutely. Digital pathology is about digitizing pathology glass slides, which were always viewed under a microscope for primary diagnosis. With digital pathology, high-throughput scanners capture those slides at very high resolution and display them on a screen. That opens up image management tools, software integrations, and AI applications.
You can compare it to radiology, which went digital over a decade ago. In pathology today, the workflow is: you do a biopsy, put it on a glass slide, send it to histology processing, and then a pathologist examines it under a microscope. Now imagine adding one step – scanning the slide. Suddenly the doctor can view it digitally at very high resolution.
That brings major advantages. It’s easy to access, easy to share. A patient in rural India or rural America can have their slides reviewed by an expert pathologist sitting in New York or California. These are the reasons digitization is growing so quickly.
Ksenia:
Can you give me your timeline for the next milestones? I know it’s predictive, but you’re deep in the field – you might know something.
Akash:
Yes. To give a sense of scale, Mayo Clinic has digitized 12 million slides. NYU is digitizing all of its slides. So the first milestone is simply digitization. Think of it like music: before Spotify we had cassettes and CDs; now it’s all digital. Or books: when Google Books scanned entire libraries, that enabled tools like Kindle.
Right now, pathology is at the digitization stage. In the next two to three years, I expect more FDA approvals – not necessarily for full diagnosis, but for diagnostic assistance. A pathologist will still be the expert, but they’ll use intelligent tools to avoid bottlenecks.
In two to three years, we’ll see more cases diagnosed by pathologists with the help of AI, better access to high-quality expertise around the world, and higher accuracy within each case because of these tools.
Ksenia:
Do you think we’ll eventually have a Spotify for precision medicine?
Akash:
Absolutely – yes, that’s the goal.
Ksenia:
Do you know if anyone’s building it?
Akash:
Not directly, but there are interesting initiatives. Tempus AI, for example, is building multimodal datasets and ingestion tools – combining pathology reports, radiology reports, blood samples, and genomic analysis. The idea is to profile the patient holistically across all these modalities and drive precision medicine from that. They’re doing fantastic work in that space.
Ksenia:
If we extend your timeline a little and imagine the ideal situation, what do you see as the human role in this workflow with AI?
Akash:
I think the human role will stay the same. I often compare it to Tesla. Even with advanced features, you’re still in the driver’s seat – we’re far from fully self-driving. Pathologists will remain in the driver’s seat, but they’ll have tools that prevent accidents.
AI will detect features they might miss, or retrieve information from decades of slides and reports in seconds. For example, if a patient had a biopsy seven years ago, you could instantly compare notes and images digitally. Doing that with physical slides would take enormous time. The human role won’t disappear – it will be the same, but supported with far more powerful tools.
Ksenia:
And the economics – is the development of this area supported by the numbers?
Akash:
Yes, and it’s an active conversation. In the US, hospitals are required to keep physical slides for up to 25 years. That’s millions of slides – the real estate costs alone are huge compared to digital storage in the cloud or in facilities like Iron Mountain. Digitization is far more economical long term.
There’s also productivity. A pathologist might sign out 30 cases a day with physical slides. With digital, they might do 50. That helps compensate for the shortage of pathologists and allows more patients to be diagnosed. So the economics are promising, but the conversations with healthcare systems are ongoing.
Ksenia:
What concerns you most – and what excites you most – about the world you’re helping to build with AI in pathology?
Akash:
What excites me most is telepathology. Patients outside major cities, where specialist doctors are concentrated, can now access high-quality care. A slide can be digitized in a rural hospital and reviewed by an expert in New York or California. Even for second opinions, it changes everything. What used to take days can now happen in minutes. It’s a communication platform for pathology data and experts – that’s very exciting.
What concerns me are AI models built without domain knowledge. Too often I see models claiming “explainability” with heatmaps or grad-CAMs, but they don’t speak the language of pathologists. If a model can’t tell you what features it detected, in terms that make sense to a doctor, it’s not explainable. It might get you a paper or a patent, but it won’t get clinical approval. Unless you start small, use features pathologists understand, and build together with domain experts, it won’t be adopted. That worries me.
Ksenia:
That’s a very important point. Thank you. My last question: is there a book that shaped your approach to AI, medicine, or life?
Akash:
Since we’ve been talking technical, I’ll choose the “blue book” of pathology – the WHO Classification of Tumours. There’s a volume for each organ – for breast alone, there are 12 to 15 volumes. Each lays out the detailed guidelines for diagnosis. I followed those books throughout my PhD, and I think anyone who wants to get into pathology should start there. They help you really understand the data. It’s an understated but foundational resource that shaped me early on.
Ksenia:
Thank you so much, Akash. That was both deeply technical and deeply human – because you kept bringing it back to how doctors and pathologists actually work with the technology.
Akash:
Thank you, Ksenia. It was very nice talking with you.