MARD, short for Mean Absolute Relative Difference, is the most common way to measure how accurate a continuous glucose monitor (CGM) is. For each paired reading, it takes the absolute difference between the CGM value and a reference blood glucose measurement, expresses that difference as a percentage of the reference, and averages those percentages. In simple terms, the lower the MARD number, the closer the CGM tracks real blood sugar levels.
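To make the definition concrete, here is a minimal sketch of the calculation in Python. The function name and the example numbers are illustrative, not from any specific study or device:

```python
def mard(cgm_readings, reference_values):
    """Mean Absolute Relative Difference, as a percentage.

    Each CGM reading is paired with a reference blood glucose value
    (e.g. a lab measurement) taken at the same time.
    """
    if len(cgm_readings) != len(reference_values):
        raise ValueError("each CGM reading needs a matched reference value")
    relative_diffs = [
        abs(cgm - ref) / ref
        for cgm, ref in zip(cgm_readings, reference_values)
    ]
    return 100 * sum(relative_diffs) / len(relative_diffs)

# Example: the CGM reads 110 and 90 when the reference says 100 both times.
# Each reading is off by 10%, so the MARD is 10%.
print(mard([110, 90], [100, 100]))  # 10.0
```

A 10% MARD means that, on average, readings were 10% away from the reference value, in either direction.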
Sounds solid—but here’s the catch: there’s no universal standard for how MARD is calculated. Different studies vary in reference method (SMBG vs. lab analysis), sampling frequency, or even glucose range tested. That means comparing MARD numbers across CGMs can be misleading.
Beyond that, MARD only tells part of the story—it doesn’t distinguish between consistent bias and random error and often ignores how readings perform when glucose is spiking or dipping.
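The bias-versus-scatter point above can be shown with a toy example (the sensor values here are made up for illustration): one hypothetical sensor always reads 10% high, while another scatters randomly by 10% in both directions. Their MARD is identical, even though they behave very differently:

```python
def mard(cgm, ref):
    """Mean Absolute Relative Difference, as a percentage."""
    return 100 * sum(abs(c - r) / r for c, r in zip(cgm, ref)) / len(ref)

reference = [100, 100, 100, 100]
sensor_a = [110, 110, 110, 110]  # consistent +10% bias
sensor_b = [110, 90, 110, 90]    # random scatter of +/-10%

print(mard(sensor_a, reference))  # 10.0
print(mard(sensor_b, reference))  # 10.0 -- same MARD, very different error profile
```

A consistent bias can often be worked around; random scatter cannot, which is one reason a single MARD figure hides information users care about.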
So while MARD provides a useful snapshot of CGM performance, it’s not the full picture. Real-world users should watch for consistency across ranges and individual trends—not just marketing MARD stats.
We explore how CGM accuracy can be influenced by study design and data selection in our latest conversation with Tim Street of Diabettech. He breaks down where MARD comes from, the problems with it, and why it’s not always the best measure of accuracy. Scroll down and hit play—you’ll see why it matters.
Want more?
For the latest diabetes tech, join our free newsletter.
If you’re enjoying our content, consider joining Diabetech All Access—our premium membership with exclusive stories, Live Q&As, and industry analysis. Your support helps sustain our independent journalism and keeps this platform thriving.
Disclaimer: Diabetech content is not medical advice—it’s for educational purposes only. Always consult with a physician before making changes to your healthcare.