Blog

  • QC and the DLCO Simulator

    I’ll start by saying that I am not associated with Hans Rudolph Incorporated in any way. I just think they make a bunch of good products, and more specifically in this case, a DLCO simulator.

    For as long as I can remember I have always had intermittent problems with DLCO testing and I suspect that every PFT Lab has had them at some time. One of my current concerns has to do with how our test system’s software is calculating exhaled CO and CH4 concentrations from calibration data, but I’ve had other concerns at other times too. One way to verify that your test system is actually performing the way it should is biological QC, but a more precise way is to use a DLCO simulator.

    Every PFT Lab should be performing biological QC (self-testing) regularly (and if you aren’t, why aren’t you?). This is the simplest way to check a test system but it does have limited accuracy. My personal experience for DLCO tests is that there was usually a range of about 1.5 ml/min/mmHg (roughly 5%) within the results from a single testing session and 2.5 ml/min/mmHg within session-averaged results over the course of a year. This is good enough to get a general sense of how well a test system is working but not precise enough to pinpoint specific problems. Used correctly, a DLCO simulator can produce highly reproducible and accurate test results.

    The Hans Rudolph 5560 DLCO Simulator was developed and patented by Robert Crapo and Robert Jensen (US Patent 6,415,642) and is conceptually quite simple. Two calibration syringes are connected through a Y-valve. A 5-liter syringe is used to simulate the patient inhalation portion of the DLCO maneuver and can be preset for inhalations from 2.0 to 5.0 liters. The other syringe, filled with a gas mixture with known CO and CH4 concentrations, is used to simulate the patient exhalation.

    DLCO Sim Graphic

    Using the simulator does require a bit of practice and manual dexterity. Three different gas mixtures, representing high, medium and low DLCO results, are available from Hans Rudolph for the simulator. Assuming the inhalation syringe volume and the simulated exhaled gas mixture are the same, the biggest difference from test to test in calculated DLCO will likely be due to differences in “breath-holding” time. The DLCO test can be re-calculated manually with a standard 10 second breath-holding time, but strictly speaking reproducibility can be assessed just as well by looking at VA and the “exhaled” CO and CH4 concentrations.

    When looked at over time, the DLCO, VA and “exhaled” CO and CH4 concentrations measured with the simulator can be affected whenever the DLCO test gas is changed. A test system’s gas analyzer is calibrated with the assumption that the test gas mixture is 0.3% CO and that the insoluble component (CH4, He, Ne) is at its standard concentration. Test gas mixtures always vary from the standard concentration by a greater or lesser amount and this means that the gas analyzer gain will differ from one tank of test gas mixture to another. Although these changes will not tend to affect patient test results (if you look carefully at the DLCO equation you will see it depends more on ratios than on absolute concentrations) any changes in analyzer gain will also affect the measured “exhaled” gas mixtures. Even so, the measured values from the DLCO simulator will likely remain far more stable and more reproducible over time than biological QC results.
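    To see why a gain error cancels for patient DLCO calculations but still shifts the simulator’s absolute readings, here is a minimal sketch. The linear analyzer model and all of the numbers are my own assumptions for illustration, not anything from a manufacturer:

```python
# Toy linear-analyzer model (all numbers hypothetical): the software calibrates
# assuming a standard 0.30% CO tank, but the tank actually differs.

def analyzer_gain(assumed_fraction, actual_fraction):
    """Gain the software ends up with when it calibrates assuming a
    standard tank concentration that the tank doesn't quite match."""
    return assumed_fraction / actual_fraction

gain = analyzer_gain(0.0030, 0.0028)        # tank is really 0.28% CO

true_inspired, true_exhaled = 0.0028, 0.0014
measured_inspired = gain * true_inspired    # what the software records
measured_exhaled = gain * true_exhaled

# Patient DLCO depends on the exhaled/inspired ratio, so the gain error cancels:
assert abs(measured_exhaled / measured_inspired - true_exhaled / true_inspired) < 1e-12

# But a simulator's absolute "exhaled" reading shifts with every new tank:
print(measured_exhaled)   # not equal to true_exhaled
```

    The same cancellation is why simulator trends need to be interpreted with the tank change dates in hand.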

    QC results need to be recorded and trended. You can use a spreadsheet to do this or a program like Easylab QC (I haven’t used it so I can’t recommend it one way or the other). A normal range needs to be established and results reviewed regularly. When QC results start to trend out of normal limits this is often a sign of test system problems that may not appear when calibrating.
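    If you go the spreadsheet route, the arithmetic behind establishing a normal range and flagging out-of-limit results is simple. A sketch of what that looks like (the baseline DLCO values here are made up for illustration):

```python
from statistics import mean, stdev

def control_limits(baseline, n_sd=2.0):
    """Mean +/- n_sd standard deviations from an established QC baseline."""
    m, s = mean(baseline), stdev(baseline)
    return m - n_sd * s, m + n_sd * s

def out_of_range(baseline, new_results, n_sd=2.0):
    """Flag new QC results that fall outside the baseline control limits."""
    lo, hi = control_limits(baseline, n_sd)
    return [v for v in new_results if not lo <= v <= hi]

baseline = [25.1, 24.8, 25.4, 25.0, 24.9, 25.2, 25.3, 24.7]  # hypothetical DLCO QC values
print(out_of_range(baseline, [25.0, 26.9, 24.1]))  # → [26.9, 24.1]
```

    The ±2 SD limits are the usual starting point; trending several consecutive results drifting toward one limit is just as informative as a single excursion.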

    It may be tough to sell your hospital administration on the cost of a DLCO simulator. One approach may be to point out that the hospital’s chemistry and hematology labs perform QC all the time and you need to be able to as well. If your lab is involved with any research then QC is going to be mandatory and you will need to be able to provide trended QC results to show your lab is producing accurate results.

    I am concerned that QC, whether biological or from a DLCO simulator, is not given the importance it deserves. It’s a critical part of running a quality PFT Lab and it should not be overlooked or ignored. Saying you don’t have time just means you need to build it into your schedule. Regular calibrations are a beginning step (and if you’re not doing regular calibrations then shame on you), but calibrations only test individual components. QC is a test of the entire system and the only way to assure that the results you report are “real” and accurate.

    Creative Commons License
    PFT Blog by Richard Johnston is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

  • Chronotropic Index and O2 Pulse

    One of the things that I enjoy most about reviewing cardiopulmonary exercise tests is that I always have to reach back into the basics of physiology in order to get a sense of what the results are trying to say. Oftentimes it isn’t so much the absolute or percent predicted value of a given parameter but its relationship with other parameters that is revealing. One of the bits of human physiology that has always struck me as fascinating is the relationship between heart rate and oxygen consumption.

    For almost everyone there is a linear relationship between heart rate and oxygen consumption. When you plot them against each other you can put a ruler on the plotted points and see that they form a straight line. Chronotropic Index and O2 pulse are two ways of analyzing this relationship.

    The Chronotropic Index is a measurement of the slope of the relationship between heart rate and oxygen consumption and is calculated from: CI Equation 

    A Chronotropic Index that is greater than 1.0 indicates the heart rate is increasing faster than the corresponding oxygen consumption and one less than 1.0 indicates a heart rate that is increasing slower than the corresponding oxygen consumption. I consider 0.8 to 1.3 to be the normal range.
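    Since the equation itself doesn’t reproduce well here, this sketch shows one common formulation of the Chronotropic Index (the fraction of heart-rate reserve used divided by the fraction of metabolic reserve used); the test values are hypothetical:

```python
def chronotropic_index(hr_rest, hr_peak, hr_max_pred,
                       vo2_rest, vo2_peak, vo2_max_pred):
    """Fraction of heart-rate reserve used divided by fraction of
    metabolic (VO2) reserve used; ~1.0 when HR and VO2 rise in step."""
    hr_fraction = (hr_peak - hr_rest) / (hr_max_pred - hr_rest)
    vo2_fraction = (vo2_peak - vo2_rest) / (vo2_max_pred - vo2_rest)
    return hr_fraction / vo2_fraction

# Hypothetical test: HR 70 -> 160 (predicted max 170),
# VO2 0.3 -> 2.0 L/min (predicted max 2.2 L/min)
ci = chronotropic_index(70, 160, 170, 0.3, 2.0, 2.2)
print(round(ci, 2))  # falls inside the 0.8-1.3 normal range
```

    Because both numerator and denominator are normalized to predicted reserves, the index is comparable across patients of different ages and fitness levels.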

    Chronotropic Index Graph

    The Fick equation shows there is a relationship between oxygen consumption and cardiac output.

     Fick Equation

    The fascinating thing to me is that despite the fact that heart rate, stroke volume, oxygen consumption and mixed venous O2 content do not change linearly during increasing exercise, heart rate and VO2 almost always maintain a linear relationship as long (and this is important) as arterial oxygen saturation (SaO2) remains normal.

    O2 Pulse is simply oxygen consumption (in ml) divided by heart rate. It is usually best to look at O2 pulse in terms of percent predicted and it is used as an index of stroke volume. If you assume that the arterial and mixed venous O2 content difference remains reasonably stable, then you can re-state the Fick equation to say that

    Cardiac Output ~ Oxygen Consumption

    and since stroke volume is cardiac output divided by heart rate, then

    Stroke Volume ~ O2 Pulse

    You can’t use this to say if the O2 pulse is X then the stroke volume is Y basically because you don’t really know what the arterial-venous O2 content difference is, but you can use it as an index. Like the Chronotropic Index, O2 Pulse is useful only as long as the SaO2 remains normal.
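    The rearrangement is easy to see in a few lines of code. The Fick equation (VO2 = HR × SV × arterial-venous O2 content difference) is standard physiology; the peak-exercise numbers and the assumed a-v O2 content difference below are purely illustrative:

```python
def o2_pulse(vo2_ml_min, heart_rate):
    """Oxygen consumption per heartbeat (ml O2/beat)."""
    return vo2_ml_min / heart_rate

def stroke_volume_estimate(vo2_ml_min, heart_rate, avo2_diff):
    """Rearranged Fick: VO2 = HR x SV x C(a-v)O2, so SV = O2 pulse / C(a-v)O2.
    avo2_diff is the arterial-venous O2 content difference, ml O2 per ml blood."""
    return o2_pulse(vo2_ml_min, heart_rate) / avo2_diff

# Hypothetical peak exercise: VO2 2000 ml/min at HR 160, assumed C(a-v)O2 of 0.13
print(o2_pulse(2000, 160))                             # 12.5 ml/beat
print(round(stroke_volume_estimate(2000, 160, 0.13)))  # ~96 ml
```

    The second function is exactly why O2 pulse works as an index and not an absolute: the answer scales directly with whatever C(a-v)O2 you assume.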

    Because O2 pulse is an index of stroke volume its course during an exercise test looks something like this:

     O2 Pulse Time Course 

    It might seem that the requirement that the SaO2 needs to remain normal would limit the usefulness of the Chronotropic Index and O2 pulse, but the fact is that a decreased SaO2 is by itself a clear indicator of a pulmonary (either mechanical or vascular) exercise limitation and excludes a cardiovascular limitation (at least as a primary factor). The patients that we see most often for cardiopulmonary exercise tests are ones whose routine pulmonary function or cardiac tests do not explain their shortness of breath. Patients with known pulmonary disease usually don’t need to have an exercise test (or if they do it’s to find out the precise nature of their limitation and then other values come into play anyway), and for this reason relatively few patients we test show a decreased SaO2 during testing.

    It might also seem that the Chronotropic Index and the O2 pulse say the same thing, but the difference is that the O2 pulse is the relationship between heart rate and oxygen consumption at a given moment whereas the Chronotropic Index is the slope of that relationship. The results of a recent patient’s test highlight the difference.

    The patient was tachycardic at rest, with a baseline heart rate of about 120. Testing ended when the patient was SOB and said they couldn’t go any further, which also happened to coincide with their maximum predicted heart rate.

    Low VO2 Normal CI

    The patient had a low maximum oxygen consumption and a low maximum O2 Pulse and therefore likely had a reduced stroke volume at peak exercise, but despite this they also had a normal Chronotropic Index which meant that despite the resting tachycardia their heart rate advanced normally with exercise.

    In general though, as long as a patient’s resting heart rate is more or less normal, the Chronotropic Index will also say something about their stroke volume. Very fit individuals have a large stroke volume and for this reason their Chronotropic Index tends to be low. Although I use 0.8 as a lower limit of normal, it is not totally unusual for an exceptionally fit individual (“I’m getting short of breath after running 20 miles, something must be wrong!”) to have a Chronotropic Index as low as 0.65 or 0.70. What differentiates these individuals from those with chronotropic incompetence (gotta love those beta blockers!) is that their maximum oxygen consumption is also usually well above normal. In fact, for an individual that is exceptionally fit, a Chronotropic Index near 1.0 is probably abnormal.

    When an individual is out of shape or deconditioned, stroke volume is usually reduced and their Chronotropic Index will be elevated and their maximum O2 Pulse will be reduced. Below a certain point, a reduced stroke volume is more likely due to an underlying cardiac dysfunction and not just deconditioning and that is where the upper limit of normal of 1.3 comes into play. When an individual has a stiff heart, a filling problem or a low ejection fraction then their stroke volume will be reduced outside the limits of normal and this will usually show up as a markedly elevated Chronotropic Index. You can suspect this when the Chronotropic index is above 1.3 but it is probably a given for 1.5 and above. 

    After having made the point that the relationship between heart rate and oxygen consumption is linear (as long as SaO2 remains normal!), it is when the relationship is not linear that it is particularly revealing. Every year we get a certain number of patients that have passed a standard cardiac ECG stress test with flying colors but are still complaining of DOE. When they have a cardiopulmonary stress test, their Chronotropic Index and O2 pulse often look like this:

    Kinky CI 

    In almost every case these patients have had a problem with their heart valves. Up to a certain heart rate their heart valves were opening and closing properly but above that heart rate they weren’t. Once that occurred either their ventricles weren’t filling properly or there was regurgitation and this is called a rate-related decrease in stroke volume. Because there is often no ECG sign of this they were able to pass a standard cardiac stress test but this syndrome stands out like a beacon when you can compare oxygen consumption to heart rate.

    Chronotropic index and O2 pulse can be windows into stroke volume and cardiac output and for this reason I would strongly recommend that they are calculated and that graphs of heart rate versus VO2 and O2 pulse versus time be included when a cardiopulmonary stress test’s results are evaluated.


  • DLCO Dilemma

    For the last several months I’ve noticed what appears to be a greater than normal number of patient test results where the VA from the DLCO test was greater than the TLC. This is not impossible of course, but it usually tends to be on the rare side, and when I’ve seen it in the past and inspected the results closely there were usually either problems with the lung volume test or the difference was only a few percent and within the error bar for both tests. We’ve been seeing VAs that were larger than TLC more frequently lately but when I look at the results closely most of the time I have been unable to see anything wrong with either the lung volume or the DLCO test. At the same time, we have a number of patients that are frequent fliers and have seen what looks to be bigger differences in DLCO from visit to visit than usual, as well as a number of patients that have had larger DLCO results than we would have expected.

    The problem is that these apparent problems are really just suspicions with very little real evidence. I’ve been paying very close attention to lung volumes since our hardware and software upgrade last summer so my paranoia level is on the high side and I may well be overreacting. Late last week however, I found myself on the horns of a dilemma. The test results for a patient with a helium dilution TLC that was 68% of predicted but at the same time with a VA that was 93% of predicted and a DLCO that was 129% of predicted came across my desk.

    I inspected the DLCO and lung volume results with a fine-toothed comb and could find nothing wrong with the selected test results. Just to make this more difficult however, the patient had a lot of difficulty performing the pulmonary function tests, and although each of the reported spirometry, lung volume and DLCO results appeared to have adequate test quality, they were not in any way reproducible.

    Usually when there is a problem with helium dilution lung volume tests, FRC tends to be overestimated. About the only time it isn’t is if the test is terminated too soon, and in this case there was an essentially flat helium tracing for at least the last minute of testing so it was not. With DLCO tests the most common problem, and the one that can affect VA and DLCO the most, is an inadequate inspired volume, but in this case the FVC from spirometry, SVC from lung volumes and the inspired volume from the DLCO were almost identical. In the end I had to equivocate and point to the lack of reproducibility and say that the contradictory results may be due to suboptimal test quality and that the TLC may be underestimated and the DLCO may be overestimated.

    In this particular case I am more inclined to believe the lung volume results than the DLCO results in part because the DLCO of 129% of predicted didn’t fit the patient’s clinical picture all that well. This got me thinking about what would affect both the VA and the DLCO at the same time. VA and DLCO are intimately interconnected and for a given rate of CO uptake, DLCO scales with VA. VA is calculated from the inspired volume and the change in the insoluble component of the DLCO test gas, in this case methane.

    VA Formula

    But this calculation depends on the accuracy of the gas analyzer, or more specifically on the analyzer’s zero being accurate and its output being linear. The analyzer’s specifications claim that it is linear to 1% over the full scale of the analyzer and I’ll have to take that at face value, particularly since I have no way to verify it. Linearity is not a given, by the way. Back in the 1970’s the output of the MSA CO analyzer was a curve and I had to use a graph of the analyzer’s calibration curve to convert meter readings to actual percent CO when calculating DLCO test results.
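    The tracer-dilution arithmetic itself is straightforward. This sketch shows the basic form of the VA calculation (inspired tracer diluted into the whole alveolar volume); the dead space handling is simplified and all the numbers are hypothetical:

```python
def alveolar_volume(vi_liters, fi_ch4, fa_ch4, dead_space_liters=0.0):
    """Tracer dilution: the inspired CH4 is diluted throughout the alveolar
    volume, so VA = (VI - VD) * FI_CH4 / FA_CH4."""
    return (vi_liters - dead_space_liters) * (fi_ch4 / fa_ch4)

# Hypothetical test: 4.0 L inspired, CH4 diluted from 0.30% to 0.24%,
# 0.2 L of combined anatomic and machine dead space
print(round(alveolar_volume(4.0, 0.0030, 0.0024, 0.2), 2))  # 4.75 L
```

    Note that VA depends on the *ratio* of inspired to exhaled tracer, which is why anything that distorts the measured exhaled CH4 concentration distorts VA directly.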

    The zero offset, on the other hand, is checked and logged with every calibration and we calibrate our analyzers daily. Interestingly, when I reviewed the CO and CH4 zero offsets I found that although most systems had reasonably stable zero offsets there was one system that had a highly variable CO zero offset and one system that had a highly variable CH4 offset. Despite the day to day variability in these offsets, no alarms had been triggered because the zero offsets were all within the manufacturer’s normal range.

    In one sense a variable zero offset is not necessarily a cause for alarm as long as the calibrated zero offset remains stable during the testing session. If the zero offset shifts however, and this shift is not accounted for, this can cause the concentration of the exhaled CO or CH4 to be either underestimated or overestimated.
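    To put a rough number on how much a stale zero could matter, here is a sketch using a simple linear analyzer model. The voltages, gain and drift values are entirely my own assumptions, chosen only to show the shape of the error:

```python
def fraction_from_voltage(v_signal, v_zero, gain):
    """Linear analyzer model: fractional concentration = gain * (signal - zero)."""
    return gain * (v_signal - v_zero)

gain = 0.0030 / 1.000   # 1.000 V of span corresponds to 0.30% CH4 (hypothetical)
v_exhaled = 0.810       # raw analyzer voltage during the alveolar sample

true_f = fraction_from_voltage(v_exhaled, 0.030, gain)   # zero really drifted to 0.030 V
stale_f = fraction_from_voltage(v_exhaled, 0.010, gain)  # software reuses the old 0.010 V zero

print(f"relative error: {100 * (stale_f - true_f) / true_f:.1f}%")  # → relative error: 2.6%
```

    A 20 mV drift that looks trivial at the top of the scale becomes a several-percent error in the much smaller exhaled fraction, which then propagates into both VA and DLCO.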

    Exhaled CH4 and VA

    This is where it begins to get interesting. By looking carefully at our test result database I was able to see that the analyzer’s zero offset and gain are not stored as part of the DLCO test record. What is stored is the fractional concentrations of exhaled CO and CH4 and these values are calculated from the analyzer’s output voltage using the zero offset and gain.

    So where is the zero offset and gain for CO and CH4 stored? In the calibration records, of course.

    Isn’t the analyzer calibrated before every DLCO test? Yes it is, and a new calibration record is created every time too, but this is also where it gets very confusing because when I looked at these new calibration records every one of them is an exact copy of the initial daily calibration record.

    So what does this mean? It could mean that a new zero offset and gain are created during the pre-test calibration and they are used to calculate the fractional exhaled concentrations of CO and CH4, but that these new values are not being written to the database and are instead discarded after being used. This doesn’t explain why the initial calibration record is copied, however.

    It could mean that despite going through a pre-test calibration that the new calibration results are discarded before being used and the zero offset and gain from the initial calibration are used instead and then re-written to the database.

    I have no idea which of these is correct. This may very well be a software error that only affects record-keeping. On the other hand, if the CO and CH4 zero offsets and gains are not being updated then this opens the door to DLCO calculations that are inaccurate because the exhaled CO and CH4 concentrations are being calculated using information that is out of date.

    This information is, of course, not in the equipment manual. The manual doesn’t even mention the pre-test calibrations or anything whatsoever about how exhaled CO and CH4 concentrations are derived. I have passed a set of questions regarding this problem to our designated technical representative at the equipment’s manufacturer and will be interested in seeing what they have to say about this (and in seeing how long it takes to get an answer).

    There is at least one other major question this investigation has brought up and that is why is the zero offset so highly variable on some analyzers and rock steady on others? Why is it that only one channel seems to be affected? The equipment manufacturer addresses zero offsets and gains that are in or out of range (although that raises the question of where these normal ranges come from) but not the degree of variability. Is this variability a function of the measurement process or the manufacturing process or the analog electronic components, or is it an early sign of component failure? Is there anybody involved in the development or manufacture of the analyzer who even knows, or is this just an accepted quirk?

    All of our pulmonary function test equipment has become a black box. This is in large part a consequence of computers and computer software taking over the mechanical functions and calculations formerly performed by humans. We no longer know or have access to the information about how measurements are being made. Functions are buried in proprietary software and hardware, and we are asked to take the manufacturer’s word that the results are accurate. Please notice that I am not saying they’re not accurate since I know that manufacturers often go to great lengths to ensure accuracy, just that we no longer have the ability to assess this for ourselves.

    It is this inability to assess our equipment’s accuracy that bothers me the most. I see many research studies whose results come from pulmonary function or exercise test equipment. Significant physiological conclusions are drawn from these results, but the researchers are accepting as a given that the results are accurate without being able to verify this.

    There are DLCO and exercise simulators that can be used for quality control but they are also expensive and it is difficult to convince hospital administration of the need for them. I know that I put an exercise simulator in my capital budget for several years but was turned down every time. Heck, I was turned down for capital budget money to replace aging equipment that hadn’t been supported by the manufacturer for years and was unrepairable so what chance did a simulator have? Even so I am not sure that a simulator would have made this particular problem any clearer. I know that my inclination would be to use a simulator immediately after performing a calibration and in this instance the more time that had elapsed since the last calibration the more likely the problem would be to have shown up. This is instead a situation where manufacturers need to be able to provide explicit information about how their equipment operates and how calculations are made but since this is usually considered to be proprietary information we’re not likely to get it anytime soon.


  • Equipment is not detecting glottal closure and cough

    In my last blog on personal spirometers I mentioned that one of the university projects has developed software that assesses spirometry results against ATS-ERS standards, including cough, glottal closure and early termination of exhalation. The specific project that claimed this also claimed to have attained a 70% detection rate using waveforms from the NHANES III study. This should be taken as a challenge by the pulmonary function equipment manufacturers to improve their existing software.

    The test systems I am familiar with are pretty much limited to being able to determine back-extrapolation and whether end-expiratory flow rates meet ATS-ERS end-of-test criteria. So far they have not attempted to assess spirometry efforts for the cough or glottal closure that are also part of the ATS-ERS criteria for test quality. Here is a good example of glottal closure:

    Early pause affecting FEV1

    The FEV1 for this effort (unfortunately also the best the patient was able to perform) is underestimated due to the short, sub-1-second pause. Taken at face value, the FEV1 and FEV1/FVC ratio would have indicated the patient had mild airway obstruction. Reading between the lines, if there hadn’t been a pause the results might have been within normal limits.

    Even though the manual for our test systems states that its “software complies with the recommendations set forth within ATS/ERS 2005 Standardisation of Lung Function Testing guidelines” our test system passed this spirometry effort with flying colors. I am sure that someone can give me a lawyer’s interpretation of the word “complies” that shows the software is somehow actually in compliance but to me it says that our equipment manufacturer is being selective about which ATS-ERS criteria it actually meets. In some ways this is not a fair statement to make because there are a number of ATS-ERS criteria for different pulmonary function tests that can only be met by actually observing the patient during testing, but that also means that a blanket statement like that shouldn’t be in the manual in the first place.

    I will admit to having a certain level of frustration in dealing with equipment manufacturers. My PFT lab is both busy and relatively sophisticated and we run into oddball problems not all that infrequently. There are always going to be computer glitches and patients that find new ways to do tests incorrectly, but trying to determine what is causing a problem can be difficult in large part because the documentation that comes with our different test systems is inadequate. Getting answers from the manufacturers isn’t particularly easy either. It’s one thing when you have a broken test system and can call tech support, it’s another when you have a system doing something odd intermittently and you don’t know if it is a feature or a bug. We have passed what we thought were reasonably critical problems to a manufacturer and then had to wait a couple of weeks to get an answer back. Sometimes we’re told that yes, there is a bug and it may be fixed in the next software release. Sometimes we’re told that we’re misunderstanding the problem and when they explain it, it makes sense, but then why wasn’t it in the manual in the first place?

    Anyway, I am not sure why detecting glottal closure and coughs during exhalation isn’t being attempted. I realize that a software algorithm for detecting glottal closure and coughs can’t be perfect and that is because even experienced human observers can occasionally miss them as well. My programming skills are years out of date but even so, detecting the pause seen in the spirometry example should be a relatively trivial problem. And if a small bunch of undergraduate college students can do it (on a smartphone no less), why isn’t it being done by equipment manufacturers?
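    To make the point that detecting a pause like the one in the example really is a tractable problem, here is a crude sketch of one possible approach: scan the first second of the exhaled flow signal for a sustained run of near-zero flow. The thresholds and the synthetic flow signal are my own assumptions, not any manufacturer’s algorithm:

```python
def find_early_pause(flow, dt, flow_threshold=0.05, min_pause_s=0.1, window_s=1.0):
    """Scan the first window_s seconds of an exhaled flow signal (L/s, sampled
    every dt seconds) for a sustained run of near-zero flow - a crude
    glottal-closure/pause detector. Returns the pause start time, or None."""
    run = 0
    for i, f in enumerate(flow[:int(window_s / dt)]):
        if abs(f) < flow_threshold:
            run += 1
            if run * dt >= min_pause_s:
                return (i - run + 1) * dt
        else:
            run = 0
    return None

# Hypothetical 100 Hz flow signal: 0.2 s of flow, a 0.15 s pause, then flow resumes
flow = [6.0] * 20 + [0.0] * 15 + [4.0] * 60
print(find_early_pause(flow, 0.01))  # pause detected at 0.2 s
```

    A production algorithm would obviously need tuning against real waveforms (and a cough detector would look for the opposite signature, a sharp flow transient), but the core logic is no more complicated than the back-extrapolation checks manufacturers already perform.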

    My concern about this is that I have seen a trend towards less and less well-trained staff performing spirometry. I know of several local clinics that have medical assistants performing spirometry that have been, at best, trained by another medical assistant. Often more time is spent teaching them how to correctly enter the patient’s demographics into the computer than in actually performing the test.

    Many physicians are also not as aware of testing errors as they should be and only look at the numbers on the reports. We occasionally have teaching sessions with hospital residents and fellows about pulmonary function testing. These sessions are not about interpretation but how to evaluate test results for inconsistencies and errors. We have used spirometry efforts like this as an example of misleading results and so far this particular error is only rarely noticed. Having said that, presently this is an error that can only be seen on a graph and only if it is understood what the graph is saying, but since the graphics that come with spirometry reports are often quite small (gotta cram everything on one page, after all) an error like this may be too subtle to be easily noticed in the first place.

    I have mixed feelings about the increasing reliance on computers. I started in this field when all the equipment was manual and all the records were paper and I will be the first to admit that a computer can’t be beat for calculations and data management. On the other hand, I see that people often let the computer do their critical thinking and “if the computer said it, it must be so” is all too common an attitude. Even more unfortunately, the thought “if a computer doesn’t say it, it must not be so” is a common corollary to this as well. The reality, like it or not, is that we will continue to become more dependent on computers, not less, and this puts a burden of responsibility onto the equipment manufacturers: when they say they meet ATS-ERS standards, they need to actually do so without having to lawyer-up.


  • Personal spirometers

    Peak flow meters, both mechanical and electronic, have been available to asthma patients for years. A number of inexpensive spirometers that meet ATS-ERS standards are now available for as little as $500. Although these spirometers are primarily intended for a doctor’s office and not for self-monitoring by asthmatics, a number of even less expensive spirometers intended for personal use have recently appeared on the market.

    Additionally, in the last couple of years there have been at least four different university engineering projects to develop a low-cost spirometer, with the goal of costing substantially less than $100; these are intended for self-monitoring or for use in third-world countries. Although these could be considered to be demonstration projects, several have the potential to become viable products.

    Durability, ease of cleaning and accuracy are the primary goals a low-cost spirometer must meet. Three of the four spirometer projects have developed simple pneumotachometers based on differential pressure across a tube narrowing or screen; these are not only inexpensive but relatively easy to clean. The other project developed an oscillating fluidic sensor that uses a microphone pickup.

    Although the projects have reported that their devices meet ATS-ERS standards for accuracy, I will take this with a grain of salt. In each case accuracy was measured against a calibrated flow rate or a 3-liter calibration syringe. None of the projects have actually compared spirometry results using their device on one or more individuals against spirometry results from standard PFT Lab equipment.

    It is also not clear to me whether any consideration is being given to BTPS correction of the measured results. The published technical details on these projects are limited, but I have seen no mention of ambient temperature measurement. Since there are significant differences in air temperature, humidity and viscosity between inhaled and exhaled air, this raises further questions about accuracy claims. Of further concern, each project seems to be relying on the characteristics of the device itself to maintain accuracy (but since a calibration syringe costs several hundred dollars this appears to be deliberate and in keeping with the design goal of low cost).
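    The BTPS correction these devices would need isn’t exotic. A sketch of the standard conversion factor, using a Magnus-type approximation for saturated water vapor pressure (the approximation and the sea-level barometric pressure default are my choices for illustration):

```python
import math

def svp_mmhg(temp_c):
    """Saturated water vapor pressure (Magnus approximation), in mmHg."""
    return 0.750062 * 6.1094 * math.exp(17.625 * temp_c / (temp_c + 243.04))

def btps_factor(ambient_temp_c, pb_mmhg=760.0):
    """Factor converting a volume measured at ambient temperature, saturated
    (ATPS), to body temperature (37 C), ambient pressure, saturated (BTPS)."""
    return ((273.15 + 37.0) / (273.15 + ambient_temp_c)) * \
           ((pb_mmhg - svp_mmhg(ambient_temp_c)) / (pb_mmhg - svp_mmhg(37.0)))

print(round(btps_factor(22.0), 3))  # ~1.09 at 22 C and 760 mmHg
```

    A roughly 9% correction at typical room temperature is not something that can be ignored if a device claims ATS-ERS accuracy, which is why the absence of any temperature measurement is telling.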

    One very important aspect of these university projects is the use of smartphones. This keeps cost down because signal processing and analysis is done by software on the smartphone and results then have the potential to be sent automatically either to a doctor’s office or to a central database. Smartphones have a tremendous ability to act as a unifying factor in the process of collecting and transmitting personal health information. 

    Although I have doubts about using any of these devices in a Pulmonary Function lab, are they accurate enough for self-monitoring by asthmatics and other pulmonary patients? Maybe they are. Portable peak flow meters have been shown to be inaccurate for decades but that doesn’t seem to have prevented their widespread use. For peak flow meters, physicians have stressed that patients need to utilize a peak flow diary and learn for themselves where their danger levels are.

    Strictly speaking, precision, the ability to repeatedly produce the same results, is more important than accuracy. Since changes in FEV1 over time are critical when monitoring asthma what’s going to matter with a low cost personal spirometer is whether or not it can produce results that are reproducibly stable over long periods of time. This, unfortunately, is exactly what low-cost personal spirometers have not yet demonstrated. This is not to say they can’t, just that it hasn’t been shown.
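    One simple way a personal spirometer’s long-term precision could be expressed is the coefficient of variation of repeated measurements. A sketch, with hypothetical weekly FEV1 self-checks:

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """CV (%) - one simple way to express a device's long-term precision."""
    return 100.0 * stdev(values) / mean(values)

fev1_liters = [2.10, 2.08, 2.13, 2.07, 2.11]  # hypothetical weekly FEV1 self-checks
print(round(coefficient_of_variation(fev1_liters), 1))  # ~1.1%, i.e. quite stable
```

    A device whose CV on a stable subject stayed in the low single digits over months would be doing its monitoring job, whatever its absolute accuracy.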

    Personal spirometers also haven't been around long enough to demonstrate durability. The sensing element itself may be durable and easy to clean but the electronic components may not be. Still, when produced in sufficient quantity they may be inexpensive enough to be considered semi-disposable.

    Good quality spirometry requires more than just an accurate spirometer. The fun part and the hard part of being a pulmonary function technician is getting patients to perform tests correctly. More than one of the university projects recognized that patients need coaching and developed software that leads a patient through testing. They have also developed software that recognizes cough, glottal closure and early termination of exhalation.

    A variety of medical instruments and applications have been developed for smartphones. Many of these are intended for third-world nations where cell phones are common but medical care and expertise are not. Medical costs in the United States continue to skyrocket, however, and personal medical monitoring devices that work with smartphones may be a way to control costs by reducing office visits and hospitalizations. Many clinics already have contracts with HMOs and other insurers that pay them to keep patients out of the hospital. For this reason alone I would suggest that personal spirometers will need to be embraced, not avoided.

    Although they may presently make sense for motivated patients who want to be more involved in their own care, for the time being I would have to say that personal spirometers are not quite ready for prime time. Broader adoption will have to wait until the manufacturers of personal spirometers, oximeters, blood pressure monitors, glucose monitors, thermometers and other personal health care monitoring devices (yes, bathroom scales should be included too) adopt a common smartphone (bluetooth?) interface. As importantly, doctors' offices, clinics and hospitals will need software to manage this personal health information.

    In the long run the routine use of personal spirometers will probably reduce the amount of spirometry performed in hospital clinics and doctors' offices. This is not to say that spirometry will no longer be performed during office and clinic visits; it will still likely be necessary in order to verify a patient's personal spirometer results. But there will likely be fewer patient visits and more remote management of patient care.

    Pulmonary labs should involve themselves in teaching and validating personal spirometer use by their patients. Pulmonary function labs that depend on clinic spirometry for a significant part of their workload should consider placing more reliance on tests that cannot be performed remotely (lung volumes, diffusing capacity, HAST, CPET), although this will require the cooperation of ordering physicians.

    University Projects:

    Low-cost Spirometer

    Winner of the NIBIB 2012 Undergraduate Biomedical Engineering competition in the Technology to Aid Underserved Populations and Individuals with Disabilities category.

    Abigail Cohen, Andrew Brimer, Olga Neyman, Braden Eliason, Charles Wu.

    Washington University in St. Louis

    Fluidic flow sensor

    Telespiro

    First prize from the Institute of Electrical and Electronics Engineers (IEEE) 2013 Conference on Point of Care Healthcare Technologies in Bangalore, India.

    Will Carspecken

    Oxford University/Harvard Medical School

    Android smartphone 

    Mobilespiro

    Top Demo Prize at the First International Workshop on Mobile Systems, Applications and Services for Healthcare.

    Third place in the Microsoft Imagine Cup 2011 World Finals

    Part of the Scalable Health Initiative at Rice University

    Siddhartha Gupta, Peter Chang, Nonso Anyigbo, Ashutosh Sabharwal

    Rice University

    Pneumotachometer

    Android smartphone via bluetooth, remote database

    Low-cost Spirometer

    David Van Sickle, Jeremy Glynn, Jeremy Schaefer, Andrew Bremer, Andrew Dias

    University of Wisconsin at Madison

    Fleisch or venturi Pneumotachograph

    Project since discontinued, looking for students to continue. 

    Creative Commons License
    PFT Blog by Richard Johnston is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

  • Is it time to scuttle the FEF25%-75%?

    When we went through our hardware and software upgrade last August, one of the changes we made was to stop reporting the FEF25%-75% (AKA MMEF, MMFR, MMF). The pulmonary physicians had long since stopped using this value when assessing spirometry results and we had kept it on our reports as long as we did only for inter-laboratory compatibility. Along with the other changes we made at that time, we decided it was time to drop the FEF25%-75% from our reports.

    FEF25%-75% has been used to assess “small airways disease” but more than one of our pulmonary physicians has said that they don’t believe there is such a thing. I’m not a clinician but I’ve always felt that tests and results need to be clinically useful in order to be performed or reported and more than one study has shown little correlation between anatomical findings and FEF25%-75%.

    Regardless of whether or not small airways disease is an actual entity my first objection to the FEF25%-75% has to do with the concept that it measures flow in small airways when for most patients it lies within their FEV1. For this reason it has never been clear to me what the FEF25%-75% is measuring that the FEV1 isn’t. More importantly, I have significant concerns about the limitations involved in measuring the FEF25%-75% in the first place.

    FEF25%-75% is measured by identifying the points at which 25% and 75% of the Forced Vital Capacity has been exhaled and then calculating the change in volume divided by the change in time:

    FEF25-75_graph1 Using the FVC as the primary reference means that the measured FEF25%-75% is highly dependent on the FVC volume. Getting a truly maximal FVC from patients with lung disease requires a lot of effort and cooperation from the patient. An effort where the FVC is underestimated will cause the FEF25%-75% to be disproportionately overestimated; small changes in FVC can cause large changes in FEF25%-75%.
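    This dependence on the FVC is easy to demonstrate numerically. The following is a rough sketch using a made-up exponential volume-time curve (not patient data): truncating the same exhalation early, which underestimates the FVC, shifts the 25% and 75% points earlier and raises the calculated FEF25%-75%.

    ```python
    from math import exp

    def interp_time(target_vol, times, vols):
        """Linear interpolation: time at which cumulative exhaled volume reaches target."""
        for i in range(1, len(vols)):
            if vols[i] >= target_vol:
                frac = (target_vol - vols[i - 1]) / (vols[i] - vols[i - 1])
                return times[i - 1] + frac * (times[i] - times[i - 1])
        return times[-1]

    def fef25_75(times, vols):
        """Average flow (L/s) between the 25% and 75% points of the FVC,
        where the FVC is taken as the final exhaled volume of the effort."""
        fvc = vols[-1]
        t25 = interp_time(0.25 * fvc, times, vols)
        t75 = interp_time(0.75 * fvc, times, vols)
        return (0.75 * fvc - 0.25 * fvc) / (t75 - t25)

    # Hypothetical exhalation: volume approaches a 4.0 L FVC exponentially
    t = [i * 0.01 for i in range(801)]                 # 0 to 8 seconds
    v = [4.0 * (1 - exp(-x / 1.2)) for x in t]

    full = fef25_75(t, v)                # effort carried nearly to the true FVC
    short = fef25_75(t[:301], v[:301])   # same curve cut off at 3 s (smaller FVC)
    print(round(full, 2), round(short, 2))  # the truncated FVC inflates FEF25-75
    ```

    With these made-up numbers the truncated effort reports an FEF25%-75% roughly 10% higher than the complete one, even though the flows themselves never changed.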

    FEF25-75_graph2

    This was also noted when the FEF25%-75% from pre- and post-bronchodilator spirometry efforts were compared. Numerous investigators saw that FVC and FEV1 could increase significantly post-bronchodilator but the FEF25%-75% often did not. The fact is that the FEF25%-75% from the pre- and post-bronchodilator efforts was being measured across a different set of lung volumes whenever the FVC increased post-bronchodilator. The solution has been to measure the post-bronchodilator FEF25%-75% at exactly the same volume points as the pre-bronchodilator effort. This is called volume adjustment and to some extent it makes sense, but at the same time it calls into question exactly what the FEF25%-75% is measuring.

    FEF25-75_graph3 To be honest, I think that adjusting the FEF25%-75% volume smacks of tweaking the results to meet the expectations. I will agree that there is a general correlation between flow rates and airway size during a forced exhalation but strictly speaking this is what the different flow-volume loop contours are all about. The primary problem with applying this concept to the FEF25%-75% is that the FEF25%-75% is an average flow rate that says nothing about the actual flow rates between the two values used to measure it.

    FEF25-75_graph4

    Since the FEF25%-75% lies within the FEV1 it is not surprising that it correlates well with airway obstruction. The correlation between FEV1/FVC ratio and FEF25%-75% is actually too good because at least one study showed that FEF25%-75% is always normal when the FEV1/FVC ratio is normal. This brings into question what additional information the FEF25%-75% adds, if any, towards assessing spirometry results.

    As an alternative to the FEF25%-75%, some investigators have suggested that the FEV3 and the FEV3/FVC ratio provide a much better window onto the small airways. I did a pilot study on a few hundred patients using the FEV3/FVC ratio with ambiguous results. I found that, like the FEF25%-75%, the FEV3/FVC ratio was only rarely abnormal when the FEV1/FVC ratio was normal. It may be that with a larger sample of patients the FEV3/FVC can serve a purpose, but so far none of our pulmonary physicians have shown an interest in it so its value remains speculative to me.

    Our lab software is able to report over two dozen different values from a single forced vital capacity. Most of these values are not clinically useful. The FEF25%-75% has high inter-test and intra-test variability and is unduly affected by FVC. Given the limitations in how and what it measures it is not clear to me that the FEF25%-75% has much to do with identifying the site of airway obstruction. It is also unlikely that it provides any information not already provided by the FEV1 and FEV1/FVC ratio. I think it is time that everyone should think about dropping the FEF25%-75% from their reports too.

    References:

    Berend N, Wright JL, Thurlbeck WM, Marlin GE, Woolcock AJ. Small airways disease: Reproducibility of measurements and correlation with lung function. Chest 1981; 79: 263-268

    Cockcroft DW, Berscheid BA. Volume adjustment of maximal midexpiratory flow. Importance of changes in Total Lung Capacity. Chest 1980; 78: 595-600

    Gelb AF, Williams AJ, Zamel N. Spirometry. FEV1 vs FEF25-75 percent. Chest 1983; 84: 473-474

    Hansen JE, Sun XG, Wasserman K. Discriminating values and normal values for expiratory obstruction. Chest 2009; 136: 369-377

    Sherter CB, Connolly JJ, Schilder DP. The significance of volume-adjusting the maximal midexpiratory flow in assessing the response to a bronchodilator drug. Chest 1978; 73: 568-571


  • Problem selecting an FEV1 highlights a training failure

    A report of a patient's complete set of PFTs came across my desk yesterday. There were a lot of inconsistencies in the report and I ended up looking at the raw data for every single test. When I looked at the spirometry results I was surprised to see which effort the technician had selected to report the FEV1.

    The patient’s spirometry efforts were highly variable and not terribly reproducible. The FEV1 that had been selected came from an effort with an expiratory pause that occurred before the first second and with a number of cough artifacts. There was another spirometry effort, however, that did not have a pause and had a larger FEV1. The reason the other effort had not been selected was that it did not meet ATS-ERS criteria for back-extrapolation whereas the first effort did.

     Spiro_EV_Pause

    The technician that performed the test and selected the effort has over a half-dozen years of experience in our lab and has worked in other PFT labs as well. I learned that she had thought very hard about which effort to select before choosing the one she did. An interesting side note to this is that when the test software was asked to select which FEV1 should be reported it also chose the same effort as our technician.

    When we train technicians in our lab back-extrapolation is one of the spirometry quality issues we emphasize. My experience has been that efforts with large amounts of back-extrapolation tend to overestimate the FEV1. When we teach how to select spirometry efforts we try to make the point that you can’t select the highest FEV1 without also looking at test quality. It is this point that caused the technician to select the FEV1 she did.

    We have some pretty extensive technician training materials that we developed over the years, but teaching about a sub-one second expiratory pause is not in there, because I never once thought it was necessary. To me it is self-evident that if a patient does not have a clean exhalation during the first second then the FEV1 is not accurate. I think, however, this highlights a personal blind spot. It also highlights a difference in how technicians learn about PFTs now and how I learned about PFTs 40 years ago.

    I learned spirometry using a counterweighted water-seal bell spirometer with a kymograph. A felt-tipped pen on the counterweight drew a line on graph paper that had been taped to the kymograph drum. After a set of tests I had to take the graph paper and use a plastic overlay to determine where the FEV1 was on the tracing. I then had to take the numbers from the graph paper and multiply them by a bell correction factor and BTPS to get the actual FVC and FEV1. Similarly I had to calculate the FEV1/FVC ratio and percent predicted for all the test values.

    I am not in any way advocating that we go back to that level of technology, nor is this a diatribe about how easy technicians have it now (yes, I walked six miles to school every day, uphill both ways, through drifts of snow and blinding heat). The point is that it made a direct connection for me between what a patient did during the test and what the results were. Now everything is handled by a computer, and although a technician can see the patient perform the test and see what the results look like afterward, everything that happens in between those two events is a black box to them. This means that technicians are far more likely now to learn rules about testing without really understanding where those rules come from and what limits the rules have.

    So, using this as a teaching moment, I showed the technician that since the volume-time curves of the two efforts overlapped almost completely up until the pause, this means that the back-extrapolation did not likely affect the FEV1. I also showed that since the patient paused their exhalation before the first second there was no way the selected effort could have an accurate FEV1. Finally I pointed out that if the first FEV1 had been selected the patient would have been diagnosed with an obstructive lung disease when the other FEV1 made it clear they did not have one.

    This will be added to our teaching material. I would like to think that it will help our technicians develop a deeper understanding of the testing process but {sigh} it is more likely this will be just another rule to remember.

    As I mentioned earlier, however, the testing software also rejected the larger FEV1. This also appears to be because of back-extrapolation and highlights some serious deficiencies in the software. The full paragraph from the ATS-ERS spirometry standard on spirometry quality says (italics are mine):

    The following conditions must also be met: 1) without an unsatisfactory start of expiration, characterised by excessive hesitation or false start, extrapolated volume or EV >5% of FVC or 0.150 L, whichever is greater (fig. 2); 2) without coughing during the first second of the manoeuvre, thereby affecting the measured FEV1 value, or any other cough that, in the technician’s judgment, interferes with the measurement of accurate results [3]; 3) without early termination of expiration (see End of test criteria section); 4) without a Valsalva manoeuvre (glottis closure) or hesitation during the manoeuvre that causes a cessation of airflow, which precludes accurate measurement of FEV1 or FVC [3]; 5) without a leak [3]; 6) without an obstructed mouthpiece (e.g. obstruction due to the tongue being placed in front of the mouthpiece, or teeth in front of the mouthpiece, or mouthpiece deformation due to biting); and 7) without evidence of an extra breath being taken during the manoeuvre.

    In this particular case, it is clear that clauses #2 and #4 are not implemented in the software. I realize that not all of the clauses in this statement can be implemented in software (#5 and #6, for example) but pauses or hesitations within the first second should be amenable to software analysis. Is this also a blind spot on the part of the manufacturer? Do they also think that a sub-one second pause is self-evident? I don’t have an answer to that, nor do I know how widespread a problem this is. Answers to technical questions like this are hard to get because oftentimes the answer may only be known to a single programmer or project manager, and manufacturers tend to be disinclined to answer this kind of question in the first place.
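    For what it's worth, the back-extrapolation limit in clause 1 shows how simple some of these rules are to express in code. This is only a sketch of the published criterion, not any vendor's actual implementation:

    ```python
    def ev_acceptable(ev_litres, fvc_litres):
        """ATS-ERS 2005 start-of-test criterion: the extrapolated volume (EV)
        must not exceed 5% of the FVC or 0.150 L, whichever is greater."""
        return ev_litres <= max(0.05 * fvc_litres, 0.150)

    print(ev_acceptable(0.12, 4.0))  # True: the limit is 0.20 L (5% of 4.0 L)
    print(ev_acceptable(0.18, 2.0))  # False: the limit is max(0.10, 0.150) = 0.150 L
    ```

    The hesitation check in clause 4 would admittedly take more work (it requires examining the flow signal for a cessation of airflow), but it is no less amenable to automation.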

    I think the lesson is that neither your staff nor your test systems may be aware of the effect that sub-one second pauses have on FEV1. You and I may think it is self-evident, but this may be a blind spot for us, our staff training and our equipment.

    Reference:

    Brusasco V, Crapo R, Viegi G. ATS/ERS Task Force: Standardisation of spirometry. Eur Respir J 2005; 26: 319-338.


  • PACO and DLCO

    Patients are advised not to smoke prior to DLCO testing, primarily because it increases carboxyhemoglobin. The effect of COHb on DLCO has been well studied, but COHb is not often measured before DLCO testing. Alveolar carbon monoxide (PACO) can be measured however, and there is a good correlation between PACO and COHb.

    The single-breath DLCO calculation assumes that PACO is zero. At first glance this is a reasonably good approximation since a non-smoker will normally have a PACO of less than 5 ppm. This is no more than 0.17 percent of the 0.3% CO concentration (3000 ppm) used for testing which is negligible. Even so, smokers can have significantly elevated COHb levels and COHb increases during testing.
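    The arithmetic behind that 0.17 percent figure:

    ```python
    # Back-pressure from a normal non-smoker's alveolar CO, relative to the
    # 0.3% (3000 ppm) CO concentration in the DLCO test gas.
    test_gas_ppm = 3000
    paco_ppm = 5  # upper end of normal for a non-smoker
    backpressure_pct = 100 * paco_ppm / test_gas_ppm
    print(round(backpressure_pct, 2))  # 0.17 percent, a negligible fraction
    ```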

    Elevated PACO and COHb levels will decrease DLCO. PACO back-pressure is estimated to be responsible for about 40% of the decrease and the anemia effect of COHb about 60%. Since PACO and COHb are usually in equilibrium in the lung, the effects are combined and DLCO is corrected according to COHb.

    COHb has been estimated to increase by 0.5% to 0.7% with each single-breath DLCO test. The ATS-ERS statement on DLCO testing acknowledges this but does not set an upper limit on the number of tests performed in a single session. My PFT Lab has set an arbitrary upper limit of four attempts, but since DLCO decreases by approximately 1% for every 1% increase in COHb, by the fourth attempt DLCO may have declined by over 2%. This is well within the reproducibility requirements for the DLCO test, however, and DLCO inter-test variability is likely more related to factors such as inspired volume and breath-holding time.
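    Those numbers can be strung together into a rough projection. The 0.5-0.7% COHb rise per test and the roughly 1% DLCO decline per 1% COHb rise come from the studies cited below; the linearity and the assumption of no CO washout between attempts are simplifications of mine:

    ```python
    def dlco_decline_pct(attempt, cohb_rise_per_test=0.7):
        """Estimated percent decline in measured DLCO by the Nth attempt,
        assuming DLCO falls ~1% per 1% rise in COHb and each prior test
        in the session raised COHb by cohb_rise_per_test percent."""
        return (attempt - 1) * cohb_rise_per_test

    for n in range(1, 5):
        print(n, round(dlco_decline_pct(n), 1))  # attempt 4: ~2.1% decline
    ```

    Using the lower 0.5% figure instead gives about a 1.5% decline by the fourth attempt, which is why "over 2%" is a worst-case estimate.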

    A larger concern is that smokers can have significantly elevated levels of COHb. I have seen at least one study that estimated that half of all smokers have a COHb level greater than 5%. COHb levels tend to correlate with smoking status, but smokers are often poor reporters of their smoking habits and ex-smokers often relapse. Exhaled CO has been and can be used to determine whether patients are being compliant with smoking reduction programs (6 ppm is the usual cutoff for normalcy).

    There is an oxygen re-breathing technique for estimating COHb, but this requires a three-minute period of breathing 100% oxygen followed by a three-minute period of rebreathing into an anesthesia bag equipped with a soda lime filter. This was considered a rapid technique for the time it was developed (1960) but it was tested using a small number of young, healthy subjects (6) and has not been verified.

    Highly sensitive solid-state CO analyzers were developed in the 1980’s and have been used primarily to monitor patients’ smoking status. Using these devices, a number of investigators have measured PACO from an alveolar sample obtained after a 20-second room-air breath-holding period. They showed that COHb is approximately equal to PACO (in ppm) x 0.20, with a range of 0.16 to 0.25. The range in factors has several causes, one of which is that these devices are sensitive to exhaled hydrogen, which is elevated in some subjects for metabolic reasons. It has also been shown that airway obstruction can cause the COHb calculated from PACO to be underestimated, by as much as 3% when the obstruction is severe.
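    That relationship makes COHb estimation a one-line calculation. A hedged sketch (the 0.20 factor and its 0.16-0.25 range are from the studies cited below; the example PACO value is made up):

    ```python
    def cohb_from_paco(paco_ppm, factor=0.20):
        """Estimate %COHb from a room-air alveolar CO measurement (ppm).
        Reported conversion factors range from about 0.16 to 0.25."""
        return paco_ppm * factor

    paco = 25  # hypothetical smoker's alveolar CO in ppm
    print(cohb_from_paco(paco))                                     # midpoint: 5.0 %COHb
    print(cohb_from_paco(paco, 0.16), cohb_from_paco(paco, 0.25))   # range: 4.0 to 6.25
    ```

    The spread between the low and high factors is the error bar I mentioned: for this hypothetical patient the estimate spans more than two percentage points of COHb.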

    Although their overall contribution is not great in most patients, elevated levels of COHb and PACO can reduce the accuracy of DLCO measurements. Although the error bar is larger than I would like, COHb can be estimated from a room-air measurement of PACO. I would like to suggest that pulmonary function test system manufacturers add the option of measuring PACO prior to DLCO testing. Alternatively, a PFT Lab could acquire a CO analyzer used for monitoring smokers and estimate COHb through a manual calculation. In either case this would improve the accuracy of DLCO testing and would also help monitor and document a patient’s smoking status. 

    References:

    Brusasco V, Crapo R, Viegi G. ATS/ERS Task Force: Standardisation of the single-breath determination of carbon monoxide uptake in the lung. Eur Resp J 2005; 26: 720-735.

    Henderson M, Apthorp GH. Rapid method for estimation of carbon monoxide in blood. Brit Med J. 1960 2: 1853-1854.

    Jarvis MJ, Belcher M, Vesey V, Hutchison DCS. Low cost carbon monoxide monitors in smoking assessment. Thorax 1986; 41: 886-887.

    Kleerup EC, Zeider MR, Fedor BC, Kim HJG, Tashkin DP. Adjustment of diffusing capacity (DLCO) using exhaled carbon monoxide (ECO). Amer J Respir Crit Care Med 2011; 183: A6292.

    Leech JA, Martz L, Liben A, Becklake MA. Diffusing capacity for carbon monoxide: The effects of different derivations of breathhold time and alveolar volume and of carbon monoxide back pressure on calculated results. Amer Rev Resp Dis 1985; 132: 1127-1129.

    McNeil AD, Owen LA, Belcher M, Sutherland G, Fleming S. Abstinence from smoking and expired-air carbon monoxide levels: lactose intolerance as a possible source of error. Amer J Public Health 1990; 80: 1114-1115.

    Togores B, Bosch M, Agusti AGN. The measurement of exhaled carbon monoxide is influenced by airflow obstruction. Eur Resp J 2000; 15: 177-180.

    Wald NJ, Idle M, Boreham J, Bailey A. Carbon monoxide in relation to smoking and carboxyhaemoglobin levels. Thorax 1981; 36: 366-369 


  • DLCO, by the numbers

    I was recently contacted by a physician looking for an illustration or diagram to help make gas exchange and DLCO more understandable. We’ve all seen the diagram of the alveolus with a capillary stretched around it and with oxygen and carbon dioxide exchanging across the membrane. I think I first saw it in Comroe’s “The Lung”, published in the 1960’s, but it may well be older than that. It’s hard to improve on this and dozens of versions have been made of it.

    Alveoli

    He said something that got me thinking: “for instance in a preoperative setting … all we know is a number on a seedy print out and all we use is a DLco % to tell us what to do!”. When I review reports I can access all of the raw data from all of a patient’s efforts, so there’s a lot I can see about test quality that doesn’t show up on the final report. So what is a reduced DLCO test trying to tell you when all you have are numbers to look at?

    The numbers can actually tell you quite a bit, but before looking for a cause for a low DLCO you need to start by getting a sense of the test’s quality. To do this for DLCO, the VA and the Inspired Volume should also be reported.

    Trying to determine DLCO test quality without other pulmonary function test results will be difficult because the results from spirometry and lung volume tests can be used both to assess DLCO quality and to guide the interpretation. Inspired Volume, for example, should be at least 90% of the FVC. When the Inspired Volume is less than this, the test gas mixture will probably not be as well distributed through the lung as it should be. When the Inspired Volume is markedly low compared to the FVC, then DLCO will likely be underestimated as much from a low VA as from poor gas distribution.

    VA is a single-breath TLC measurement. Because it is a single-breath measurement it is almost always less than TLC measured by helium dilution, nitrogen washout or plethysmography. As long as VA is within 10% of the patient’s measured TLC, it is likely that the DLCO test gas was well distributed. If VA is significantly lower than TLC, this is likely due either to a suboptimal inspired volume or to poor gas mixing from airway obstruction. There is a correlation between airway obstruction and the difference between VA and TLC, so a low VA is to be expected when there is significant airway obstruction and is acceptable as long as the Inspired Volume is adequate. If not, you will need to assume that the DLCO is likely underestimated.
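    These two quality checks can be reduced to a simple screening function. This is only a sketch of the rules of thumb in the preceding paragraphs; the 90% thresholds are this blog's rules of thumb, not a formal standard:

    ```python
    def dlco_quality_flags(inspired_vol, fvc, va, tlc):
        """Screen a single-breath DLCO effort for quality concerns.
        All volumes in litres; returns a list of warning strings."""
        flags = []
        if inspired_vol < 0.90 * fvc:
            flags.append("Inspired Volume <90% of FVC: poor test-gas distribution likely")
        if va < 0.90 * tlc:
            flags.append("VA more than 10% below TLC: suboptimal inspiration or poor mixing")
        return flags

    # Hypothetical effort: a shallow inspiration and a low VA trip both flags
    for flag in dlco_quality_flags(inspired_vol=3.1, fvc=4.0, va=4.8, tlc=6.0):
        print(flag)
    ```

    An empty list doesn't prove the test was good, of course; it only means these two numeric screens didn't catch anything.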

    Correcting for hemoglobin is recommended but not required. The lack of hemoglobin correction only becomes an issue when the DLCO is slightly below the normal range because anemia may be the cause of the reduction, not a gas exchange defect.

    If there is reason to believe a DLCO test is underestimated, it is probably not possible to estimate by how much. Having said that, unless a DLCO test is grossly suboptimal (and should not have been reported in the first place), it is unlikely to be underestimated by more than about 25%. My experience is that DLCO is a fairly robust test; patients can mis-perform it in many different ways and the results still come out reasonably correct. Like anemia, underestimation is an issue mostly when you are trying to determine whether the results are within normal limits.

    DLCO is essentially a measure of the lung’s functional surface area and will be reduced when lung perfusion, volume or ventilation are reduced. If all you have are the DLCO results then VA can tell you something about a patient’s lung volume and gas mixing, but only in a negative sense. If VA is within normal limits for the patient’s TLC then you can probably rule out significant restrictive and obstructive diseases, which then suggests that reduced perfusion is a primary cause. Unfortunately, VA (and DL/VA) can be reduced in both restrictive and obstructive disease, so this is where spirometry and lung volume measurements become important.

    DLCO is often reduced in restrictive diseases. The degree to which it is reduced relative to the decrease in TLC can be used to differentiate between interstitial diseases and chest wall/neuromuscular diseases. This is about the only place where DL/VA may be of some use (DL/VA is not DLCO normalized for lung volume, it is KCO!). The basic rule of thumb is that DL/VA will be normal or reduced in interstitial diseases and elevated in chest wall and neuromuscular diseases. To use DL/VA, however, you need to know that VA is reduced because TLC is reduced and, furthermore, that VA is essentially the same as the measured TLC.

    A reduced DLCO in the presence of significant airway obstruction will, of course, suggest COPD. A reduced DLCO without airway obstruction or restriction will, of course, suggest a perfusion limitation such as pulmonary emboli or hepato-pulmonary syndrome.

    DLCO can be used to assess a patient for lung resection, lobectomies and pneumonectomies. A simple approach is to estimate how much lung tissue will be removed as a fraction of the total lung volume and then estimate that DLCO will be reduced by the same amount. This is true to an extent, but a more accurate assessment will use a V/Q scan and take the perfusion and ventilation of the target lung volume into consideration as well. A post-pneumonectomy patient will have a reduced DLCO but DL/VA (okay, so there is another use for DL/VA) will be normal as long as the remaining lung is normal.
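    The simple anatomic estimate works out like this. The 19-bronchopulmonary-segment convention is an assumption of mine (it is a common clinical convention, not something stated above), and a V/Q scan-weighted estimate would replace the segment fraction with the measured perfusion fraction:

    ```python
    def post_resection_dlco(dlco_pre, segments_removed, total_segments=19):
        """Predicted post-operative DLCO, assuming DLCO falls in proportion
        to the fraction of functional lung removed (19-segment convention)."""
        return dlco_pre * (1 - segments_removed / total_segments)

    # Hypothetical right lower lobectomy (5 of 19 segments), pre-op DLCO 24.0
    print(round(post_resection_dlco(24.0, 5), 1))  # about 17.7 ml/min/mmHg
    ```

    As the text notes, this simple proportion overestimates the loss when the resected tissue was poorly perfused to begin with, which is exactly when the V/Q-weighted version earns its keep.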

    DLCO, VA and Inspired Volume can say a lot about lung function. DLCO is a critical component in the differential diagnosis of obstructive and restrictive lung diseases but to use it you first need to assess test quality with Inspired Volume and VA and only then can you use VA to help interpret the results.  


  • Should complete PFTs always be done on a first visit?

    This is not something I have any real influence over because the tests ordered on a patient’s first visit to the PFT lab are going to be determined by the ordering patterns of the referring physician and not by what I think. It’s still a worthwhile question, however.

    There are no standards for PFT ordering. There are recommendations from the ATS, ERS, ACCP and NIH regarding patient diagnosis and treatment for a variety of pulmonary diseases, and buried inside them are some guidelines for PFT testing. What I’ve seen, however, is that these guidelines are honored far more in the breach than in the observance. As an example, for asthmatics the NIH recommends spirometry during an initial visit, after asthma has been stabilized, during an exacerbation, after an exacerbation and at least every 1 to 2 years otherwise. How often do you see this guideline followed in more than spirit?

    I never used to think about this too much but several years ago I had a long conversation with a Pulmonary lab manager at a tertiary care hospital in Australia. One of the things he said was that all patients newly referred to the Pulmonary division there always had a complete set of PFTs, including post-bronchodilator spirometry, MIP & MEP and an ABG before they even saw a pulmonary physician. The ABG may be a bit of overkill, but since that time I now spend a lot less time on the front lines and lot more time reviewing PFT reports. I have a more global view of patient management (or at least I like to think I do) and I have to wonder if complete PFTs on a first visit shouldn’t be a standard approach.

    Most new Pulmonary patients at my hospital have just spirometry on the first visit. They may return for several visits, getting just spirometry each time (even when the results are abnormal), before they ever have any additional tests. I’ve talked to physicians about this a few times and the answer I’ve gotten most often is that they usually don’t feel they need more than spirometry to manage a patient’s care. The second most common answer is that this approach is more economical, that they don’t order any tests they don’t think the patient needs.

    Physicians are under a great deal of pressure to contain costs so I understand why tests shouldn’t be ordered if they aren’t needed, but I also have to wonder if this approach sometimes leads to greater costs in the long run, including the cost in patient time. The time that patients spend traveling and waiting for medical care is often overlooked. Physician time is very expensive and valuable and in any department or medical office there is usually a phalanx of receptionists, medical assistants and nurses who are there to maximize a physician’s face time with patients. Patients, on the other hand, often have to take time off from work, spend considerable time traveling, wait to see a physician and then afterward wait for a diagnosis and treatment. Patient time isn’t considered part of cost containment.

    This may be a somewhat extreme example, but it’s one I see frequently enough that it is not all that unusual. A new patient complaining of shortness of breath sees a pulmonary physician and has spirometry. Spirometry is normal-ish and there are no prior PFTs to compare it to. The patient, still complaining of SOB, has a follow-up visit several months later and again has spirometry, with no change. Still SOB, there is another follow-up visit months later, again with spirometry, which is again normal-ish and unchanged. Finally, six months to a year after the initial visit the patient has a complete set of PFTs and suddenly the shortness of breath is explained by a low DLCO.

    That was not an entirely fair example because I am sure that many patients are diagnosed and treated correctly during the initial visit or soon after by follow-up x-rays or something similar. Still, it is not that uncommon either. And I have to wonder, if complete PFTs had been performed as part of the initial visit, just how much patient time would have been saved and how much better informed and focused the physician would have been the first time they saw the patient.

    I would like to suggest that complete PFTs (which to me means spirometry pre- and post-bronchodilator, lung volumes and diffusing capacity; we can quibble about some of the other tests another time) should be standard for all initial pulmonary physician appointments. Yes, I agree that in many cases this will turn out to have been unnecessary, but I will also say that you can’t know ahead of time for which patients this will be the case. The advantage will be that the physician will have a much better understanding of the patient’s pulmonary status, a baseline will be established and the patient’s time will be saved.

    I think the value of a baseline to a patient’s long-term care is underestimated, and I don’t think it is considered as a factor in cost containment. My lab’s database goes back to 1989. Patients who had complete PFTs ten or twenty years ago are returning with new pulmonary problems and we now have a baseline for comparison. Sometimes the complaints are new but the baseline shows the pulmonary problems aren’t, and sometimes it shows just how much lung function has declined. In either case, a baseline is invaluable.

    I understand that medical costs need to be contained. At my age I am both a provider and a consumer of medical care. But I also think this means we need to learn how to make testing less expensive, not necessarily just order fewer tests. Cost containment also has to look at long-term costs: saving a dollar now doesn’t make sense if it costs us a hundred dollars five years in the future. Pulmonary function testing is already relatively low-cost and low-overhead (particularly when compared to radiological imaging); what needs to be done is to maximize its utilization, and making complete PFTs a standard part of initial patient visits would help do that.

    I’ll be the first to admit that I have a narrow focus. To a hammer every problem is a nail, and I’m a PFT technician, so for me the answer to every problem is a PFT. I do believe, however, that an ounce of prevention is worth a pound of cure, and that PFTs are definitely part of the ounce, not the pound.

    PFT Blog by Richard Johnston is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.