Category: pftblog

  • VA versus TLC, what this can tell you about test quality

    I’ve had a bug about lung volumes lately and I guess that today is no different. A report with some odd lung volume results came across my desk and I’ve spent some time trying to figure out what the numbers are telling me about test quality.

     VA vs TLC Table

    What concerned me was the 15% discrepancy between the VA from the DLCO and the TLC measured by plethysmography. VA is measured from the insoluble component of the DLCO gas mixture (methane in this case). VA is a single-breath measurement, and for VA and TLC to be close the patient usually must have very good gas mixing inside their lungs; even when the quality of the DLCO and lung volume tests is good, VA is almost always less than TLC. So why was VA so much larger than TLC in this case?
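    Strictly speaking, VA from a single-breath test is just a tracer dilution calculation. A minimal sketch of that arithmetic (hypothetical numbers, ignoring the BTPS/STPD and dead-space corrections a real system applies):

```python
def single_breath_va(vi_l, fi_tracer, fa_tracer, vd_l=0.15):
    """Alveolar volume (L) by single-breath tracer dilution: the inspired
    volume (minus dead space) scaled by how much the tracer was diluted.
    vd_l, the combined anatomic and instrument dead space, is an assumed
    placeholder value."""
    return (vi_l - vd_l) * fi_tracer / fa_tracer

# Hypothetical effort: 4.0 L inspired, 0.3% methane inspired, 0.15%
# methane in the alveolar sample. The tracer was diluted 2:1, so
# VA works out to (4.0 - 0.15) * 2 = 7.7 L.
va = single_breath_va(4.0, 0.003, 0.0015)
```

    The point is that VA depends entirely on the measured alveolar tracer concentration, which is why the shape of the exhaled tracer waveform matters so much.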

    I first checked to see if there was a problem with the calculation of VA. Patients can perform the DLCO test erratically and it was possible that a single marginal test had been selected. This was not the case: the patient had performed three DLCO tests (one of questionable quality and two of good quality) and since the VA did not vary by more than 3% across all of them it was evidently quite reproducible.

    Usually when there is any maldistribution of ventilation it can be seen in the single-breath DLCO test as a tracer gas concentration that steadily decreases during the exhalation phase. When this occurs the tracer gas concentration in the standard ATS-ERS alveolar sample is at its highest for the entire exhalation and VA therefore tends to be on the low side. On very rare occasions however (probably fewer than 1 in 1,000 DLCO tests, and only in patients with pulmonary fibrosis) I have observed the exhaled tracer gas showing the opposite pattern: the tracer gas concentration is lowest at the beginning of exhalation and increases as exhalation continues. When this occurs VA can be higher than TLC, but there is a distinct tracer gas waveform that goes along with it. This patient’s exhaled gas waveforms were normal and even showed a slight decline during exhalation, which was more suggestive of a small amount of the kind of ventilation inhomogeneity seen with airway obstruction or normal aging.

    Next I looked at the plethysmography TGV tracings. Plethysmography is another test that many patients find difficult, and the lab’s technicians are occasionally hesitant to override the software’s calculations, so it was possible that TGV was being underestimated because of poor test quality. Once again, however, there were three good quality TGV efforts and TLC did not vary by more than 2% across them. The TGV waveforms were clean and the measured slope for each effort was spot-on as far as I could tell. When I looked at the slow vital capacity, however, I was bothered by the fact that the exhalation seemed to be chopped off, as if the test had been ended too soon and the patient hadn’t finished exhaling.

    VA_GT_TLC_3

    This was curious because the SVC and FVC were essentially the same. When I finally looked at the spirometry effort (this, by the way, is the opposite of the pattern I usually use to review reports, but the VA and TLC had caught my eye immediately and I was reviewing the report backwards) it was obvious that the patient had not exhaled completely and that the FVC was likely underestimated.

    VA_GT_TLC_2

    Since most patients have little or no problem performing a full inhalation to TLC, as a general rule of thumb I believe that when FVC and SVC are underestimated it is usually the ERV that is underestimated, not the IC. Applied to lung volume measurements, this means that it is usually the RV that is overestimated, not the TLC that is underestimated. In this case I am forced to believe that the patient had both a suboptimal inhalation and a suboptimal exhalation and that the difference between the reported FVC and their “real” FVC may be substantial. This also means that the FEV1/FVC ratio was probably overestimated and that the patient may therefore have had some mild airway obstruction, which might explain the slight decline in tracer gas during exhalation in the DLCO test.
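    A quick arithmetic sketch (hypothetical numbers) of how a chopped-off exhalation can inflate the FEV1/FVC ratio and hide obstruction:

```python
fev1 = 3.00          # L, hypothetical
reported_fvc = 3.70  # L, exhalation ended early
true_fvc = 4.20      # L, what a complete exhalation might have produced

reported_ratio = fev1 / reported_fvc  # ~0.81, looks within normal limits
true_ratio = fev1 / true_fvc          # ~0.71, suggests mild obstruction
```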

    The patient was a non-smoker in their 20s with a new diagnosis of scleroderma. They had been complaining of SOB and had never had any pulmonary function tests previously. Interstitial disease often goes along with scleroderma, but given the results and the doubts about FVC and TLC a component of asthma may be more likely. It is also likely that there was some testing naivete and that the patient just needs more practice to know what a full inhalation and a full exhalation really feel like.

    The final read on this report was “Normal lung volumes and gas exchange. Although spirometry results are within normal limits the FVC is likely underestimated due to an early termination of exhalation and for this reason an obstructive defect cannot be excluded.”

    VA should never be more than TLC. When it is, listen to it because it is trying to tell you something about test quality.

    Creative Commons License
    PFT Blog by Richard Johnston is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

  • VA, DLCO and COPD

    Although the technology used to perform the single-breath DLCO test has improved since it was first developed in the 1950s, the essential concepts and equations have not changed significantly. Probably the most important advance has been the introduction of rapid-response, real-time gas analyzers in the 1990s. Prior to that time the patient’s washout and sample volumes had to be preset, which always involved a certain amount of guesswork when a patient was significantly obstructed or restricted. With a real-time gas analyzer it is possible to inspect the exhaled gas tracings after the test has been performed, determine when washout has occurred, and then select the appropriate location for the sample volume. This has improved single-breath DLCO test quality but at the same time it has also exposed some of the test’s limitations.

    The single-breath DLCO test attempts to simplify what is actually a very complex process. One of the key assumptions of the single-breath DLCO calculations is that the inspired gas mixture is evenly distributed throughout the lung. This is not really true even for patients with normal lungs and in general, inspired gas follows the last in-first out rule. In patients with lung disease this inhomogeneous filling and emptying can be magnified and a maldistribution of ventilation is often most evident in patients with COPD.

    A second key assumption is that a small sample of exhaled air taken near the beginning of exhalation (the alveolar sample) can accurately represent the entire lung. The reason that it is necessary to rely on a small sample rather than the entire exhaled volume has to do mostly with time. The length of time that the inspired gas sample resides in the patient’s lung (breath-holding time, BHT) needs to be determined fairly precisely. Since an entire exhalation is presumed to take on the order of 6 seconds (or longer, of course) then at what point do you stop measuring time? Beginning? Middle? End? By using a small sample of exhaled air near the beginning of exhalation when flow rates are relatively high and a gas sample can be acquired in a short period of time it is possible to be more precise. Even so there have been different approaches to measuring the end of the BHT. The current ATS-ERS standard recommends the Jones-Meade approach which uses the middle of the alveolar sampling period as the end of BHT, whereas both the Ogilvie and ESP approach used the beginning of the alveolar sampling period. The selection of BHT technique has been shown to produce measurable and systematic differences in calculated DLCO but regardless of this a small sample reduces the error bar.
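    The difference between the BHT conventions is easy to see as code. A sketch using simple event timestamps; the starting points (30% of inspiratory time for Jones-Meade, start of inspiration for Ogilvie) are from the usual descriptions of these methods and are assumptions here, since only the end points are discussed above:

```python
def bht_jones_meade(t_insp_start, t_insp_end, t_samp_start, t_samp_end):
    """Breath-hold time from 30% of the inspiratory time to the middle
    of the alveolar sampling period (the ATS-ERS recommended method)."""
    start = t_insp_start + 0.3 * (t_insp_end - t_insp_start)
    return (t_samp_start + t_samp_end) / 2.0 - start

def bht_ogilvie(t_insp_start, t_samp_start):
    """Breath-hold time from the start of inspiration to the start of
    the alveolar sample."""
    return t_samp_start - t_insp_start

# Inspiration from 0 to 2 s, alveolar sample collected from 10 to 11 s:
# Jones-Meade gives 9.9 s while Ogilvie gives 10.0 s.
```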

    The insoluble component of the DLCO test gas mixture (which is most often methane in systems with real-time gas analyzers) is used to calculate VA and this is where the problems of ventilation inhomogeneity and a small exhaled gas sample converge.

    The exhaled gas waveforms from a patient without lung disease typically show a flat methane tracing, which indicates that the inhaled gas mixture was well mixed within their lungs. This is confirmed by the fact that their VA is usually almost the same as their TLC.

    Normal_DLCO

    In contrast, the exhaled gas waveforms from a patient with COPD often show a methane tracing that decreases throughout exhalation and a VA that is noticeably lower than their TLC. This shows that the inhaled gas mixture was unevenly distributed in their lungs.

    Emphysema_DLCO_1

    Since the methane concentration varies throughout exhalation it is apparent that VA will be quite different depending on where the methane is measured. The following graph uses data taken from the above exhaled gas and volume tracings to show how the calculated VA varies during exhalation.

    VA_vs_Exh_Vol
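    The dependence shown in the graph can be mimicked with a toy declining methane trace (all numbers hypothetical):

```python
# Hypothetical methane concentrations (as fractions) from successive
# sampling windows of an obstructed patient's exhalation; the tracer
# declines as poorly ventilated regions empty later.
vi, vd, fi_ch4 = 3.5, 0.15, 0.003  # inspired volume (L), dead space (L), inspired CH4
fa_ch4_by_window = [0.0020, 0.0018, 0.0016, 0.0015]

# VA by dilution for each window: later samples contain less methane
# and therefore yield a larger calculated VA.
va_by_window = [(vi - vd) * fi_ch4 / fa for fa in fa_ch4_by_window]
```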

    The ATS-ERS statement on DLCO testing allows for a certain amount of variability in both washout and sample volumes. For a vital capacity greater than 2.0 liters, washout can be from 0.75 to 1.00 liter and the sample can be from 0.50 liter to 1.00 liter. In patients with normal lungs variations in washout and sample volume within ATS-ERS guidelines are unlikely to make any significant difference in calculated DLCO.

    In patients with COPD however, VA is usually significantly less than TLC and there tends to be a direct correlation between the degree of airway obstruction and the difference between VA and TLC. {see Single Breath TLC Measurements} This raises the question as to just how much the low DLCO seen in patients with COPD is due to the actual lung disease and how much is due to a low VA. The answer, somewhat surprisingly, is that the variation in DLCO is much smaller than the change in VA over a wide range of washout volumes. This means that a DLCO using the ATS-ERS guidelines is (probably) both reasonably correct and reasonably appropriate to the degree of lung disease.

    This is not to say there is no difference. DLCO measured using an alveolar sample from late in the exhalation of a COPD patient tends to be higher than when measured from an alveolar sample from early in the exhalation. This increase is largely attributable to the larger VA measured from that part of the exhalation. Patients with normal lungs however, show the opposite pattern. DLCO decreases when using an alveolar sample from late in exhalation and this is probably due to the fact that it comes in part from a period when lung volume and the corresponding surface area are lower.
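    One way to see why the effects partially offset is the standard single-breath calculation itself, sketched below with hypothetical numbers (dead-space and gas-condition corrections omitted). Sampling later in exhalation lowers the measured tracer concentration, which raises VA, but it also lowers the estimated starting CO concentration, which shrinks the log term:

```python
import math

def dlco_sb(va_ml_stpd, pb_mmhg, bht_s, fico, fi_tracer, fa_tracer, faco):
    """Single-breath DLCO (ml/min/mmHg). FACO0, the alveolar CO at the
    start of the breath-hold, is the inspired CO scaled by the same
    dilution ratio the tracer underwent."""
    faco0 = fico * (fa_tracer / fi_tracer)
    return (va_ml_stpd * 60.0 / ((pb_mmhg - 47.0) * bht_s)) * math.log(faco0 / faco)

# Hypothetical test: VA 5.0 L, Pb 760 mmHg, 10 s breath-hold, 0.3% CO
# and tracer inspired, tracer diluted to 0.15%, CO measured at 0.08%.
dlco = dlco_sb(5000, 760, 10, 0.003, 0.003, 0.0015, 0.0008)
```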

    The difference between VA and TLC can provide an index of ventilation inhomogeneity but it is unclear how reproducible this measurement is and how sensitive it is to overall DLCO test quality and, in particular, inspired volume. In patients with normal lungs, assuming that inhalation during the DLCO maneuver is to TLC, VA is relatively insensitive to inspired volume. In patients with COPD I suspect that the measured VA is going to depend on how close a patient is able to get to RV before inhaling to TLC. Just as important as all this, it’s unclear what a VA/TLC index is actually measuring and what its clinical relevance is.

    Prediletto et al suggest the use of an index, Delta VA/Ve, that is the change in the percentage of VA per liter of exhaled gas. They note that higher values of this index indicate increasing maldistribution of ventilation. This may be an improvement over the VA/TLC ratio but it is not clear to me that the Delta VA/Ve relationship is linear over the range of exhaled volume.

    Since VA is sensitive to washout volume in patients with COPD, this means that DLCO/VA (KCO) is going to be similarly sensitive. Although DLCO/VA has its place in assessing gas exchange (mostly in restrictive lung disease) I think that its use in patients with COPD is suspect. {see DL/VA is really K in disguise.}

    Before rapid-response, real-time gas analyzers came into widespread use in clinical DLCO testing, the dependence of VA on washout volume in patients with ventilation inhomogeneities was known to researchers but little could be done about it during routine measurements. Now we can routinely see its effects but it is not clear what should be done about them. Several different approaches have been proposed to improve the accuracy of DLCO measurements in general and in COPD in particular but there is no consensus at this time. I suspect that a future answer, if any, will depend on integrating DLCO and VA over the entire exhalation.

    In my lab the general recommendation to technicians is that whenever it is difficult to determine where the alveolar plateau begins (i.e., when the methane line gently slopes downwards and there is no distinct alveolar plateau) to default to a 1.00 liter washout and 0.50 liter sample volume in order to standardize results as much as possible. This is not a perfect answer but it at least acknowledges the problem and attempts to contain it.

    References:

    Beck KC, Offord KP, Scanlon PD. Comparison of Four Methods for Calculating Diffusing Capacity by the Single Breath Method. Chest 1994; 105:594-600

    Brusasco V, Crapo R, Viegi G editors. ATS/ERS Task Force: Standardization of Lung Function Testing. Standardization of the single-breath determination of carbon monoxide uptake in the lung. Eur Resp J 2005; 26:720-735

    Dressel H, Filser L, Fischer R, de la Motte D, Steinhaeusser W, Huber RM, Nowak D, Jorres RA. Lung diffusing capacity for nitric oxide and carbon monoxide: Dependence on breath-hold time. Chest 2008; 133:1149-1154

    Ferris BG, ed. Epidemiology Standardization Project. Am Rev Resp Dis 1978; 118:6(Part 2;1-120)

    Graham BL, Mink JT, Cotton DJ. Overestimation of the Single-Breath Carbon Monoxide Diffusing Capacity in Patients with Air-Flow Obstruction. Am Rev Resp Dis 1984; 129:403-408

    Graham BL, Mink JT, Cotton DJ. Effect of breath-hold time on DLCO(SB) in patients with airway obstruction. J Appl Physiol 1985; 58:1319-1325

    Jones RS, Meade FA. Pulmonary Diffusing Capacity: an improved single-breath method. Lancet 1:94-95

    Leech JA, Martz L, Liben A, Becklake M. Diffusing Capacity for Carbon Monoxide: The Effects of Different Derivations of Breathold Time and Alveolar Volume and of Carbon Monoxide Back Pressure on Calculated Results. Am Rev Resp Dis 1985; 132:1127-1129

    Ogilvie CM, Forster RE, Blakemore WS, Morton JW. A Standardized Breath Holding Technique For The Clinical Measurement Of The Diffusing Capacity Of The Lung For Carbon Monoxide. J Clin Invest 1957; 36:1-17

    Prediletto R, Fornai E, Catapano G, Carli C. Assessment of the alveolar volume when sampling exhaled gas at different expired volumes in the single breath diffusion test. BMC Pulmonary Medicine 2007; 7:18


  • Open-access on-line medical journals

    I have been researching different pulmonary function topics for quite a few years. The medical libraries I’ve frequented were, of course, originally all paper-based and to be able to find an article the library or one of the department’s physicians had to subscribe to the journal in question. In the last fifteen years we have all seen an explosion in on-line publishing and it has become much easier to research articles. I vigorously applaud the pulmonary medicine journals (Chest, American Journal of Respiratory and Critical Care Medicine, European Respiratory Journal, Journal of Applied Physiology, Thorax) for having opened part or all of their archives to anybody who wants to search and download articles.

    There are a number of other journals, however, that remain entirely behind paywalls. I was reminded of this recently while looking for an article on diffusing capacity from the 1970’s only to find that it would cost me $31 to access it (and then only for 24 hours). Although strictly speaking this does not prevent anyone from accessing these articles, anybody who does has to have deep pockets and this is acting as a significant barrier to the ability of individuals and institutions to obtain and to share information.

    Science in general and medicine in particular have advanced and improved because of open collaboration in which information, results and techniques are shared. Medical journals came into existence as the publishing arm of different medical societies for just this purpose. Over time, however, medical journals seem to have developed a life of their own, and just how much they owe or belong to the medical societies and the principles that created them has become questionable.

    The field of publishing is in a rapid state of change and publishing houses everywhere are living in fear of these changes. There are now a number of open-access on-line journals that do not charge for reader access. There is also a movement of researchers and authors who are deliberately refusing to submit articles to journals that are kept behind paywalls by their publishers.

    So why are medical journals so expensive? The journals and publishers would say that the prices are necessary to ensure that they are adequately compensated for their work and the role they play in maintaining a scholarly reputation, arranging peer review, and editing and indexing articles. They will probably also say that they can ensure fair access for developing nations through differential pricing or financial aid from the more developed countries.

    It’s not so clear to me that the costs are what they say they are. It used to be that typesetting was an expensive and somewhat arduous process but for some time now desktop computers and software have been able to do this job relatively quickly and inexpensively. Many if not most editorial staffs that review and select articles for publication are not paid. Authors are not paid for their work and I have heard that some journals are now charging authors “page costs” in order for their articles to be published. I would have thought that physically publishing a medical journal would be expensive but have found several on-line print-on-demand services that will print and bind a 100 page book for less than $3 each in quantities of 100.

    This is not to say there are no costs, and I would guess that the time spent by reviewers and editors is the most significant component, but the only explanation I have been able to come up with for the rather exorbitant amounts charged by the top tier journals is that they are premium brands with a monopoly on archived articles and, as such, very profitable.

    The “brand recognition” factor for the top tier journals has been built over time and cannot be easily duplicated by any competing journal, particularly a new one. The top tier medical journals are “premium brands” where it is far more likely for an article to be read by a wide audience and far more prestigious to have an article published there in the first place.

    Does this prestige mean that the quality, pertinence or importance of the articles that are published in a top tier journal is that much greater than published elsewhere? To some extent the answer has to be yes. The top tier journals are usually able to attract a high quality editorial staff (even when they aren’t paid in dollars they are paid in prestige). Because they have a wide audience they also tend to attract more important and pertinent articles as well.

    Does this prestige mean that the articles published in the top tier journals are always of high quality, pertinence and importance? No, it doesn’t. The top tier journals have been hit as often by scientific fraud as any of the other journals. Articles co-authored by physicians or scientists who have published in the same journal previously are more likely to get published regardless of whether anything new is being said (I have seen half a dozen articles that all came from the same study, each given a slightly different spin, and published in different journals simply because one or more of the co-authors was a “name”).

    Of greatest concern to me is the monopoly on older articles. I sincerely doubt that the original authors wanted their work to be sitting behind a paywall, probably more so since they are not benefiting from the situation in any way. It may even be a net negative, because one important measure of value for authors is the number of other authors who cite their article, and articles behind paywalls are less likely to be read and cited. But authors sign their rights away in the very beginning so they certainly don’t have any say (although it is not clear to me how legally valid this is, since research is often paid for by government or institutional grants and the authors may not actually have the ability to sign away the rights to the research results, but then I most definitely am not a lawyer).

    I would like to see all of the medical societies review and re-think their stance on their medical journals. The original purpose, which presumably was of collaboration and sharing, is not being well served by the current system. This issue is probably not particularly high on anybody’s agenda, however. I really have no idea how profitable a journal is for any particular medical society but it is likely that for every top tier journal there are a number of people involved that have a vested interest (whether it be political, financial, institutional or social) in the system as it currently exists. I am probably being cynical but I am not expecting any significant changes to come from the traditional medical journals. Unless they prepare for change however, they may well find that they have become irrelevant.

    The most successful of the open-access on-line journals have taken an approach where the author pays but the reader doesn’t. In general these publication fees are based on which journal they are intended for and the country of origin. For PLOS Medicine, an article is $2900 for an author from the United States or Europe but only $500 for an author from a third-world nation and there is no fee for an author from a fourth-world nation. BioMed Central has a similar structure and for BMC Pulmonary Medicine the charge to an American or European author would be $2055 and for Respiratory Research it would be $2445.

    These fees go to each journal’s peer-review system and to editing, publishing, maintaining and archiving the articles, just like the traditional journals. Although this system does impose a potential burden on the author (since many research grants require public dissemination of the results, the publication fees are often included in the grant), the author retains the rights to their publication. There are also few limits on the size of the article and the inclusion of a study’s raw or supporting data is encouraged. Multi-media files, including video, can be included as well.

    Another approach that I should mention in passing is that a number of universities and medical institutions have established open-access repositories for their research and the articles written there. Although there are no author costs involved I think that this also tends to fragment information and may be too highly dependent on external search engines like Google Scholar.

    Although it appears the author-fee system used by the open-access on-line journals keeps the overall costs low, most particularly for readers, it may be that in the long run a hybrid system, one that combines author fees and a (very) small reader access fee might be fairer and more sustainable.

    We are still in a transition period between the traditional paper-based publishing systems and on-line publishing. I have been an ebook reader for several years and have watched the changes occurring in the mass-market publishing field during this time. I understand the concerns of both publishers and authors yet to be asked to pay the same for an ebook that somebody else pays for a hard-cover copy of the same book continues to rankle me. There are publishers that have embraced the ebook model and seem to be making a profitable business of it so medical journals should be able to do this too. What medical journals should not expect however, is business as usual and I think they would do well to remember what their original purpose was. 

    Open-access on-line journals:

    BIOMed Central  (www.biomedcentral.com/)

    Directory of Open-Access Journals   (www.doaj.org/)

    Highwire Press   (highwire.stanford.edu/lists/freeart.dtl)

    Medscape    (www.medscape.com/)

    PLOS-ONE  (www.plosone.org/)

    PubMed Central    (www.ncbi.nlm.nih.gov/pmc/)


  • N2 washout drift throws another curve.

    Just when we thought it was safe to go back in the water, we’ve run into another N2 washout-related problem. Although it probably affects the TLC and RV calculations in a minor way, it was actually noticed in relation to spirometry.

    When spirometry is reviewed in my lab the FVC is compared to the SVC, if one has been performed. If the SVC is greater than FVC, then the FEV1/VC ratio is recalculated using the SVC. This is in line with the ATS recommendations on interpreting spirometry and does occasionally throw up a patient with airway obstruction that otherwise would not have been detected.
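    The recalculation rule itself is trivial. A sketch, assuming volumes in liters:

```python
def fev1_vc_ratio(fev1, fvc, svc=None):
    """FEV1/VC using the largest available vital capacity, per the ATS
    recommendation described above."""
    vc = fvc if svc is None else max(fvc, svc)
    return fev1 / vc

# With an SVC 0.30 L larger than the FVC the ratio drops:
# fev1_vc_ratio(3.0, 4.0) is 0.75; fev1_vc_ratio(3.0, 4.0, 4.3) is ~0.70.
```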

    I have been reviewing the raw data from all of the lung volume tests lately. My lab has a mix of equipment and performs lung volumes by helium dilution, nitrogen washout and plethysmography. I’ve mentioned previously that we went through a major upgrade in equipment and software over the summer and this extra scrutiny on lung volumes is in part because of the problems we’ve had with the nitrogen washout test.

    Today a report came across my desk that at first glance met the criteria for recalculating the FEV1/VC ratio with the SVC. The spirometry was within normal limits, but the FEV1/FVC ratio was towards the lower end of the normal range and there was some mild coving on the flow-volume loop. The SVC was about 0.30 L greater and when the FVC was replaced with the SVC, the ratio was definitely below the normal range. One of the patient’s diagnoses was asthma so the overall report was consistent with mild airway obstruction and there were no warning bells going off. But since the lung volumes were performed by N2 washout I went ahead and looked at the raw data.

    n2 washout overestimating SVC gain

    Once again the first thing I saw was drift during the washout period. This has been a chronic problem with the N2 washout test that I attribute to patient leaks. I think the leaks are a result of two factors. First, the equipment that performs the N2 washout only allows us to use what I consider to be a small mouthpiece. I have always been an advocate of using the largest mouthpiece a patient can tolerate because it cuts down on leaks, and at the moment we don’t have that option. Second, the instructions for the test state that the patient should “take slow deep breaths” during the washout period, but what I see often looks more like fast deep breaths. I think the technicians may be encouraging the patients to breathe faster and deeper than is optimal during washout, and anybody is more likely to leak when breathing this way.

    The amount of drift was relatively small and although the TLC and RV were probably overestimated, the error was likely minor and I would be hard pressed to believe it would alter the interpretation of the lung volumes as normal. When I eyeballed the drift, however, it was apparent that the SVC was being overestimated by about 0.30 L and that instead of being larger than the FVC, the SVC was actually the same as the FVC. The final read on the report was that the spirometry was within normal limits.
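    The eyeball correction amounts to assuming a constant drift rate and subtracting the volume it accumulates. A crude sketch with hypothetical numbers (real leaks are rarely this linear):

```python
drift_rate = 0.010     # L/s, estimated from a stable segment (hypothetical)
svc_measured = 4.10    # L, as reported by the software
drift_interval = 30.0  # s over which the drift accumulated (hypothetical)

# Subtracting the accumulated drift knocks 0.30 L off the SVC.
svc_corrected = svc_measured - drift_rate * drift_interval  # 3.80 L
```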

    So, in this case drift during the N2 washout didn’t significantly affect TLC and RV, but would have significantly affected the way spirometry had been interpreted. Go figure. I guess I’ll be continuing to review all the raw lung volume test data (no matter what the technique) for the foreseeable future.


  • Re-breathing DLCO, another almost unknown technique

    Re-breathing DLCO may not be well known because this technique has not been used in routine clinical testing. It has instead been primarily used to research diffusing capacity during exercise because it is able to do this without the need to interrupt or significantly alter a subject’s breathing pattern.

    Re-breathing DLCO is probably best thought of as a hybrid between the single-breath DLCO and steady-state DLCO techniques. The gas mixture and calculations are from the single-breath world but the breathing pattern is more from the steady-state side of the family.

    In order to perform a re-breathing DLCO test, a rubber bag is filled with a standard single-breath DLCO gas mixture (usually 0.3% CO, 10% He, 21% O2, balance N2). The volume of the bag is adjusted to the subject so that it empties at end-inhalation with every breath. At the end of an exhalation the subject is switched into the bag and then hyperventilates at a rate of approximately 30 breaths per minute.

    During the first several breaths, carbon monoxide and helium undergo dilution and equilibration with the patient’s FRC. An initial carbon monoxide reading is taken at this time and used as a baseline. The test usually continues for another 30 seconds or so after the baseline measurements are taken and the subject is then switched out of the breathing circuit. DLCO is then calculated by:

     DLCO = (V(lung+bag) / ((Pb − 47) × T)) × ln(FACO1 / FACO2)

    where:

     Pb = barometric pressure

    T = length of the re-breathing period, in minutes

    FACO1 = fractional CO at beginning of re-breathing period (after initial equilibration between lungs and bag)

    FACO2 = fractional CO at end of re-breathing period 

    Lung and bag volume is calculated by:

     V(lung+bag) = Vbag × (FIHe / FAHe)

     where:

    Vbag = initial volume of gas in bag (ml STPD)

    FIHe = fractional He in bag before re-breathing started

    FAHe = fractional He in bag at end of re-breathing period 
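    The two calculations can be combined in a short sketch (hypothetical numbers; gas volumes assumed to be at STPD already):

```python
import math

def rebreathing_dlco(vbag_ml, fihe, fahe, pb_mmhg, t_min, faco1, faco2):
    """Lung-plus-bag volume by helium dilution, then DLCO from the
    exponential disappearance of CO over the re-breathing period."""
    v_system = vbag_ml * fihe / fahe
    dlco = (v_system / ((pb_mmhg - 47.0) * t_min)) * math.log(faco1 / faco2)
    return v_system, dlco

# Hypothetical: a 3.5 L bag at 10% He equilibrates to 5% He, giving a
# 7.0 L lung-plus-bag volume; CO falls from 0.15% to 0.10% over 30 s.
v_system, dlco = rebreathing_dlco(3500, 0.10, 0.05, 760, 0.5, 0.0015, 0.0010)
```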

    There are several re-breathing DLCO techniques which vary primarily by bag and tidal volume size, gas mixture, and breathing circuit details. Most researchers have developed a testing circuit that continuously samples the gas mixture, in part because this makes determining the baseline relatively straightforward. After the first couple of breaths, when the mixing of gas in the bag and in the subject’s lungs has been completed, the CO concentration declines linearly when plotted on a semi-log scale. The beginning of this linear period is then used as the baseline.

    In a version of this developed by Lewis et al, a pump was used to continuously circulate gas from the re-breathing bag through a carbon monoxide analyzer during the test.

     Rebreathe_Lewis

    Another version, developed by Sackner et al, used a mass spectrometer to continuously sample the re-breathing gas mixture. Although a mass spectrometer has several advantages, in particular a small sampling rate (approximately 15 ml/min), carbon monoxide and nitrogen have essentially the same mass. For this reason carbon monoxide made with a stable isotope of oxygen, oxygen-18 (C18O), must be used instead.

    In this system a 6 liter bag was filled with 3.5 liters of test gas. The gas mixture used was 10% helium, 0.3% C18O, 0.5% acetylene, 21% oxygen, balance nitrogen and they were able to measure DLCO, tissue volume, pulmonary capillary blood flow (Qc), membrane diffusing capacity (DMCO) and pulmonary capillary blood volume (Vc) at rest and during exercise.

     Rebreathe_Sackner

    Hsia et al developed a complicated but still quite elegant system capable of measuring re-breathing DLCO, tissue volume, pulmonary capillary blood flow, DMCO and pulmonary capillary blood volume at rest and during exercise. This system is also capable of measuring the usual cardiopulmonary exercise values such as tidal volume, minute volume, respiratory rate, oxygen consumption and CO2 production.

    The basic exercise testing components consist of a turbine flow meter, non-re-breathing valve, mixing chamber and mass spectrometer. Added to this are a pneumatically actuated 3-way valve, a 6 liter bag, a bag-in-box and a volume displacement spirometer. The system is prepared by evacuating the 6 liter bag and filling the bag-in-box with the test gas mixture. Test gas mixtures typically used were 9% helium, 0.3% C18O, 6% acetylene and either 30% O2 in N2 or 90% O2.

    At the end of a normal exhalation, the pneumatically-actuated 3-way valve switches the patient to the bag-in-box system and they inhale to TLC. The pneumatically actuated valve then switches the patient to the 6 liter bag and the patient re-breathes for a period of 15 or 16 seconds.

    An advantage of this system is that the re-breathing bag volume is not pre-filled but instead determined by the subject’s inhalation from the bag-in-box system. This system has since been updated to use an infrared analyzer rather than a mass spectrometer making the C18O test mixture unnecessary.

    Measuring cardiopulmonary values

    Inhale to TLC

    Re-breathing

    A low-tech approach was developed by Marshall. The breathing circuit for this technique uses two rubber bags instead of one. In addition to the 3 liter rubber bag used for re-breathing, the circuit contains a smaller 500 ml bag that is used for obtaining the initial equilibrated gas sample from the beginning of the re-breathing period.

    To prepare for the test, the 3 liter bag is filled with 2 liters of the diffusing gas mixture and the 500 ml sample bag is evacuated and clamped so that it cannot refill. To perform the test, the patient is switched into the re-breathing circuit at the end of exhalation and then hyperventilates. Approximately 10 seconds into the re-breathing period the clamp on the 500 ml sample bag is opened. Once the sample bag has filled, it is clamped closed again. The patient continues to hyperventilate and 20 seconds later is switched out of the breathing circuit. After the test has ended both bags are analyzed for CO and helium.

     Rebreathe_Marshall

    Re-breathing DLCO results are lower than single-breath DLCO measurements in the same patients. This appears to be mainly due to the fact that re-breathing measurements are made at lower lung volumes than single-breath measurements and at least one researcher has shown that the results agree when made at the same lung volume.

    Although the hyperventilation that occurs during the re-breathing maneuver is not a natural breathing pattern, proponents have pointed out that re-breathing DLCO is relatively independent of minute ventilation and that respiratory rates between 15 and 60 breaths per minute do not affect the calculated DLCO.

    A major assumption of the re-breathing technique is that the gases in the lung and in the re-breathing bag equilibrate during the maneuver and that the CO concentration in the re-breathing bag is therefore an accurate indication of mean alveolar PCO. This assumption is probably not valid when significant obstructive disease or other maldistribution of ventilation is present, but the same criticism can also be applied to all DLCO techniques.

    A criticism of the re-breathing DLCO technique is that lung volume changes constantly during testing. This means that the test gas mixture is in contact with the maximum alveolar-capillary surface area only during part of the breathing cycle and may be an additional reason why re-breathing DLCO results are lower than single-breath DLCO results. A related issue is that the re-breathing DLCO calculation may be overly simplistic due to the fact that a basic assumption in the formula is that lung volume is constant and at end-inhalation.

    The re-breathing technique will probably never be used clinically because the results are related to the lung volume at which the test is performed and this limits the ability to interpret results. Its primary value lies in being able to measure DLCO during exercise or any other situation where the patient’s breathing pattern should not be interrupted. Although it has limited applications, it’s still an interesting technique that pulmonary technologists should be aware of.

    References:

    Barazanji KW, Ramanathan M, Johnson RL, Hsia CCW. A modified re-breathing technique using an infrared gas analyzer. J Appl Physiol 1996; 80:1258-1262

    Chance WW, Rhee C, Yilmaz C, Dane DM, Pruneda ML, Raskin P, Hsia CCW. Diminished Alveolar Microvascular reserves in Type 2 Diabetes reflect systemic Microangiopathy. Diabetes Care 2008; 31:1596-1601

    Clark EH, Jones HA, Hughes JMB. Bedside re-breathing technique for measuring carbon monoxide uptake by the lung. Lancet 1978; 1:791-793

    Dujic Z, Eterovic D, Denoble P, Krstacic G, Tocilj J. Lung diffusing capacity in a hyperbaric environment: assessment by a re-breathing technique. Br J Indust Med 1992; 49:254-259

    Felton C, Rose GL, Cassidy SS, Johnson RL. Comparison of Lung Diffusing Capacity during re-breathing and During Slow Exhalation. Resp Physiol 1981; 43:13-22

    Hijazi OM, Ramanathan M, Estrera AS, Peshock RM, Hsia CCW. Fixed maximal stroke index in patients after pneumonectomy. Am J Respir Crit Care Med 1998; 157:1623-1629

    Hsia CCW, Ramanathan M, Estrera AS. Recruitment of Diffusing Capacity in Patients after Pneumonectomy. Am Rev Resp Dis 1992; 145:811-816

    Hsia CCW, McBrayer DG, Ramanathan M. Reference Values of Pulmonary Diffusing Capacity during Exercise by a re-breathing Technique. Am J Respir Crit Care Med 1995; 152:658-665

    Lewis BM, Lin TH, Noe FE, Hayford-Welsing EJ. The measurement of pulmonary diffusing capacity for carbon monoxide by a re-breathing method. J Clin Invest 1959; 38:2073-2086

    Marshall R. A re-breathing Method for Measuring Carbon Monoxide Diffusing Capacity: A Supplement to the Single-breath Method. Am Rev Resp Dis 1977; 115:587-589

    Rose GL, Cassidy SS, Johnson RL. Diffusing capacity at different lung volumes during breath holding and re-breathing. J Appl Physiol 1979; 47:32-37

    Russell NJ, Bagg LR, Dobrzynski J, Hughes DTD. Clinical assessment of a re-breathing method for measuring pulmonary gas exchange. Thorax 1983; 38:212-215

    Sackner MA, Greeneltech D, Heiman MS, Epstein S, Atkins N. Diffusing Capacity, Membrane Diffusing Capacity, Capillary Blood Volume, Pulmonary Tissue Volume and Cardiac Output Measured by a re-breathing Technique. Am Rev Resp Dis 1975; 111:157-163

    Weingarten JA, Milite F, Lederer DJ, Cohen SJ, Mooney AM, Basner RC. Comparison of single-breath and rebreathe diffusing capacity in Emphysema. Am J Respir Crit Care 2009; 179:A1485

    Yilmaz C, Chance WW, Johnson RL, Hsia CCW. Simulation system for a re-breathing technique to measure multiple cardiopulmonary function parameters. Chest 2009; 135:1309-1314

     

     

    Creative Commons License
    PFT Blog by Richard Johnston is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

  • It’s after midnight, do you know where your reports are?

    After the tests themselves, the second most important thing that a Pulmonary Function Lab needs to do is to report results. Like a tree falling in the woods, if a report hasn’t gotten in front of the person that ordered the tests, has anything happened? A successful PFT Lab needs to manage its reports effectively.

    An obstacle that many labs will need to overcome is the reporting software that comes with their testing systems. I think that for many of the PFT equipment manufacturers reports are an afterthought at best. The ability to format reports to a specific lab’s needs is often limited and is just as often a difficult and time-consuming process.

    For example, my lab maintains approximately eight different report formats, each tailored to the most common mix of tests we perform. If we need to change an item in patient demographics (and we have) then that change needs to be made separately in each of the eight different formats. If the change requires more space on the report, so that other report elements need to be moved to accommodate it, then this has to be done on each of the report formats as well. The software has no ability to copy or share changes between report formats (which I am sorry to say is ridiculous since I remember having reporting software in the DOS era where this was possible).

    Despite the trouble of maintaining this many report formats, we think it is worthwhile, and that is because results need to be reported in a concise and pertinent manner. To be useful reports need to be readable, and reporting too many parameters is almost as bad as reporting too few. There are over 30 different items that can be reported for an FVC test but it is unlikely that any more than five or six need to be on a report. More than once, however, I have seen reports from other labs with nearly twenty FVC parameters taking up most of a page, not even including the graphs. It is possible that the medical directors of those PFT labs asked for this but it is far more likely that it was a default report format included with the equipment that the lab never modified.

    Don’t be intimidated by your report formatting software and don’t be afraid to modify report formats. Your hospital’s Medical Records department may have guidelines for demographic information that you will need to follow (ours does and is very picky about where the patient names, ID numbers and dates need to be placed) but you should be able to decide for yourself what test data should be reported. Poll the physicians whose patients you see and find out what they want to see on a report. You should try to satisfy your physicians but you should also try to limit what you add to a report. Simple and clean is better than complicated and cluttered. You may not think you have time to “play” with report formats but remember that reports are the primary face your lab presents to the world and as such they deserve your time.

    There are two major components that go into a report: the test data and an interpretation. The lab is responsible for the test data but the interpretation has to come from a physician. How these come together is often an idiosyncratic process, and there may be elements to it that are there for historical reasons and are neither logical nor efficient, so reviewing the interpretation process can be worth the effort. It may help to write out and detail the steps that are involved. Often, when you see a process in black and white the problems become more evident.

    Timeliness matters for lab reports and it matters a lot. One partial solution we’ve used for several years is to print preliminary (uninterpreted) reports as a PDF file which is emailed to the ordering physician (automatically for the Pulmonary physicians and on an “as-requested” basis for other physicians). This gets the test results into the physician’s hands more or less immediately but it does not reduce the pressure on the lab to produce an official interpreted report in a timely manner as well.

    Improvements in the interpretation process should not be approached in an adversarial manner. You can’t “make” a physician interpret reports in a way that makes it easy for the lab. Their time is valuable and they have the final say in how they will handle report interpretation. Other than a sense of professionalism and concern for the patients, about the only “carrot” you can offer is the reimbursement for the interpretation. The quicker an interpretation is done and can be billed, the less likely it is to be denied. (Be very careful, however, about when an interpretation is billed. Medicare audits have turned up situations where the billing for the interpretation was performed well ahead of the interpretation itself and that has been costly for the institutions involved.)

    This doesn’t mean you can’t point out solutions that make it easier for the physician even though it may mean extra work for your lab. I know that none of us are looking for more work but it may be necessary for your lab’s staff to take on extra tasks in order to be able to get reports out faster.

    Historically, my PFT lab has had the lab staff type the report interpretations. This was “extra” work for the lab but I think it has been worthwhile because it has given the staff a much better appreciation of what the test results mean. It also touches on a philosophy of work that I have, which is that technicians should be able to do everything that needs to be done in the lab, and everything includes reports. A number of years back, at the urging of the administrator I was reporting to at the time, we hired a part-time clerical worker to handle some pieces of the reporting process. I saw the technicians quickly develop an “it’s not my job” attitude to reports: no matter how much free time they had and no matter how far behind the clerical person was, they no longer cared about getting reports out the door. When that clerical person left after half a year (for a better job) I adamantly resisted replacing them because the “it’s not my job” attitude ended up exacting a toll in morale and overall efficiency that more than offset the work the clerical person was able to do.

    The final part of managing reports is tracking them and making sure that nothing gets lost “between the cracks”. This is probably a bigger problem for high-volume labs than for those with a limited number of patients. My lab sees on average over 150 patients a week and this continues to be a weak link in our report management system. We have methods to track reports and even though the number of “lost” reports is small, we have never managed to get that number to zero. This is due at least in part to the fact that although the lab has a complex networked computer system and the most recent version of its lab software, report tracking still has to be performed manually. I’ve already said that reports seem to be an afterthought to the lab software developers and that seems to be even more true of overall report management. Given what I know about the lab software’s database it should be possible, even relatively simple, to track a report from the time a patient is first seen, but this is not done. We do always find the missing reports towards the end of our report management process, but that can be a week or more after they were first supposed to have been reviewed and that is far from ideal.

    [I would urge you to let your equipment manufacturer know when you have problems with reports (or any other aspect of their software, of course). I think that many PFT labs are not as happy as they could be with their test equipment’s reporting functions but don’t say anything because either they don’t think it matters or that they won’t be heard. I think that the manufacturers too often assume that if nobody complains then everything must be fine but I also think there is a certain amount of willful blindness on their part as well. I may be wrong (and feel free to correct me) but I don’t know of a single PFT equipment manufacturer that has an official (or public or open) process for handling customer suggestions and complaints.]

    Both the patients you see for testing and the physicians that order the tests are your customers. They are both served best by pertinent, timely and readable reports that are also the public and virtual face of your PFT lab. I think that any Pulmonary Function lab that wants to be successful needs to control their report process as much as possible and to treat reports as an important part of daily operations.


  • Steady-State DLCO, an almost-forgotten technique

    Recently I have been reviewing a lot of early pulmonary function research. I’m not feeling nostalgic but I think that re-visiting some of the older methods and technology may be interesting. Almost all Pulmonary Function laboratories presently use the single-breath technique to measure diffusing capacity. There are a number of reasons why this is the case but I suspect that many technicians and physicians aren’t aware that there are several alternative methods for measuring diffusing capacity and that there was a time when at least one of these, the steady-state DLCO, was routinely used in clinical labs.

    Diffusing capacity, regardless of technique, is determined by the rate of CO uptake divided by the driving (alveolar) pressure:

     DLCO = VCO / PACO

    where:

    VCO = rate of CO uptake (ml/min)

    PACO = mean alveolar partial pressure of CO (mm Hg)

    A steady-state DLCO test takes this simple equation and tries to run with it. It is relatively easy to measure CO uptake and a steady-state test system can be as simple as a breathing valve and large sample bags containing the inspiratory and expiratory gas mixtures.

     Steady_State_Generic_System_1

    The problem, and perhaps the most significant weakness, of all steady-state DLCO measurements is the need to estimate the alveolar concentration of carbon monoxide. For each method of estimating alveolar CO there are limitations to either the accuracy of the measurement or the circumstances under which it can be performed. Just as importantly, the differences in the way each technique estimates PACO can lead to significant differences in calculated DLCO.

    Depending on how carefully you count them there have been at least eight different approaches taken at one time or another towards estimating PACO, but two were used most frequently:

    Filley et al measured Vd/Vt by taking an arterial blood gas during the DLCO test and measuring the mixed expired CO2. Once the Vd/Vt ratio was known it was possible to estimate PACO from:

     PACO = (PECO − PICO × Vd/Vt) / (1 − Vd/Vt)

    where:

    PECO = mixed expired partial pressure of CO

    PICO = inspired partial pressure of CO

    The Filley approach was used (and modified) by many researchers and by some it is considered the “gold standard” of steady-state DLCO. The primary drawback of this technique is that small errors in the analysis of PaCO2 can have a large effect on the calculated Vd/Vt. In the range of normal values for PaCO2 and PECO2, a 1 mm Hg error in PaCO2 alone can cause an error of 8% in Vd/Vt. It has also been suggested that the dead space/tidal volume ratio for CO2 does not necessarily match that for CO, and that this could cause an error of up to 33% in calculated DLCO.
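    A sketch of the Filley-style arithmetic makes this sensitivity easy to demonstrate (the blood gas values and helper names are my own, chosen only for illustration):

```python
def vd_vt(paco2, peco2):
    """Bohr dead-space fraction from arterial and mixed-expired PCO2."""
    return (paco2 - peco2) / paco2

def alveolar_pco(peco, pico, vdvt):
    """Mean alveolar PCO, rearranging PECO = PICO*(Vd/Vt) + PACO*(1 - Vd/Vt)."""
    return (peco - pico * vdvt) / (1.0 - vdvt)

# A 1 mm Hg error in PaCO2 (40 -> 41 mm Hg, with PECO2 = 30 mm Hg)
# shifts the calculated Vd/Vt by several percent:
base, off = vd_vt(40.0, 30.0), vd_vt(41.0, 30.0)
print(round(100 * (off - base) / base, 1))   # relative error in Vd/Vt, percent
```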

    The other major approach, first published by Bates et al, was to assume that end-tidal CO was the same thing as PACO. A variety of end-tidal samplers were developed, most notably the ingeniously simple Rahn-Otis end-tidal sampling valve which was used in at least one clinical testing system (Collins Modular Lung Analyzer).

     Steady_State_Rahn_Otis_Sampler

    [This sampling valve uses the pressures generated during breathing to obtain end-tidal gas samples. During inhalation, the negative pressure in the breathing valve causes a small, thin-walled sample balloon to inflate with gas that is immediately downstream of a one-way valve on the exhalation limb of the breathing circuit. During exhalation, positive pressure slightly deflates the sample balloon, preventing any exhaled gas from the beginning of the exhalation from entering. A sample pump set to a low flow rate continuously evacuates the balloon and sends the sample to the CO gas analyzer.]

    Although this approach gets around the need for measuring or estimating Vd/Vt, its primary limitation is that end-tidal gas concentrations are not necessarily an accurate reflection of alveolar gas concentrations, particularly during exercise or hyperventilation. It is open to question just how “alveolar” the samples really are, because end-tidal sampling valves act to average out the gas sample, and the amount of gas averaged (and which part of the exhalation it comes from) depends on tubing size, sampling rate, respiratory rate and tidal volume, and is generally not predictable.

    The steady-state DLCO techniques are a reflection of the technology available at the time they were developed. They played a significant role in the early understanding of ventilation and perfusion during exercise, membrane diffusing capacity and the diffusing capacity in lung disease. Technology and the understanding of diffusing capacity have moved onwards in the last half-century and the single-breath DLCO technique has become the primary clinical measurement technique. Steady-state DLCO measurement systems have not been commercially available for at least several decades, and there are a number of reasons for this.

    A steady-state test is significantly longer and more time-consuming than a single-breath test. Steady-state tests are at least three minutes long (not including set-up time) because the test gas mixture must be breathed long enough for a true steady-state condition to exist.

    Steady-state diffusing capacity measurement techniques suffer from a lack of standardization because there is no consensus on the best way to determine alveolar CO. CO, like O2 and CO2, varies constantly in the alveoli during the breathing cycle, and the selection of a specific alveolar CO concentration to represent an average value depends on the assumptions used to select it.

    Steady-state DLCO measurements are highly influenced by minute ventilation and results can be markedly skewed by hyperventilation. When tidal volume is low the error is greater, either because of dead space gas contamination of end-tidal samplers or because minor errors in calculating Vd/Vt are magnified. When tidal volumes are large there is an increase in the surface area of the lung, and the average absolute volume at which measurements are made becomes a factor.

    Clinical interpretation of results can be difficult. Steady-state measurements are obtained near FRC. Because the lung has a lower surface area and possibly a thicker membrane at FRC, these results are always lower than single-breath measurements obtained at TLC. And because tidal breathing does not ventilate the lung uniformly, measurements made at FRC cannot be linearly or otherwise systematically related to measurements made at TLC in patients with lung disease.

    Finally, even though the concentration of carbon monoxide in the steady-state gas mixture is nominally one-third of that used in the single-breath DLCO mixture (usually 0.10% with a range of 0.04% to 0.15%), the greater length of a steady-state test means that the amount of carbon monoxide absorbed by the patient tends to be much higher than it is for a single-breath test. COHb has been estimated to increase as much as 2.4 percent during the course of a single test. This raises the issue of patient safety given the COHb burden placed on a patient by even a single steady-state test, let alone by repeat testing. Although noted in the literature, this fact was not widely appreciated and I personally shudder to think of the number of times during the 1970’s that I performed three steady-state DLCO tests on frail and sick patients because that was my medical director’s standard protocol.

    I’d have to say the era of steady-state DLCO testing has passed and will not come again. It has been several decades since steady-state DLCO tests were performed clinically, and it’s been at least a decade since I last saw a research paper involving steady-state DLCO. Although theoretically still capable of playing a role in research, there are other DLCO techniques that are easier to perform, easier to interpret and less prone to error. Probably the only value that steady-state DLCO still has is in teaching the concepts and history of DLCO testing.

    References:

    Anderson TW, Shephard RJ. A theoretical study of some errors in the measurement of pulmonary diffusing capacity. Respiration 1969; 26:102-115

    Apthorp GH, Marshall R. Pulmonary diffusing capacity: a comparison of breath-holding and steady-state methods of using carbon monoxide. J Clin Invest 1961; 40:1775-1784

    Bates DV, Boucot NG, Dormer AE. The pulmonary diffusing capacity in normal subjects. J Physiol 1955; 129:237-252

    Bates DV, Pearce JF. The pulmonary diffusing capacity: a comparison of the methods of measurement and a study of the effect of body position. J Physiol 1956: 132:33-238

    Bates DV, Knott JMS, Christie RV. Respiratory function in emphysema in relation to progress. Quart J Med 1956; 25:137

    Bates DV. The measurement of the pulmonary diffusing capacity in the presence of lung disease. J Clin Invest 1958; 37:591-604

    Bates DV, Varvis CJ, Donevan RE, Christie RV. Variations in the Pulmonary Capillary Blood Volume and Membrane Diffusion Component in Health and Disease. J Clin Invest 1960; 39:1401-1412

    Beck KC, Hyatt RE, Staats BA, Enright PL, Rodarte JR. Carbon monoxide diffusing capacity of the lungs determined by single-breath and steady-state exercise methods. Mayo Clin Proc 1989; 64:51-59

    Borland C, Mist B, Zammit M, Vuylsteke A. Steady-state measurement of NO and CO diffusing capacity on moderate exercise in men. J Appl Physiol 2001; 90:538-544

    Bouhuys A, Georg J, Jonsson R, Lundin G, Lindell SE. The influence of histamine inhalation on the pulmonary diffusing capacity in man. J Physiol 1960; 152:176-181

    Filley GF, MacIntosh DJ, Wright GW. Carbon monoxide uptake and pulmonary diffusing capacity in normal subjects at rest and during exercise. J Clin Invest 1954; 33:530-539

    Forster RE, Cohn JE, Briscoe WA, Blakemore WS, Riley RL. A modification of the Krogh carbon monoxide breath-holding technique for estimating the diffusing capacity of the lung: A comparison with three other methods. J Clin Invest 1955; 34:1417-1426

    Forster RE, Roughton FJW, Cander L, Briscoe WA, Kreuzer F. Apparent pulmonary diffusing capacity for CO at varying alveolar O2 tensions. J Appl Physiol 1957; 11:277-289

    Kinker JR, Haffor AS, Stephan M, Clanton TL. Kinetics of CO uptake and diffusing capacity in transition from rest to steady-state exercise. J Appl Physiol 1992; 72:1764-1772

    Leathart GL. Steady-state diffusing capacity determined by a simplified method. Thorax 1962; 17:302-307

    MacNamera J, Prime FJ, Sinclair JD. An assessment of the steady-state carbon monoxide method of estimating pulmonary diffusing capacity. Thorax 1959; 14:166-175

    McCredie RM. The diffusing characteristics and pressure-volume relationships of the pulmonary capillary bed in Mitral valve disease. J Clin Invest 1964; 43:2279-2289

    Sybert A, Ayash R, Chatham M, Gurtner GH. CO concentration-dependent changes in pulmonary diffusing capacity in humans. J Appl Physiol 1982; 53:505-509

    Turino GM, Brandfonbrener M, Fishman AP. The effect of changes in ventilation and pulmonary blood flow on the diffusing capacity of the lung. J Clin Invest 1959; 38:1186-120

     


  • Busted for speeding during an N2 washout

    Nitrogen washout lung volumes are still relatively new to my PFT Lab. The number of problems we’ve encountered has decreased substantially but we are still learning some of the idiosyncrasies of the system. Recently while trying to understand a test with odd results we were reminded by the manufacturer that during the washout period a patient’s inspiratory and expiratory flow rates should not exceed 1.5 liters/second. The reason this “speed limit” is necessary highlights some of the limitations of modern open-circuit lung volume measurements.

    The basic concept behind nitrogen washout is relatively simple. The air we breathe contains 78% nitrogen which is a relatively inert, insoluble gas. If you have a patient breathe 100% oxygen and then collect their exhaled air you can calculate the volume of exhaled nitrogen by multiplying the concentration in the exhaled air by the total volume of air that was collected. Once you know the volume of nitrogen you can then calculate the lung volume.

    Initially this was a laborious and cumbersome process. The patient’s exhaled breathing circuit and a Tissot Gasometer (a very large spirometer with a volume between 125 and 300 liters) are first flushed with oxygen several times to remove any nitrogen. Next, while breathing room air the patient exhales to RV and an end-expiratory gas sample is taken and used to estimate the patient’s alveolar nitrogen concentration. The patient is then switched to 100% oxygen and breathes for seven minutes. At the end of the washout period the nitrogen concentration of the exhaled gas in the gasometer is analyzed and the volume recorded.

     N2_Washout_System_JCI_v19_n4_p609_1940
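    The arithmetic behind the classic collection method is straightforward. A minimal sketch with illustrative numbers (a real calculation would also correct for nitrogen excreted from blood and tissue during the washout, represented only crudely here):

```python
def n2_washout_frc(collected_volume_l, fen2_collected, fan2_alveolar=0.78,
                   tissue_n2_l=0.0):
    """FRC (L) from a classic nitrogen washout collection.

    Exhaled N2 volume = collected volume x its N2 fraction; dividing by the
    starting alveolar N2 fraction gives the lung volume that contained it.
    The tissue_n2_l term is a crude correction for N2 washed out of blood
    and tissue rather than the lungs (all values here are illustrative).
    """
    n2_volume_l = collected_volume_l * fen2_collected - tissue_n2_l
    return n2_volume_l / fan2_alveolar

# 40 L collected over 7 minutes containing 5.5% N2, ~0.1 L tissue N2:
print(round(n2_washout_frc(40.0, 0.055, 0.78, 0.1), 2))
```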

    At some point it was realized that instead of being physically collected exhaled air could be measured with a flow sensor and a high-speed nitrogen analyzer and the exhaled nitrogen volume calculated from the integrated wave forms. Later it was also realized that what wasn’t nitrogen in the exhaled air was mostly oxygen and that the nitrogen concentration could be estimated from the oxygen concentration instead. Thus the modern open-circuit nitrogen washout lung volume technique was born.

    The reason there is a limitation on a patient’s inspiratory and expiratory flow rates during the washout period has mostly to do with the speed of the oxygen analyzer and the sampling system. All gas analyzers have a limited ability to accurately track rapid changes in gas concentration, due both to design issues in the electronic circuitry and to the physical sample volume needed by the analyzer. The current recommendation is a 95% response to a 10% change in nitrogen (or oxygen) in less than 60 milliseconds. This sounds quite fast, but the transition from inspiration to expiration, particularly early in the washout period, can show much larger changes in nitrogen concentration on a small time scale, so this specification is only just adequate.

    More importantly, gas samples are usually transported to the analyzer from the breathing manifold by narrow tubing. Because gas is drawn through the tubing by a pump at a constant rate this means that there is a smaller amount of gas sample per unit of inhaled or exhaled air when flow rates are high than when flow rates are low. This means that when inspiratory or expiratory flow rates are high a change in gas concentration may not be “seen” by the analyzer because the sample is too small. In addition the tubing that transports the gas sample from the breathing circuit to the oxygen analyzer also causes rapid changes in gas concentrations to be “smeared” (due to flow shear effects near the wall of the tubing) and this “smearing” will be proportionally greater when inspiratory or expiratory flow rates are high.

    Sample_volume 
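    Some quick arithmetic shows how badly high flows dilute the analyzer’s view of each breath (the 200 ml/min pump rate is my own assumption for illustration, not a manufacturer’s specification):

```python
def sample_fraction(pump_ml_min, flow_l_s):
    """Fraction of the gas stream actually sampled at a given flow.

    A constant-rate sample pump draws the same ml/min no matter how fast
    the patient breathes, so the sampled fraction of each breath shrinks
    as flow rises.
    """
    pump_ml_s = pump_ml_min / 60.0
    return pump_ml_s / (flow_l_s * 1000.0)

# With a 200 ml/min sample draw, the fraction of the gas stream the
# analyzer sees falls from about 0.67% at 0.5 L/s to 0.22% at 1.5 L/s:
for flow in (0.5, 1.0, 1.5):
    print(flow, round(100 * sample_fraction(200.0, flow), 2))
```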

    The inspiratory and expiratory flow signals have to be integrated to convert flow to volume. High flow rates are less of a concern here, but the integration is a much more difficult task than is generally appreciated given that the gas concentrations, humidity and temperature of inspired and expired air are so different. The various equipment manufacturers appear to have solved this problem quite well with proprietary software algorithms and flow-sensor construction techniques, but the long term stability of the integrated volume is always a concern. Strictly speaking, because of differences in oxygen consumption, CO2 production and humidity there are always small differences between the amount of air that is inhaled and the amount that is exhaled, and this means that the stability of FRC shown during prolonged tidal breathing is to some extent an artificial construction.

     Although some drift during the washout period is probably normal, too much drift can affect the measured TLC, and not just because it probably indicates a leak. Most nitrogen washout systems have the patient perform an SVC maneuver and exhale to RV before being switched into washout mode. This actually solves several problems at once. First, it allows the end-exhalation air to be sampled and the starting alveolar nitrogen concentration to be estimated. Second, exhaling to RV significantly reduces the amount of nitrogen that has to be washed out, and therefore the time needed to wash it out. Finally, since it is RV (or whatever lung volume the patient is at when switched into washout mode) that is measured, the SVC maneuver is what allows FRC and TLC to be calculated.
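The mass balance behind this calculation can be sketched as follows. All of the numbers are illustrative, and real systems apply additional corrections (for example, for nitrogen excreted from blood and tissue during the washout).

```python
f_n2_initial = 0.78   # alveolar N2 fraction at switch-in (end-exhalation sample)
f_n2_final   = 0.015  # washout is typically stopped near 1.5% N2
v_n2_washed  = 2.30   # liters of N2 collected during the washout

# The washed-out N2 all came from the lung volume present at switch-in,
# which is RV if the patient exhaled to RV before being switched in:
rv = v_n2_washed / (f_n2_initial - f_n2_final)
print(f"volume at switch-in (RV): {rv:.2f} L")   # ~3.0 L

# The preceding SVC maneuver then gives the remaining subdivisions:
svc = 3.50            # slow vital capacity, L (illustrative)
tlc = rv + svc
print(f"TLC = RV + SVC = {tlc:.2f} L")           # ~6.5 L
```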

     So when drift during the washout period is detected, is it normal or is it due to a leak? When drift is large and exceeds the SVC envelope it will cause the calculated TLC and/or RV to be wrong, and the test can obviously be rejected. When drift is small, however, it is much less clear whether or not it signals a leak. About the only way to be sure one way or the other is to always perform a minimum of two lung volume tests and compare the results.
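A between-test comparison might be coded along these lines; the 10% limit follows the ATS/ERS suggestion for gas-dilution FRC measurements, but substitute your own lab’s criteria.

```python
def frc_agreement(frc_a, frc_b, limit=0.10):
    """Fractional difference between two FRC results and whether it is acceptable."""
    diff = abs(frc_a - frc_b) / ((frc_a + frc_b) / 2.0)
    return diff, diff <= limit

diff, ok = frc_agreement(3.10, 3.45)   # two hypothetical washout FRCs, L
print(f"difference: {diff:.1%}, acceptable: {ok}")   # 10.7%, False
```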

     N2_Washout_Leak

    N2 washout switch in and drift errors 2

    These problems with inspiratory and expiratory flow rates, analyzer speed and integrator drift would not have been an issue with early nitrogen washout systems, but modern systems are much faster, simpler and certainly much less cumbersome, so it is a good tradeoff.

    Overall, I’d have to say that we’re satisfied with our new test systems. When good-quality tests are compared we see little or no difference between lung volumes measured by helium dilution and those measured by nitrogen washout. There has been a learning curve, however; the pitfalls of each technique are different, and learning those differences is what made the first couple of months somewhat rocky.

    References:

    Brusasco V, Crapo R, Viegi G. ATS/ERS Task Force: Standardisation of lung function testing. Standardisation of the measurement of lung volumes. Eur Respir J 2005; 26: 511-522.

    Darling RC, Cournand A, Richards DW. Studies on the intrapulmonary mixture of gases. III. An open-circuit method for measuring residual air. J Clin Invest 1940; 19: 609-618.

    Newth CJL, Enright P, Johnson RL. Multiple-breath nitrogen washout techniques: including measurements with patients on ventilators. Eur Respir J 1997; 10: 2174-2185.

    Tierney DF, Nadel JA. Concurrent measurements of functional residual capacity by three methods. J Appl Physiol 1962; 17: 871-873.

    Creative Commons License
    PFT Blog by Richard Johnston is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

  • Plethysmography and “trapped air” demystified

    Recently I was reading the blog of someone who teaches Pulmonary Function testing and they stated:

    “…in emphysema and air trapping, the VTG (thoracic gas volume) will be higher than a FRC (functional residual capacity) measured by Helium dilution and Nitrogen washout. This is because VTG is the volume of gas contained in the thorax, whether in communication with the airways or trapped in the thorax.”

    This unfortunately is not correct, but it is a common misconception, and since there was a time when I believed it myself I find it difficult to fault the author too much. I think it’s time, however, to return to the basics of plethysmographic lung volumes and show why this is not true.

    Plethysmography relies on Boyle’s law, which is:

    P1V1 = P2V2

    This means that in a closed system, there is a proportional relationship between pressure and volume. Decrease the volume and the pressure increases proportionally. Increase volume and the pressure decreases proportionally as well. This effect is independent of the size of the system, so both large and small systems will act in the same way.

     

    To measure lung volume, lungs (the person is optional) are placed inside a pressure-tight box (the plethysmograph). When the patient inhales and exhales against a closed shutter, the lungs expand and compress, and mouth pressure decreases and increases accordingly. This maneuver compresses and rarefies the air in the box, so the pressure in the box increases and decreases.

    The volume of the lungs is small compared to the volume of the box and the change in box pressure is going to be proportional to:

    [change in lung volume] / [box volume]

    Plethysmographs have a volume of roughly 600 liters. Assuming the patient compresses or expands their lungs with 30 cm H2O of pressure, a 6-liter lung will compress or expand by about 180 ml. This in turn will cause a pressure change of approximately +/- 0.3 cm H2O inside the plethysmograph. (Yes, that small a change, and the box’s pressure transducer has to be exceedingly sensitive. I remember performing manual calibrations on this transducer and being able to tell when somebody opened a door at the other end of the lab, around a corner.)
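This arithmetic can be checked with the small-signal form of Boyle’s law; pressures are treated as dry gas for simplicity and the numbers follow the text.

```python
p_bar    = 1030.0   # barometric pressure, cm H2O (~760 mmHg)
v_lung   = 6.0      # lung volume, L
dp_mouth = 30.0     # pressure generated against the closed shutter, cm H2O

# Boyle's law for small changes: dV ~= V * dP / P
dv_lung = v_lung * dp_mouth / p_bar
print(f"lung compression: {dv_lung * 1000:.0f} mL")   # ~175 mL

v_box = 600.0 - v_lung   # air space left in the box around the patient, L
dp_box = p_bar * dv_lung / v_box
print(f"box pressure swing: {dp_box:.2f} cm H2O")     # ~0.30
```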

    The change in box pressure (which is directly related to the volume of compression or expansion of the patient’s lung) and the change in mouth pressure are all the elements needed to calculate lung volume using Boyle’s law (as long as you also know the barometric pressure and the volume of the box). The math is rather straightforward, but it is easier to think about graphically.

    Mouth pressure and box pressure are plotted against each other simultaneously, and it is the slope of this vector tracing that shows lung volume. Assuming the same amount of pressure is used to expand or compress the lungs (and humans, regardless of size, tend to generate similar pressures), a small lung will cause a small change in box pressure and therefore a steep tracing, while a large lung will cause a large change in box pressure and a shallow tracing. The slope of the tracing is therefore directly related to the volume of the lung.
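Quantitatively, the measured slope is converted to volume with Boyle’s law, using barometric pressure corrected for water vapor; the slope value below is illustrative.

```python
P_BAR = 1030.0   # barometric pressure, cm H2O
P_H2O = 63.0     # water vapor pressure at body temperature, cm H2O (~47 mmHg)

def tgv_from_slope(dv_box_per_dp_mouth):
    """Thoracic gas volume (L) from the calibrated box-volume/mouth-pressure slope."""
    return (P_BAR - P_H2O) * dv_box_per_dp_mouth

# A 6 mL box-volume change per cm H2O of mouth pressure:
print(f"TGV = {tgv_from_slope(0.006):.1f} L")   # ~5.8 L
```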

    This measurement process can work well, but an absolutely critical assumption is that the pressure measured at the mouth equals the average pressure inside the lungs. For patients with normal lungs this assumption is reasonably correct, but in diseased lungs it is not necessarily the case at all. In obstructive lung disease in particular, the disease process is often not homogeneous, which can cause different parts of the lung to go through different volume and pressure changes during the measurement maneuver. As long as the pressure measured at the mouth still equals the average pressure inside the lung, the measured lung volume will be correct.

    When airways are obstructed however, the pressure changes in trapped air may not be reflected at the mouth.

    The pressure of trapped air cannot be less than the average pressure inside the lung and there are a number of reasons to believe it is often higher. It does not matter whether airway obstruction is localized or diffusely spread throughout the lung, the presence of trapped air usually causes mouth pressure to underestimate the average pressure inside the lung. When this underestimated pressure is compared to the actual compression volume of the lung, TGV will be overestimated.
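A toy two-compartment model shows the direction of the error. If a trapped compartment swings through a larger pressure than the mouth sees, dividing the box’s total compression signal by mouth pressure alone inflates the result; all values here are illustrative.

```python
P_ALV = 970.0        # absolute alveolar pressure, cm H2O (dry)
v_a, v_b = 4.0, 2.0  # communicating and trapped compartment volumes, L
dp_a = 20.0          # pressure swing seen at the mouth, cm H2O
dp_b = 30.0          # larger swing in the trapped compartment, cm H2O

# The box sees the total compression of both compartments...
dv_total = (v_a * dp_a + v_b * dp_b) / P_ALV
# ...but the Boyle's-law calculation divides by mouth pressure alone:
tgv_measured = P_ALV * dv_total / dp_a
print(f"true TGV: {v_a + v_b:.1f} L, measured: {tgv_measured:.1f} L")  # 6.0 vs 7.0
```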

    So the misconception isn’t that a plethysmograph can measure “trapped air”, it’s that it can measure it accurately. Helium dilution and nitrogen washout lung volumes are in fact more likely to be accurate when obstruction is present.

    My PFT Lab has been involved in funded research projects for close to 20 years. Almost all investigators that have come to us and asked for lung volumes to be performed have insisted that they be performed in a plethysmograph. Since more than one researcher has noted (starting over 30 years ago) that lung capacity tends to be overestimated when airway obstruction is present and that the amount of overestimation is related to the degree of airway obstruction, why has plethysmography continued to be considered the “gold standard” for lung volumes?

    I think that part of the reason for this is that accurate lung volume measurement, regardless of technique, is far more difficult than is usually credited, and verification of the problem had to wait until measurements could be independently confirmed against thoracic volumes analyzed by CAT scan. I also think that for a researcher, the fact that “everybody else” uses plethysmography means that using anything different would cause reviewers to needlessly question their study results. Finally, I think there is an element of time management (or laziness) involved, since being able to perform a half dozen plethysmographic lung volume measurements in the time it takes to measure lung volumes by helium dilution once makes the plethysmograph quicker and easier to use.

    I am not going to say that obstructive hyperinflation and “trapped air” don’t exist. I’m also not going to say the helium dilution or nitrogen washout tests can’t underestimate lung volumes when obstruction is present. But instead of accepting the myth that plethysmographic lung volumes are always accurate and that helium dilution or nitrogen washout lung volumes are underestimated because of “trapped gas” it is time to start thinking the opposite may be true instead.

    References:

    Brown R, Ingram RH Jr, McFadden ER Jr. Problems in the plethysmographic assessment of changes in total lung capacity in asthma. Am Rev Respir Dis 1978; 118: 685-692.

    Goldman MD, Smith HJ, Ulmer WT. Whole-body plethysmography. Eur Respir Mon 2005; 31: 15-43

    O’Donnell CR, Bankier AA, Stiebellehner L, Reilly JJ, Brown R, Loring SH. Comparison of plethysmographic and helium dilution lung volumes: Which is better for COPD? Chest 2010; 137: 1108-1115.

    Rodenstein DO, Stanescu DC, Francis C. Demonstration of failure of body plethysmography in airway obstruction. J Appl Physiol 1982: 52: 949-954.


  • What’s normal about the FEV1/FVC ratio?

    The FEV1/FVC ratio is used to estimate the presence and degree of airway obstruction. For well over thirty years my lab has used an FEV1/FVC ratio of 95% of predicted as the cutoff for normalcy. This value (carved onto a stone tablet by the way) had been brought to the lab by a founding physician who had come to the department from the NIH in the 1970’s. Since the software and hardware upgrade this summer our PFT Lab has switched to the NHANES III spirometry reference equations but we have so far resisted changing our 95% cutoff to the lower limit of normal (LLN). This is due in part to inertia but also in part to a mistrust in the concept of LLN. We have been steadily re-evaluating all of our testing criteria and have turned again to the FEV1/FVC ratio with the question as to whether our 95% cutoff is over-zealous or whether the LLN is too lax.

    Strictly speaking, the LLN is a statistical concept. In the NHANES III study (and most others) it is computed as the mean predicted value minus 1.645 times the standard error of the estimate. Unlike the reference equations for FVC and FEV1, which use both height and age as factors, the NHANES III reference equations for the FEV1/FVC ratio are derived solely from age. It is not clear to me this is completely correct, and I have discussed some of the discrepancies between the NHANES III predicted FEV1/FVC ratio and height in a prior posting, but it does make analyzing the LLN for the ratio easy. For adult Caucasian males the reference equations are:

    Predicted FEV1/FVC ratio = 88.066 – (0.2066 x age), LLN = 78.388 – (0.2066 x age)

    Because the LLN equation shares the same age slope as the predicted equation, the LLN sits a constant 9.678 percentage points (88.066 – 78.388) below the predicted ratio at every age. As a fraction of predicted this is {78.388/88.066} or about 89% for a young adult, declining slowly with advancing age. Calculated similarly, the LLN for adults of the other race and sex groups also approximates 89% of predicted (intercept ratios range from 88.3% to 89.9%).
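These figures are easy to check directly from the equations above (adult Caucasian males):

```python
def predicted_ratio(age):
    """NHANES III predicted FEV1/FVC (%) for adult Caucasian males."""
    return 88.066 - 0.2066 * age

def lln_ratio(age):
    """NHANES III lower limit of normal for the same group."""
    return 78.388 - 0.2066 * age

for age in (25, 45, 65):
    pred, lln = predicted_ratio(age), lln_ratio(age)
    print(f"age {age}: predicted {pred:.1f}%, LLN {lln:.1f}%"
          f" = {lln / pred:.1%} of predicted")
```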

    Since a ratio below the LLN falls in the bottom 5 percent of the reference population, there is a high probability that these patients have some form of airway obstruction, and a number of studies have shown good correspondence between a FEV1/FVC ratio below the LLN and the symptoms and diagnosis of COPD. But a diagnosis of COPD does not necessarily rest on a FEV1/FVC ratio below the LLN. GOLD (Global Initiative for Chronic Obstructive Lung Disease) uses an observed (not percent predicted) FEV1/FVC ratio below 0.70 as a primary criterion for classifying COPD. Elderly patients in particular can have an observed FEV1/FVC ratio below 0.70 yet above the LLN, and this has been used as a legitimate criticism of the GOLD criteria. At least one study, however, has shown elevated hospitalization and mortality rates from respiratory causes in patients who meet the GOLD criteria for airway obstruction but have a FEV1/FVC ratio above the LLN.
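Using the same adult Caucasian male equations, it is easy to locate the age beyond which this disagreement (an observed ratio below 0.70 but above the LLN) becomes possible:

```python
GOLD_CUTOFF = 70.0   # the GOLD fixed 0.70 cutoff, expressed in percent

# LLN = 78.388 - 0.2066 * age; solve for LLN == 70:
crossover_age = (78.388 - GOLD_CUTOFF) / 0.2066
print(f"LLN drops below 0.70 after age {crossover_age:.0f}")   # ~41
```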

    I do not doubt that patients with a FEV1/FVC ratio below the LLN likely have significant airway obstruction that meets the criteria for COPD, and I also do not doubt that being able to provide a definitive diagnosis of COPD is an important component of patient care. The main question, however, when comparing the NHANES III LLN with our cutoff of 95% of predicted, is whether airway obstruction needs to meet COPD criteria to be classified as airway obstruction, and I don’t think it does. Regardless of the level of severity assigned to it, the presence of COPD indicates that airway obstruction is already relatively advanced. What is needed is an evidence-based cutoff for normalcy that is clinically relevant for levels of obstruction below COPD and that, unfortunately, is not available.

    The difficulty in creating an evidence-based cutoff is the same as that of creating the predicted values in the first place: what is normal? Population studies attempt to control this by selecting healthy, asymptomatic subjects. This does not mean however, that a subject that has not been hospitalized, is not on medication and appears healthy is actually “normal” and this has to be part of the reason why population studies show bell-shaped curves.

    Selecting a cutoff is going to be a subjective judgment that, at least presently, cannot be backed by objective evidence. My personal opinion is that the NHANES III LLN is too conservative. It is based on a statistical concept that is likely valid for a wide range of biological systems, but it is far from clear to me that statistical significance is the same as clinical significance. I also think that our 95% cutoff is closer to the truth, but at the same time I am well aware that I am biased by the fact that this is what I’ve used for a good part of my professional career and am likely to see the evidence that supports it, not the evidence that argues against it.

    It should also be remembered that the FEV1/FVC ratio is not the sole factor in diagnosing airway obstruction. A 10% decrease (or even increase) in a patient’s FEV1 from one visit to another would be considered a significant change and clinically would likely be taken as an indication of underlying airway obstruction. A patient whose baseline spirometry was WNL but who showed a significant increase in FEV1 following a bronchodilator, or a significant decrease in FEV1 during a methacholine challenge, would also likely be considered to have airway obstruction or at least the clinical potential for it. Finally, there is a small fraction of asthmatics who show a symmetrically decreased FVC and FEV1 with a normal FEV1/FVC ratio (and often a normal peak flow!) during exacerbations and who, in a sense, aren’t even on the radar in this discussion.

    I think that an important question at this point is why we need and use cutoffs in the first place. Isn’t it a bit silly, knowing full well the myriad problems involved in getting an accurate FEV1/FVC ratio, to consider a value a tiny fraction below a cutoff abnormal and another value a tiny fraction above it normal? The reality is that for an FEV1/FVC ratio between 100% of predicted and, say, the LLN, there is a continuum of probabilities that a patient does or does not have airway obstruction. The other reality is that humans do not do well with shades of gray and prefer black and white.

    So choose a cutoff that makes clinical sense to you but at the same time remember that whatever you choose is a line in the sand and that the FEV1/FVC ratio, whatever the cutoff, is not the sole evidence for airway obstruction.

    Update:

    For a more complete discussion of the standards for a normal FEV1/FVC ratio with reference equations see COPD and the FEV1/FVC Ratio. GOLD or LLN?

    References:

    Brusasco V, Crapo R, Viegi G, et al. ATS/ERS Task Force: Standardisation of lung function testing. Interpretive strategies for lung function tests. Eur Respir J 2005; 26: 948-968.

    Celli BR, Halbert RJ, Isonaku S, Schau B. Population impact of different definitions of airway obstruction. Eur Respir J 2003; 22: 268-273.

    Hankinson JL, Odencrantz JR, Fedan KB. Spirometric reference values from a sample of the general U.S. Population. Am J Respir Crit Care Med 1999; 159: 179-187.

    Mannino DM, Buist SA, Vollmer WM. Chronic obstructive pulmonary disease in the older adult: what defines abnormal lung function. Thorax 2007; 62: 237-241.

    Mannino DM, Diaz-Guzman E. Interpreting lung function data using 80% predicted and fixed thresholds identifies patients at increased risk of mortality. Chest 2012; 141: 73-80.

    Stanojevic S, Wade A, Stocks J, Hankinson J, Coates AL, Rosenthal M, Corey M, Lebecque P, Cole TJ. Reference ranges for spirometry across all ages: a new approach. Amer J Respir Crit Care Med 2008; 177: 253-260.

    Swanney MP, Ruppel G, Enright PL, Pedersen OF, Crapo RO, Miller MR, Jensen RL, Falaschetti E, Schouten JP, Hankinson JL, Stocks J, Quanjer PH. Using the lower limit of normal for the FEV1/FVC ratio reduces misclassification of airway obstruction. Thorax 2008; 63: 1046-1051.

    Vaz Fragoso CA, Concato J, McAvay G, Van Ness PH, Rochester CL, Yaggi HK, Gill TM. The ratio of FEV1 to FVC as a basis for establishing Chronic Obstructive Pulmonary Disease. Amer J Respir Crit Care Med 2010; 181: 446-451.

    Vollmer WM, Gislason B, Burney P, Enright PL, Gulsvik A, Kocabas A, Buist AS. Comparison of spirometry criteria for the diagnosis of COPD: results from the BOLD study. Eur Respir J 2009; 34: 588-597.
