One of the major drivers for the development of these quality-of-care indicators, as acknowledged by the authors, was defensive posturing, ie, "concern that measures developed without physician specialist [neurology] input will be incorporated into payment incentive programs, or worse, payment incentive programs will not include measures related to the services that [neurologists] provide" [p. 2024]. The authors further noted that "If the care delivered by specialists [ie, neurologists] is not evaluated by these measurement programs, the value of health care delivered by [neurologists] becomes difficult to quantify and can be underestimated. Furthermore, if the care delivered by [neurologists] is measured by programs developed without the input of [neurologists], the value of [neurologic] care may be underestimated" [p. 2022].
The second stated driver for the development of these indicators was the request of specialty organizations to provide modules for measuring and improving performance in practice as part of maintenance-of-certification programs. To be sure, such a measurement approach is needed and appropriate for this indication.
Regardless of the rationale(s) for development of such measures, it must also be acknowledged that failure of neurologists to tick off all of the items that they themselves have identified as important indicators of quality will be taken by others as a sure sign that quality was somehow lacking, even if in reality some of the measures are irrelevant or of lower priority to particular episodes of care. Such measures typically develop a life of their own and will likely soon multiply across a range of neurologic conditions. It is therefore necessary for neurologists to be aware of this development and to make appropriate adjustments to their evaluation and management of patients, and to their corresponding documentation, so as to demonstrate "compliance." It is appropriate to speak of "compliance" in this respect because, as discussed further below, these measures by themselves do not demonstrate quality, even though the absence of documentation of these measures will now be seen as a lack of or deficiency in quality of care. Similar 2-edged complexities have arisen with the profusion of practice parameters, JCAHO accreditation processes, Medicare documentation guidelines, and the like. Failure to take these issues into account will likely have significant reimbursement, privileging/reprivileging, and medicolegal implications as these changes play out over the next several years.
It is also a fair bet that the final choice of measures was somewhat arbitrary and reflects a workable consensus product of a large expert panel working under the auspices of a large and complex professional specialty organization. The final product was in part a function of the necessary institutional politics of an organization trying to draw a line in the sand to demonstrate the "quality" of the care provided by its members, while avoiding items that would be difficult, imprecise, or controversial to measure. Indeed, the boiling down of almost 300 rules from the literature to 10 final items by itself suggests that many important aspects of the quality of care provided to patients with Parkinson disease will not be captured by the selected measures. That said, meeting all of the final measures in no way proves that the care provided was of high quality, that it improved the functional outcomes of patients, or that the entire quality-measurement process was itself cost-effective. Meeting all of the measures may correlate with overall quality of care as determined by implicit peer review or other measures, but this remains to be established. No doubt a series of health services and health policy articles will be forthcoming to address these and related issues in coming years.