
Why Physicians Do Not Follow Some Guidelines and Algorithms

(PSYCHIATRIC TIMES) - Evidence-supported guidelines and algorithms play an important role in promoting more standardized approaches to patient care. In a series of articles in Psychiatric Times, Michael Fauman, MD, examined the extent to which psychiatric practice guidelines and algorithms are used, how they are used, and the findings of studies that have validated their usefulness compared with usual practices.1-4

This article focuses on some psychopharmacology guideline and algorithm recommendations that are often not followed. Seven clinical scenarios are examined. Major guideline and algorithm projects are summarized in Table 1.

Standardized care driven by evidence-supported algorithms is a model that has attracted the attention of the hospital business community.5 Intermountain Health Care in Salt Lake City has been following standardized treatment protocols for two dozen illnesses, including pneumonia, diabetes, and heart disease, in its 21 hospitals and 90 clinics for many years, with robust improvement in quality of care and reduction of costs. The business case for the health care system’s approach is impressive: operating margins are at the very top of the industry.5 Thus, it is likely that many physicians will someday be working in health care systems in which algorithm adherence will be the expectation.

When and Why Guidelines and Algorithms Are Not Followed
Fauman1,4 discussed the many reasons why physicians are critical of guidelines and algorithms and do not want to follow them. Most recommendations in guidelines and algorithms, however, probably are followed.4 Curiously, physicians often agree with the specific recommendations when they are presented separately.6 Nonadherence may therefore be more the result of a failure of the health care system to provide reminders of the recommendations at a timely moment in the physician's workflow than of any disagreement with the actual recommendations.7 For ease of use, recommendations should be made available in an abbreviated format, but with the option to access the full reasoning and supporting evidence as needed. Clearly, the best mechanism for getting the guideline/algorithm advice to the physician is through a computerized medical record and order-entry system.8 However, the technology for how best to incorporate the logic and recommendations of guidelines and algorithms into such systems is still in development.9

Another reason for differences between what guidelines and algorithms recommend and what physicians actually do is related to how practicing physicians make decisions.10 Experienced physicians do not usually think through every decision starting with an exhaustive data collection of the patient’s history, followed by a systematic review of the evidence supporting different treatment options. This kind of evidence-based medicine practice is time-consuming and impractical.

Instead, physicians with busy practices tend to do a limited review of the patient’s history and mental status, prompted by certain symptoms or historical details. They rather quickly determine the important characteristics of the situation, after which the treatment plan may just “fall into place.”10 They typically have “rules of thumb” that can be applied rapidly and confidently. Clinical experience validates these reasoning shortcuts, which are assumed to exemplify the “art” of medicine. By contrast, the logic in guidelines/algorithms is based on an exhaustive analysis of the literature and assumes very accurate knowledge of the patient’s treatment history.7 It has been found that when the scientifically validated recommendations take more time than what physicians do now and believe works well, physicians frequently will not follow them.7

Sometimes these “personal heuristics” of physicians do in fact produce outcomes as good as the outcomes derived from following the recommendations of the guidelines. Subsequent research will sometimes validate these idiosyncratic formulas.

Deviations From Recommended Practices: 7 Examples
The following 7 recommended practices have been found to produce better results but are at odds with the usual practice of many physicians (listed in Table 2 and discussed in detail below). (“Better result” is defined as either a better clinical outcome or the same clinical outcome with equivalent safety but with greater cost-effectiveness.)

1. Use clozapine after 2 adequate monotherapy trials of other antipsychotics in schizophrenia. Numerous lines of evidence support this recommendation, found in all schizophrenia algorithms, including the International Psychopharmacology Algorithm Project, the Texas Medication Algorithm Project, and the Psychopharmacology Algorithm Project at the Harvard South Shore Department of Psychiatry. The latest Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) data confirm this recommendation.11 Yet clinicians prefer to try additional trials of monotherapy, various combinations of antipsychotics, and other polytherapy before turning to clozapine.

The resistance to clozapine is probably a consequence of the fact that it is a much more arduous and time-consuming treatment to implement for both the physician and the patient. There is, of course, legitimate fear of adverse effects, but a major factor is the additional time and effort involved to overcome patient resistance, perform the appropriate medical monitoring, and manage adverse effects. Patient resistance can often be overcome if the physician presents an appropriately positive assessment of this treatment option. Certainly, the evidence supporting clozapine compared with alternative medications should be part of the discussion. A recent report provides a comprehensive review of when and how to give clozapine and manage adverse effects.12

2. Make 1 medication change at a time, allowing for adequate dose and duration of therapy. All guidelines and algorithms stress this. The studies of algorithm-driven treatment compared with usual treatment clearly show that organized, diligent, consistent, measurement-based care that gives adequate time for the medication to be dosed properly produces better and faster results than treatment as usual.13,14 Following the one-change-at-a-time approach may be more important than using the specific drugs favored in the algorithms.

Managed care may be the chief source of opposition to this approach; daily changes in the pharmacotherapeutic regimen of inpatients, without allowing sufficient duration for each trial, are often demanded to justify "active" treatment.15 Approval of reimbursement for additional days in the hospital or outpatient visits may depend on this activity. The other major obstacle to implementing this methodical approach is "clinical experience," which often seems to support various add-ons and premature switches. Patients often improve, or continue to improve, after these changes are made; however, the placebo effect and passage of time may explain much of this improvement.16

3. When there has been an unsatisfactory response to a monotherapy, switch to a different agent rather than add a second medication. Most physicians switch medications when the patient does not improve at all after a reasonable period. The key difficulty is in the evaluation of a partial but unsatisfactory response. How much of this partial response is due to the non–drug-related aspects of care? Does the medication deserve any credit? To help evaluate this, the Figure shows hypothetical data that are typical of the findings of hundreds of studies of the effects of different medications for mood and anxiety disorders compared with placebo.

In this hypothetical study of active drug and placebo, both groups of patients show gradual improvement during 12 weeks, but the response to active drug starts to separate from placebo after 2 weeks, and the effect size (difference from placebo) increases gradually during the 12 weeks. Note, however, that placebo also does moderately well at each time point. What if a patient were to improve 20 or 25 points on this hypothetical rating scale by weeks 8 to 12? This would be a partial response compared with results of the active drug. Most physicians (and patients) would be inclined to attribute this improvement to the active drug. However, a 20- to 25-point improvement is right at the mean of what is expected from placebo!
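The pattern described above can be sketched numerically. All numbers below are invented for illustration only (the article's actual Figure is not reproduced here); they are chosen merely to match the qualitative pattern stated in the text: drug and placebo separate after week 2, the effect size grows gradually, and the placebo mean reaches roughly 20 to 25 points by weeks 8 to 12.

```python
# Purely illustrative sketch of the hypothetical drug-vs-placebo trial.
# All values are invented; they only mimic the qualitative pattern described.

weeks = [0, 2, 4, 6, 8, 10, 12]
# Hypothetical mean improvement (rating-scale points) at each time point.
placebo_mean = [0, 8, 13, 17, 20, 22, 23]
drug_mean    = [0, 10, 17, 23, 28, 31, 33]

for w, p, d in zip(weeks, placebo_mean, drug_mean):
    # "Effect size" here is simply the drug-placebo difference in means.
    print(f"week {w:2d}: placebo {p:2d}, drug {d:2d}, difference {d - p:2d}")

# A patient who improves 22 points by weeks 8-12 looks like a responder,
# yet that change sits at the placebo mean for the same period.
patient_change = 22
print(patient_change <= max(placebo_mean))  # True: within the placebo range
```

The point of the sketch is the last comparison: an improvement that feels like a drug response can fall entirely within what the placebo arm achieves on average, which is why attributing a partial response to the medication requires caution.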

In clinical practice, there is no placebo treatment but there are nonspecific therapeutic elements, including the alliance and expectations set up by the diagnostic process, supportive follow-up meetings with the patient, and investigator bias (ie, the physician’s belief and expectation that the medication will work).

So, is this 20- to 25-point improvement all caused by placebo or nonspecific aspects of care? Or is some of it due to the drug? Or is all of it from the drug? The physician and patient must determine this as best they can. This process will be facilitated if the patient is informed at the start of treatment that both drug and placebo effects are possible and that an assessment of the relative contribution of each will need to be made. Both the doctor and the patient would then share a desire to avoid unnecessary polydrug therapy resulting from “augmentation” of placebo-related changes.

© 2020 Created by PsychiatryRounds Team.