Reputation, Rankings and Objective Measures

The top-10 heart and heart surgery hospitals (according to US News 2011) were as follows:

  1. Cleveland Clinic
  2. Mayo Clinic
  3. Johns Hopkins
  4. Texas Heart Institute at St Luke’s Episcopal
  5. Massachusetts General
  6. New York-Presbyterian University Hospital
  7. Duke University Medical Center
  8. Brigham and Women’s Hospital
  9. Ronald Reagan UCLA Medical Center
  10. Hospital of the University of Pennsylvania

(US News, July 19, 2011)

The First shall be First…

Well, the latest US News hospital rankings are out – and as usual, Johns Hopkins is at the top of the list – as it has been for the last seventeen years.  Or is it at the top of the list because it was ranked #1 for the previous sixteen years?

How much do these or any rankings actually reflect the reality of the health care provided?  What are they really measuring?  These are important questions to consider.  While US News uses these rankings to sell magazines, other people are using these results to plan their medical care.

 So, what do these rankings or studies show[1]?  The answer depends on two things:

1. Who you ask.
2. The measure(s) used.

Reed Miller, over at Heartwire.com, reported the results of a 2010 study by Dr. Ashwini Sehgal of Case Western Reserve examining the US News rankings (re-posted below).  Dr. Sehgal explains that much of what US News is measuring is not scientific or objective data – it’s public opinion, which, as we all know, may have little basis in actual fact.  Ask any fifteen-year-old girl who is the most qualified candidate for president – now imagine Justin Bieber in the White House[2].  An extreme example, to be sure – but one that fully illustrates the pitfalls of relying on this sort of subjective data.

News versus Tabloid

This isn’t the first time that the magazine has come under scrutiny for the methodology of its ‘ranking’ practices.  Teasley (1996) exposed similar flaws in its ranking schemes almost fifteen years ago.  Green, Wintfeld, Krasner & Wells (1997) explained in JAMA that there were additional limitations to the US News approach due to a lack of available standardized data, despite the magazine using what they considered to be a strong conceptual design.  They cite the same concern – the weight given to reputation – as a major deficiency.

However, these significant oversights do not prevent the media and hospitals from continuing to present the results as a legitimate measure of performance.  In fact, more people know about these rankings than about government data collected for the same purpose.

Core Measures

Compare this well-known ranking with governmental attempts to quantify and compare American hospitals.  Medicare and the Department of Health and Human Services quantify and rank hospital performance using a ‘score card’ system known as “Hospital Compare.”

While this government system is far from perfect since it relies heavily on individual physician documentation, it is an evidence-based measurement tool, making it far more objective.  The government rating system uses a series of specific criteria called Core Measures.  These core measures are used to evaluate adherence to accepted treatment strategies for different conditions such as heart failure, heart attack, and pneumonia.  This data is then published on-line for consumers.

The advantage of measurement tools such as Core Measures is that they provide an easily applied, checklist-style scoring system.

For example, the core measures used to evaluate the appropriateness of treatment for an acute myocardial infarction (heart attack) are pretty clear cut:

– Amount of time (in minutes) for the patient to receive either cardiac catheterization or thrombolytic (“clot-busting”) drugs

– How long (in minutes) until the patient receives a first EKG after presenting with complaints consistent with AMI

– Did the patient receive aspirin on arrival?

– Did the patient receive an ACE inhibitor/ARB for LV dysfunction?

– Did the patient receive prescriptions for beta blockers, an ACE inhibitor/ARB, and aspirin at discharge?

As you can see – all of these measurements are clear, easily defined and objective in nature.  The main problem with core measures in many institutions is getting doctors to clearly document whether or not they instituted these measures.  (But that too reflects on the institution, so hospitals with multiple staff members not adhering to the national guidelines will have lower scores than other facilities.)  In fact, this is the main criticism of this measurement tool – and this criticism often comes from the very doctors who omit this data.  (In recent years, hospitals have tried to address this shortcoming by making documentation an easier, more streamlined process – and by allowing other members of the health care team to participate in this documentation.)

This data is then compared to other hospitals nationwide, with subsequent percentile ratings and status.  For example, a hospital may rank higher or lower than the national average for death rate or re-admission for heart attack, pneumonia, post-surgical infection, or several other diagnoses/conditions.  Consumers can also use this database to compare different facilities to each other (such as several hospitals in a local area).
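The scoring logic described above – checklist adherence plus a percentile comparison against other hospitals – can be sketched in a few lines.  This is only an illustrative sketch: the measure names, patient charts and scores below are invented, not actual Hospital Compare data.

```python
# Illustrative sketch of a checklist-style "core measures" score.
# Measure names, charts and scores are invented for illustration only.

AMI_MEASURES = [
    "aspirin_on_arrival",
    "ekg_within_10_min",
    "reperfusion_within_90_min",
    "ace_arb_for_lv_dysfunction",
    "discharge_meds_prescribed",
]

def adherence_score(charts):
    """Fraction of (chart, measure) opportunities that were met.

    `charts` is a list of dicts, one per patient chart reviewed,
    mapping measure name -> True/False."""
    met = sum(chart[m] for chart in charts for m in AMI_MEASURES)
    total = len(charts) * len(AMI_MEASURES)
    return met / total if total else 0.0

def percentile_rank(score, all_scores):
    """Percent of hospitals scoring at or below `score`."""
    return 100.0 * sum(s <= score for s in all_scores) / len(all_scores)

# Two invented charts: one fully adherent, one missing a timely EKG.
charts = [
    dict.fromkeys(AMI_MEASURES, True),
    {**dict.fromkeys(AMI_MEASURES, True), "ekg_within_10_min": False},
]
score = adherence_score(charts)  # 9 of 10 opportunities met -> 0.9
```

A hospital's score can then be placed against other hospitals' scores with `percentile_rank(score, all_scores)`, mirroring the "higher or lower than national average" comparison described above.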

The accessibility and publication of this data for health care consumers is a very real and meaningful public service.  This allows people to make more informed choices about their care, without relying on third-party anecdotes, or reputation alone.

How does this tie in with surgical tourism?  (or what does this have to do with Bogotá Surgery?)

As part of my efforts to provide objective, unbiased information on the institutions, physicians and surgical procedures in Bogotá, Colombia, I applied the Core Measures criteria as part of my evaluation.  I used these measures not on an institutional level, but on an individual provider level – to each and every surgeon that participated in this project.

However, core measures (NSQIP) were not the only tool I used during my assessment.  I also used several other measurements to get a fair, well-balanced evaluation of the providers listed in my publication.  (Other criteria used as part of this process will be discussed more fully in a future post.)

Surgical tourism information needs to be clear, objective and meaningful to be of use to potential consumers.  Reputation alone is not sufficient when considering medical treatment either in the United States or abroad – and consumers should seek out this information to help safeguard their health.

Article Re-post from Heartwire.com

Popular best-hospital list tracks subjective reputation, but not quality measures

April 20, 2010 | Reed Miller

Cleveland, OH – US News & World Report‘s list of the top 50 hospitals in the US reflects the subjective reputations of the institutions and not objective measures of hospital quality, according to a new analysis [1].

The magazine’s ranking methodology includes the results of a survey of 250 board-certified physicians from across the country, plus various objective data such as availability of specific medical technology, whether the hospital is a teaching institution or not, nurse-to-patient ratios, a risk-adjusted mortality index based on Medicare claims, and whether the American Nurses Credentialing Center has designated the center as a nurse magnet.

In his analysis of the US News rankings system, published April 19, 2010 in the Annals of Internal Medicine, Dr Ashwini Sehgal (Case Western Reserve University, Cleveland, OH) points out that previous investigations have compared the US News rankings with external measures and found that highly ranked cardiology hospitals had lower adjusted 30-day mortality among elderly patients with acute MI, but that many of the high-ranked centers scored poorly in providing evidence-based care for patients with MI and heart failure. Also, performance on Medicare’s core measures of MI, congestive heart failure, and community-acquired pneumonia were frequently at odds with US News rankings.

Sehgal sought to examine a broader range of measures internal to the US News system and “found little relationship between rankings and objective quality measures for most specialties.” He concludes that “users should understand that the relative standings of US News & World Report‘s top 50 hospitals largely indicate national reputation, not objective measures of hospital quality.”

Sehgal performed multiple complementary statistical analyses of the US News & World Report 2009 rankings of the top 50 hospitals in the US, as well as the distribution of reputation scores among 100 randomly selected unranked hospitals.

He examined the association between reputation score and the total score and the connection of objective measures to reputation score. According to Sehgal’s analysis, the statistical association is strong between the total US News score and the reputation score. The association between the total US News score and total objective scores is variable, and there is minimal connection between the reputation score and objective scores.

The majority of rankings based on reputation score alone agreed with US News overall rankings. The top five heart and heart-surgery hospitals based on reputation score alone were the same as those of the US News top five heart hospitals (Cleveland Clinic, Mayo Clinic—Rochester, Johns Hopkins University, Massachusetts General Hospital, and the Texas Heart Institute), and 80% of the 20 heart and heart-surgery hospitals with the best reputation scores were also on the US News top-20 heart and heart-surgery centers.

Objective measures were relatively more influential on cardiology centers’ total scores than in some other categories, but reputation still carried far more weight than objective measures. Sehgal used the nonparametric Spearman rank correlation, ρ, to assess the univariate associations among reputation score, total objective-measures score, and total US News score. The ρ² value indicates the proportion of variation in the ranks of one score that is accounted for by the other score.
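As a rough illustration of the statistic Sehgal used, a Spearman rank correlation (no-ties formula) can be computed in a few lines.  The reputation and total scores below are invented for illustration – they are not Sehgal's actual data.

```python
# Minimal Spearman rank-correlation sketch (assumes no tied values).
# Scores below are invented, not data from Sehgal's analysis.

def ranks(xs):
    """Rank of each value (1 = smallest), assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman rho via the no-ties formula: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

reputation = [9.1, 7.4, 6.8, 5.2, 4.9, 3.3]  # invented reputation scores
total = [88, 83, 80, 74, 71, 60]             # invented total scores
rho = spearman_rho(reputation, total)  # 1.0: the invented data are monotone
```

A ρ near 1 for reputation vs. total score – with a much weaker ρ for objective measures vs. total score – is the pattern Sehgal describes; ρ² then gives the proportion of rank variation one score explains in the other.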

Additional Resources and References

1. Teasley, C. E. III (1996). Where’s the best medicine? The hospital rating game. Eval Rev. 1996 Oct;20(5):568-79.

2. Green J, Wintfeld N, Krasner M & Wells C (1997). In search of America’s best hospitals. The promise and reality of quality assessment. JAMA. 1997 Apr 9;277(14):1152-5.

3. Sehgal AR (2010). The role of reputation in U.S. News & World Report’s rankings of the top 50 American hospitals. Ann Intern Med. 2010 Apr 20;152(8):521-5.


[1] US News may be the best known and most widely published source, but there are multiple studies and reports attempting to rank facilities and services nationwide.

[2] This is probably not a fair analysis given the current state of American politics.
