Thursday, June 25, 2009

Recession-Related Surge in Nursing Employment Just a Blip, Study Cautions

The worst economic recession in the post–World War II era has shed jobs across almost all industrial sectors, pushing the national unemployment rate close to 10%. Yet for one group in the slowing but still robust healthcare sector — hospital-based registered nurses (RNs) — the current economic downturn has led to a record employment spike, according to a study published online June 12 in Health Affairs.

However, this spike is only temporary, warns lead author Peter I. Buerhaus, PhD, RN, the Valere Potter Professor of Nursing at Vanderbilt University School of Nursing, Nashville, Tennessee. "We've eased the nursing shortage, but we haven't permanently ended it," Dr. Buerhaus told Medscape Nursing.

The history of such shortages, Dr. Buerhaus and the study coauthors write, is inversely related to the general health of the economy: RNs are in short supply during boom periods and are available to fill vacancies when the economy is spiraling down.

In 2001, 3 years after hospitals began reporting difficulty filling vacancies, RN shortages peaked. With vacancy rates reaching a national average of 13%, an estimated 126,000 full-time-equivalent (FTE) RN positions went unfilled, forcing "many hospitals to close nursing units and restrict operations."

The 2001 recession altered this trend. Faced with a bad economy and the prospect of reduced family income, nurses already in the workforce increased their hours, and those who had left it returned, in part to take advantage of the substantially higher RN wages that hospitals began offering in 2002. The exigencies of the recession, coupled with the lure of higher wages, worked like a magnet: During the next 2 years, hospital RN employment surged by 184,000 FTE RNs. "At the time, that was a world record — right off the charts," Dr. Buerhaus said.

But if hospital officials thought their nurse vacancy problems were solved, they were wrong. Once the economy recovered, the shortage problem reasserted itself. In fact, the annual growth in FTE RN employment between the economic boom years of 2004 and 2006 was −0.9%. It has taken this most recent recession, which some argue started as early as the final months of 2007, to reverse the nursing shortage problem yet again.

In 2007 and 2008, according to the study, hospital-based RN employment increased by an estimated 243,000 FTEs. As in the 2001 recession, bad economic times have pushed nurses back into the labor market, and for many of the same reasons as before. But the lure of higher wages is not among them; for the most part, said Dr. Buerhaus, hospitals did not increase RN wages in 2007 and 2008. That fact, he says, makes the dramatic surge in RN employment during this recession all the more surprising. "From our past studies, we knew the effect the recession would have. But we were completely stunned by the size of the increase. Looking back, there's simply no 2-year period of growth in the hospital employment sector that rivals this one."

For nurses fresh out of school, the influx of new hires has not always worked to their benefit. "Their ability to find the job of their dreams in the hospital down the street from where they live has probably changed," Cheryl Peterson, MSN, RN, director of Nursing Practice and Policy at the American Nurses Association, told Medscape Nursing. "We've also found that employers can be a little more selective these days, holding out for someone with more experience rather than hiring a recent graduate or someone with limited experience."

Despite the trend toward older, more experienced hires, however, younger nurses are by no means absent from the workforce. In 2008, for example, the number of FTE RNs aged 23 to 25 years — 130,000 — was the highest it has been in more than 2 decades, according to the study. In addition, in 2008 there was a large jump in the number of younger FTE nurses with children younger than 6 years, compared with 2007 — a phenomenon the authors say is related to families' efforts to boost their incomes during hard economic times. Overall, in 2008, employment of RNs younger than 35 years increased by a dramatic 74,000, with most ending up in hospital-based jobs.

Getting a Handle on Looming Shortages

Given the oddly cyclical nature of nurse employment, however, few if any in the nursing community are sanguine about the recent employment surge. "We can't be lulled into thinking that the problem of a shortage is over," said Ms. Peterson.

Similar to past shortages, Dr. Buerhaus said, future ones will be driven by the interaction of supply and demand. On the demand side, he and his coauthors lean heavily on projections outlined by the federal Health Resources and Services Administration (HRSA). Noting that "changing demographics constitute a key determinant of projected demand for FTE RNs," HRSA points to the "much greater per capita healthcare needs" of an aging baby boom generation, the leading edge of which will approach age 65 years starting around 2010.

Dr. Buerhaus and coauthors also consider something likely to drive demand that HRSA does not — the prospect that healthcare reform will expand coverage to more citizens, thereby placing even greater pressure on the nursing workforce.

On the supply side of the equation, the authors say, the waves of baby boomer RNs retiring during the next decade will be significant. So too will be the prospective size of the successive cohorts that will replace them. Will these cohorts be large enough to keep the workforce from shrinking, and yet too small "to meet the projected demand"? If so, the authors say, a much older RN workforce than ever before may be left to do the heavy lifting.

Action Plan for Policymakers

The authors conclude by proposing a series of action steps for policymakers. They want to strengthen the current workforce and, in particular, to "improve the ergonomic environment of the clinical workplace" for older nurses. They want to improve communication skills, especially for RNs educated in other countries — a group that has not only helped to fuel the current surge but also is likely to play a significant role in future supply scenarios. Perhaps most notably, they want to see steps taken to expand the numbers of 2 "underrepresented" groups in nursing — men and Hispanics.

Representatives of each group are sympathetic, although they cite challenges.

"We're up against the historical image of men as doctors and women as nurses," Demetrius J. Porsche, DNS, RN, dean of the Louisiana State University Health Center School of Nursing and president of the American Assembly for Men in Nursing, told Medscape Nursing. Among the barriers to full participation that Dr. Porsche sees are unsupportive families, school counselors who "don't understand that nursing is an autonomous profession, not just a handmaiden to doctors," and too few public images of men in the profession. Each year, Dr. Porsche explained, the American Assembly for Men in Nursing presents a series of awards aimed at enhancing the status of men in nursing, including one for the best workplace and another for the best nursing school/college.

"The push for men in nursing is a diversity issue," he said. "The profession should be open and welcoming not only to all races and ethnicities but to both genders."

Anyone recruiting Hispanics to nursing also faces barriers, said Norma Martinez-Rogers, PhD, RN, FAAN, associate clinical professor in the Department of Family Nursing at the University of Texas Health Science Center, San Antonio, and president of the National Association of Hispanic Nurses.

The biggest barrier, Dr. Martinez-Rogers told Medscape Nursing, is money. Despite some funding, she said, "many Hispanic students end up having to pay back big loans." Then there's the work issue, she added. Used to holding down part-time jobs to make ends meet before entering nursing school, too many Hispanic students try, at their peril, to duplicate that work schedule once enrolled. "Nursing school is all about the application of the content that you're learning, which is very time consuming," Dr. Martinez-Rogers said. "Students can hold down part-time jobs, but they risk having to repeat a course."

Hoping for more funding and support for what she characterizes as "not a brand-new problem," Dr. Martinez-Rogers has been talking to the Congressional Hispanic Caucus about renewed efforts to bring more Hispanics into nursing. One step would be to work with universities — her own included — that have the potential, because of their location, to enroll significant numbers of Hispanic nursing students. Once enrolled, she said, such students need to be mentored while in school and encouraged after they graduate. Her own university has what she described as a "student-driven" mentorship program; for its part, the National Association of Hispanic Nurses is working to develop its own national mentorship program.

Dr. Buerhaus thinks that expanding the capacity of educational programs — for Hispanics, men, and anyone else interested in becoming a nurse — is key. So, too, he said, is turning out the "right" nurses: "Beyond all the rhetoric, we need the future nurse to be really, really sharp in the areas of both quality and safety."

The ANA's Cheryl Peterson agrees, but added that nursing education "can't change on a dime" and that employers must also do their part by giving the freshly minted nurse the necessary "space to learn."

Source : http://www.medscape.com/viewarticle/704668?sssdmh=dm1.488649&src=nldne

Saturday, June 20, 2009

Glove Perforation May Increase the Risk for Surgical Site Infection

Surgical glove perforation increases the risk for surgical site infection (SSI) unless antimicrobial prophylaxis is used, according to the results of a prospective observational cohort study reported in the June issue of the Archives of Surgery.

"All surgical staff members wear sterile gloves as a protective barrier to prevent hand-to-wound contamination during operations," write Heidi Misteli, MD, from University Hospital Basel in Basel, Switzerland, and colleagues. "When gloves are perforated, the barrier breaks down and germs are transferred. With the growing awareness among operating room staff of their risk of exposure to disease from patients, primarily human immunodeficiency virus and hepatitis B virus, gloves have begun to be regarded as a requirement for their own protection."

This study took place at University Hospital Basel, where approximately 28,000 surgical procedures are performed each year. For this analysis, the study sample was a consecutive series of 4147 surgical procedures performed in the Visceral Surgery, Vascular Surgery, and Traumatology divisions of the Department of General Surgery. The main endpoint of the study was rate of SSI, as defined by the Centers for Disease Control and Prevention (CDC), and the main predictor variable was compromised asepsis because of glove perforation.

Of 4147 procedures performed, 188 (4.5%) overall were associated with SSI. Compared with procedures in which asepsis was maintained, procedures in which gloves were perforated had a higher likelihood of SSI, based on univariate logistic regression analysis (odds ratio [OR], 2.0; 95% confidence interval [CI], 1.4 - 2.8; P < .001).

The increase in the risk for SSI with glove perforation was different when surgical antimicrobial prophylaxis was or was not used (multivariate logistic regression analyses test for effect modification, P = .005). When antimicrobial prophylaxis was not used, the odds of SSI were significantly higher for glove perforation vs the group in which asepsis was maintained (adjusted OR, 4.2; 95% CI, 1.7 - 10.8; P = .003). In contrast, the likelihood of SSI was not significantly higher for procedures in which gloves were punctured when surgical antimicrobial prophylaxis was used (adjusted OR, 1.3; 95% CI, 0.9 - 1.9; P = .26).

"Without surgical antimicrobial prophylaxis, glove perforation increases the risk of SSI," the study authors write. "To our knowledge, this is the first study to explore the correlation between SSI and glove leakage in a large series of surgical procedures."

Limitations of this study include 22.1% missing data on glove perforation, the prospective observational (vs randomized controlled) design, possible residual or unknown confounding, and use of nonvalidated techniques to detect glove leakage. In addition, because the study was performed from 2000 to 2001, subsequent changes in the relevant circulating bacteria may limit the applicability of the results.

"Efforts to decrease the frequency of glove perforation, such as double gloving and the routine changing of gloves during lengthy surgical procedures, are therefore encouraged," the study authors conclude. "The present results support an extended indication of surgical antimicrobial prophylaxis to all clean procedures in the absence of strict precautions taken to prevent glove perforation. The advantages of this SSI prevention strategy, however, must be balanced against the costs and adverse effects of the prophylactic antimicrobials, such as drug reactions or increased bacterial resistance."

In an accompanying invited critique, Edward E. Cornwell III, MD, from Howard University Hospital in Washington, DC, notes additional study limitations.

"I do not believe the recommendation to extend antibiotic prophylaxis guidelines is justified," Dr. Cornwell writes. "Although the risk of SSI (with vs without glove perforation) among patients without antibiotic prophylaxis was significant on multivariate analysis, the data in this and other studies cited by the authors much more strongly support the measures suggested for lowering the risk of glove perforation. These measures would be substantially cheaper, more promising for efficacy, and less likely to produce allergies or adverse effects than giving prophylactic antibiotics to all patients."

The Department of General Surgery, University Hospital Basel, and the Freiwillige Akademische Gesellschaft Basel funded this study. The study authors and Dr. Cornwell have disclosed no relevant financial relationships.

Arch Surg. 2009;144:553-558.

Clinical Context

Despite the precautions deployed to maintain asepsis during surgery, the risk for transfer of pathogens remains. The transfer of skin-borne pathogens from staff hands is especially prevalent. All surgical staff members wear sterile gloves as a protective barrier to prevent hand-to-wound contamination during operations. However, the barrier breaks down as soon as the gloves are perforated. Factors leading to an increased risk for perforation include duration of operating time (significantly after 2 hours); improperly fit gloves; and puncture by needles, spiked bone fragments, or sharp surfaces on complex instruments. The impact of glove perforation on the risk for SSI is unknown.

The aim of this study was to determine whether clinically apparent surgical glove perforation increases the risk for SSI.

Study Highlights

  • From January 1, 2000, through December 31, 2001, a prospective observational cohort study was performed at the University Hospital Basel to evaluate the incidence of SSI in association with surgical glove perforation.
  • A consecutive series of 4147 surgical procedures performed in the Visceral Surgery, Vascular Surgery, and Traumatology divisions of the Department of General Surgery was enrolled.
  • Operations requiring no incision and procedures classified as wound class 4 (dirty infected) according to the CDC criteria were excluded from the study.
  • The outcome of interest was SSI occurrence as assessed according to the CDC standards.
  • All incidents of SSI were validated by a board-certified infectious disease specialist on the basis of a comprehensive review of the patient history, initial microbiology results, and outcome at least 30 days after surgery when no implants were involved or more than 1 year after surgery if an implant was in place.
  • The primary predictor variable was compromised asepsis because of glove perforation.
  • Antimicrobial prophylaxis was administered according to CDC guidelines. Patients received prophylaxis if they underwent surgery classified as CDC wound class 3 (contaminated) or class 2 (clean contaminated), surgery classified as class 1 (clean) involving a nonabsorbable implant, or at the discretion of the surgeon.
  • Results of the study demonstrated that the overall SSI rate was 4.5% (188/4147 procedures).
  • SSI was classified as the following: superficial (n = 56), deep (n = 62), and organ/space (n = 70).
  • Asepsis was compromised by glove leakage in 677 interventions (16.3%); SSI occurred in 51 of these procedures (7.5%), compared with 137 of 3470 procedures (3.9%) in which asepsis was not breached.
  • Univariate logistic regression analysis showed a higher likelihood of SSI in procedures in which gloves were perforated vs interventions with maintained asepsis (OR, 2.0; 95% CI, 1.4 - 2.8; P < .001); a worked check of this crude estimate from the raw counts appears after this list.
  • However, multivariate logistic regression analyses showed that the increase in SSI risk with perforated gloves was different for procedures with surgical antimicrobial prophylaxis vs those without surgical antimicrobial prophylaxis (test for effect modification, P = .005).
  • Data were further analyzed for the risk association separately for glove perforations in surgeries with and without antimicrobial prophylaxis.
  • Without antimicrobial prophylaxis, glove perforation entailed significantly higher odds of SSI vs the reference group with no breach of asepsis (adjusted OR, 4.2; 95% CI, 1.7 - 10.8; P = .003).
  • On the contrary, when surgical antimicrobial prophylaxis was applied, the likelihood of SSI was not significantly higher for operations in which gloves were punctured (adjusted OR, 1.3; 95% CI, 0.9 - 1.9; P = .26).
  • Limitations of this study were that 22.1% of the information on glove perforation was missing, the study was a prospective observational study vs a randomized controlled trial, nonvalidated techniques were used to detect glove leakage, and results may be inapplicable because of changes in circulating relevant bacteria since this study was conducted from 2000 to 2001.
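
As a quick arithmetic check of the univariate estimate above, the crude odds ratio can be recomputed directly from the raw counts in the study highlights (51 SSIs in 677 procedures with glove perforation vs 137 SSIs in 3470 procedures with intact asepsis). The short Python sketch below is illustrative only and is not part of the published analysis.

```python
# Crude (unadjusted) odds ratio for SSI with vs without glove perforation,
# recomputed from the raw counts reported above (51/677 vs 137/3470).
ssi_perforated, n_perforated = 51, 677
ssi_intact, n_intact = 137, 3470

odds_perforated = ssi_perforated / (n_perforated - ssi_perforated)  # 51 / 626
odds_intact = ssi_intact / (n_intact - ssi_intact)                  # 137 / 3333

crude_or = odds_perforated / odds_intact
print(f"crude OR = {crude_or:.2f}")  # ~1.98, consistent with the reported univariate OR of 2.0
```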

Clinical Implications

  • Factors that may lead to glove perforation include puncture by needles, spiked bone fragments, or sharp surfaces on complex instruments as well as duration of operating time (> 2 hours) and gloves that do not fit properly.
  • In the absence of surgical antimicrobial prophylaxis, glove perforation increased the risk for SSI.

Source : http://cme.medscape.com/viewarticle/704548?sssdmh=dm1.487338&src=nldne

Friday, June 19, 2009

Cardiac Calcifications

Introduction

Radiologic detection of calcifications within the heart is quite common. The amount of coronary artery calcification correlates with the severity of coronary artery disease (CAD). Calcification of the aortic or mitral valve may indicate hemodynamically significant valvular stenosis. Myocardial calcification is a sign of prior infarction, while pericardial calcification is strongly associated with constrictive pericarditis. Therefore, detecting and recognizing calcification related to the heart on chest radiography and other imaging modalities such as fluoroscopy, CT, and echocardiography may have important clinical implications.

In patients with diabetes mellitus, determining the presence of coronary calcification identifies those at risk for future myocardial infarction and coronary artery disease, and future events could be excluded if no coronary calcifications were present.1

In an asymptomatic population, determination of the presence of coronary calcifications identified patients at risk for future myocardial infarction and coronary artery disease independent of concomitant risk factors. In patients without coronary calcifications, future cardiovascular events could be excluded.2

Pericardial Calcifications

Calcification of the pericardium usually is preceded by a prior episode of pericarditis or trauma. Infectious etiologies for pericarditis include viral agents (eg, coxsackievirus, influenza A, influenza B), tuberculosis, and histoplasmosis.


Incidence


Of patients with pericardial calcification, 50-70% have constrictive pericarditis. Extensive calcification may be present without signs or symptoms of pericardial constriction.


Features

On chest radiographs, pericardial calcification appears as curvilinear calcification usually affecting the right side of the heart (Images 3-4). This is often visualized better on lateral chest radiographs than on frontal views. Calcifications associated with tuberculous pericarditis present as thick, amorphous calcifications along the atrioventricular groove. This pattern may be observed less commonly with other forms of pericarditis as well.

CT is the best technique to detect pericardial calcification; however, overpenetrated films, conventional tomography, fluoroscopy, and MRI may be helpful.

Myocardial Calcifications

Myocardial calcification is more common in males than in females and usually occurs in patients who have sustained sizable infarcts and have survived more than 6 years after infarction. Most of these patients have a dominant right coronary artery, since this favors longer survival after infarction in the region of the left anterior descending coronary artery.


Incidence


Approximately 8% of patients who sustain a large myocardial infarction develop myocardial calcification. In these patients, infarcts usually are large and most frequently involve the anterolateral wall of the left ventricle (LV). LV aneurysm usually is present.


Features


Myocardial calcification appears as a thin, curvilinear opacity, usually toward the apex of the LV. The associated contour abnormality from the aneurysm is frequently noted. Rarely, calcification can appear spherical or platelike.

Left Atrial Calcifications

Detection of left atrial wall calcification has significant clinical implications. Most of these patients have congestive heart failure and atrial fibrillation from long-standing mitral valve disease. Mural thrombi secondary to atrial fibrillation are a frequent source of systemic and pulmonary emboli. Possible complications during cardiac surgery include dislodgement of thrombi, which results in cerebral embolism and uncontrollable hemorrhage if the left atrium (LA) is entered through the calcified region because of LA wall rigidity. LA calcification usually is secondary to endocarditis resulting from rheumatic heart disease, and the amount of calcification is often related to the duration of untreated disease.3


Features


LA calcification may be in the endocardial or subendocardial layer or within a thrombus. Calcification is usually thin and curvilinear (Images 8-9). Three patterns of calcification have been identified.

Type A: Calcification is confined to the LA appendage; the underlying lesion is often mitral stenosis. This type of calcification almost always is associated with thrombus in the appendage.

Type B: The free wall of the LA and mitral valve are calcified, although the valve calcification is not always appreciated from chest radiographs. This pattern indicates advanced mitral stenosis.

Type C: A small area of calcification is confined to the posterior wall of the LA. This results from a jet lesion caused by mitral regurgitation and is termed a MacCallum patch.

Valvular Calcifications

Valvular calcification identified radiographically suggests the presence of a hemodynamically significant stenosis. Dominant valvular insufficiency is not associated with radiographic depiction of calcification, except in patients with calcified stenotic valves secondarily destroyed by endocarditis. The aortic valve calcification is detected most frequently.


Aortic valve calcification


In patients younger than 40 years, a calcified aortic valve usually indicates marked aortic stenosis secondary to a congenital bicuspid aortic valve. In these patients, one cusp of the valve is larger than the other; therefore, the valve cannot function properly, resulting in prolapse, fibrosis, calcification, and stenosis. The average age at which calcification first is detected is 28 years. More than 90% of patients with congenital bicuspid valve have calcification by age 40 years.

In older patients, calcification of the aortic valve may be secondary to aortic sclerosis with degeneration of normal valve leaflets and may be associated with hemodynamically significant aortic stenosis. Aortic valve disease associated with rheumatic heart disease frequently is associated with mitral valve disease. The average age at which aortic valvular calcification first is detected is 47 years in patients with a history of rheumatic fever and carditis. However, aortic valvular calcification is infrequently seen in this entity, and fewer than 10% of patients without congenital bicuspid valve have calcification from age 40-65 years.


Features


* In bicuspid aortic valves, calcification may be nodular, semilunar, or mushroom shaped. A dilated ascending aorta often is seen. A thick, irregular, semilunar ring pattern with a central bar or knob is typical of stenotic bicuspid valves and results from calcification of the valve ring and the dividing ridge of the 2 cusps or the conjoined leaflet (Image 10). Rarely, 3 leaflet valves mimic this pattern because of fusion of 2 of the 3 leaflets. However, none of these features has a high sensitivity or specificity in predicting valvular anatomy.

* In patients with aortic sclerosis, calcification usually is nodular. Diffuse aortic dilatation can be observed. Heart size may be normal; however, LV dilation can occur with decompensation. Nodular calcification of the valve also is observed in patients with rheumatic aortic disease. The ascending aorta may be dilated, and signs of rheumatic mitral valve disease may be present.


Mitral valve calcification


Although mitral leaflet calcification is commonly a sequela of rheumatic mitral valve disease, its appearance may be very subtle, not readily apparent on plain films or echocardiograms.4


Features


* A nodular or amorphous pattern of calcification is observed, and signs of rheumatic mitral stenosis frequently are present. These include enlargement of the LA, especially the LA appendage, and pulmonary venous hypertension with cephalization and interstitial edema seen as Kerley B lines.
* Findings that indicate pulmonary arterial hypertension, such as enlarged central pulmonary arteries, can occur in patients with long-standing disease. Detection of mitral valve calcification from chest radiographs is uncommon; echocardiographic detection is far more common. Detection of the calcification has surgical implications, since in such instances valve replacement is preferred to commissurotomy.


Pulmonary valve calcification


Calcification of the pulmonary valve occurs rarely in patients with pulmonary valvular stenosis. If valve calcification is identified radiologically, the gradient across the valve often exceeds 80 mm Hg; valvar calcification also may be observed in patients with long-standing, severe pulmonary hypertension.

Tricuspid valve calcification

Tricuspid valve calcification is rare and most frequently is caused by rheumatic heart disease; however, it has been associated with septal defects, congenital tricuspid valve defects, and infective endocarditis.

Annular Calcifications

Myocardial fibers attach to the annulus or fibrous skeleton of the heart. The cardiac valves are suspended from the annulus.

The mitral annulus commonly calcifies. Annular calcification is a degenerative process seen most often in individuals older than 40 years and is especially common in women. Such calcification is not clinically important unless it is massive, in which case it can cause mitral insufficiency, atrial fibrillation (in presence of dilated LA), conduction defects, and infrequently mitral stenosis.5


Features


A, J, U, or reverse C-shaped bandlike calcification is observed involving the mitral annulus.6 Calcification can appear O-shaped if the anterior leaflet also is involved. Calcification appears bandlike and of uniform radiopacity compared to the nodular and more irregular opacity of mitral valve calcification (Image 11). Sensitivity of detecting mitral annular calcification is substantially higher with echocardiography. Some recent data suggest that this calcification is a form of atherosclerosis and can be used as a marker for ischemic heart disease.

Aortic annular calcification usually is associated with a calcified aortic valve and may extend superiorly into the ascending aorta or inferiorly into the interventricular septum. This type of calcification is often dystrophic, commonly seen in older individuals, and often clinically insignificant.

Tricuspid annular calcification is rare and usually is associated with long-standing and severe pulmonary hypertension.

Vascular Calcifications

Calcification involving the aortic arch occurs in more than 25% of normal patients aged 61-70 years. The ascending aorta usually is spared; the arch and distal aorta are most commonly involved. Patients with hyperlipidemia and diabetes are predisposed to calcific atherosclerosis at a younger age, and in these patients calcification is observed as a curvilinear density along the ascending aorta and the arch.7

Syphilitic aortitis, an inflammatory aortitis involving the ascending aorta, sinuses of Valsalva and the aortic valve, is observed most commonly in patients older than 50 years. Syphilitic aortitis is associated with aortic insufficiency, ascending aortic aneurysms, and a positive serologic test for syphilis. In these patients, angina resulting from occlusion of the ostia of the coronary vessels also can be present. Calcification occurs in a linear pattern along the ascending aorta.

On gross specimen examination, the aorta has been described as revealing a "tree-bark" appearance (Image 12). Focal ascending aortic calcifications also may be observed in patients with Marfan syndrome and focal aortic dissection. Other causes of ascending aortic calcification include false aneurysm and chronic aortic dissection.

Calcification of the ductus arteriosus in adults may be found in patients with ductus patency as well as in those with occluded ductus. In children, calcification of the ligamentum arteriosum indicates a closed ductus. On a frontal chest radiograph, calcification is observed as a curvilinear or nodular density between the aorta and pulmonary trunk.

Coronary Artery Calcifications

Coronary artery disease (CAD) is the leading cause of death in the United States. According to one American Heart Association estimate, in 1 year, at least 1 million Americans suffer from angina or myocardial infarctions. Of patients who suffer myocardial infarctions, 30% are younger than 65 years and 4% are younger than 45 years.

Evaluation of patients with CAD includes patient history (including review of symptoms and significant coronary risk factors), physical examination, and evaluation of a resting ECG. In patients with symptoms suggestive of CAD, additional investigation, including the physiologic response to stress, may be indicated. This may include stress ECG, stress echocardiography or MRI, or radionuclide perfusion imaging. Coronary calcification is a recognized marker for atherosclerotic CAD. Calcification can be identified on plain radiographs, fluoroscopy, and CT. More reproducible CT evaluation, including quantitation of coronary calcium, may be performed using helical and electron beam CT (EBCT).

Physicians are increasingly using CT detection of calcification to detect subclinical CAD, which may result in early initiation of diet and drug therapy.

Imaging of coronary calcification

Numerous modalities exist for identifying coronary calcification, including plain radiography, fluoroscopy, intravascular ultrasound, MRI, echocardiography, and conventional, helical, and electron-beam CT (EBCT).

Plain radiographs have poor sensitivity for detection of coronary calcification and have a reported accuracy as low as 42% (Image 15).

Fluoroscopy was the most frequently used modality in detecting coronary artery calcification before the advent of CT. The ability of fluoroscopy to detect small, calcified plaques is poor. In a recent study, only 52% of calcific deposits observed on EBCT were identified fluoroscopically. In addition, fluoroscopy is operator dependent, and certain patient characteristics (eg, body habitus, overlying anatomic structures, calcification in overlying anatomic regions) can compromise fluoroscopic examination.8

CT is highly sensitive for detecting calcification. In a study using calcification on CT as a marker of significant angiographic stenosis, sensitivities of 16-78% were reported. Reported specificities were 78-100%, and positive predictive values were 83-100%, suggesting that CAD may be likely to occur when coronary calcification is observed on CT. Conventional CT demonstrates calcification in 50% more vessels than fluoroscopy does in patients with angiographically proven stenosis. However, conventional CT has a slower scan time and is more prone to artifacts from cardiac and respiratory motion and volume averaging than helical or EBCT.9,10


Electron-beam CT


EBCT minimizes motion artifacts, since cardiac-gated imaging can be triggered by the R wave of the cardiac cycle.11 Imaging can be performed in diastole, minimizing cardiac motion. Typically, 20 contiguous, 100-millisecond, 3-mm thick sections are obtained during 1 or 2 breath-holds. Coronary calcification is observed as a bright white area along the course of coronary vessels.


Helical CT


Scans are performed with acquisition times of approximately 0.5 second down to 250 milliseconds; the faster acquisitions are possible with the newer multidetector scanners. Calcific deposits are identified as bright white areas along the course of coronary arteries (Image 16).

Coronary calcifications detected on EBCT or helical CT can be quantified, and a total calcification score can be calculated. In this scheme, an arbitrary pixel threshold of +130 Hounsfield units (HU) (+90 for some helical scanners) covering an area greater than 1.0 mm² often is used to detect coronary artery lesions. Regions of interest are placed around each area of calcification; once a region of interest is placed, the scanner software displays the peak calcification attenuation in HU and the area of the calcified region in square millimeters, along with the volume or Agatston score. The volume score is based on the area of the lesion, while the Agatston score is weighted to account for the attenuation of the pixels as well as the area.

In the Agatston scoring system, lesions of +130-200 HU are multiplied by a factor of 1, lesions of +201-300 HU by a factor of 2, lesions of +301-400 HU by a factor of 3, and lesions greater than +400 HU by a factor of 4. The sum of the individual lesion scores equals the score for that artery, and the sum of all lesion scores equals the total calcification score.
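
To make the weighting scheme concrete, the sketch below implements an Agatston-style calculation along the lines described above. It is a simplified illustration; the function names and the example lesion values are hypothetical, and real scanner software applies the detection threshold and region-of-interest handling described earlier before any scoring.

```python
# A minimal Agatston-style scoring sketch based on the weighting described above.
# Illustrative only; not the software of any particular scanner.

def hu_weight(peak_hu: float) -> int:
    """Weighting factor derived from the peak attenuation of a detected lesion."""
    if peak_hu < 130:
        return 0   # below the usual detection threshold, not scored
    if peak_hu <= 200:
        return 1
    if peak_hu <= 300:
        return 2
    if peak_hu <= 400:
        return 3
    return 4

def agatston_score(lesions: list[tuple[float, float]]) -> float:
    """Sum of (lesion area in mm^2) x (HU weight) over all detected lesions.

    `lesions` is a list of (area_mm2, peak_hu) pairs, one per calcified focus.
    """
    return sum(area * hu_weight(peak_hu) for area, peak_hu in lesions)

# Example: three hypothetical lesions in one artery
# 4.2*1 + 2.0*3 + 1.5*4 = 16.2
print(agatston_score([(4.2, 180), (2.0, 350), (1.5, 520)]))
```

In this scheme, a score of zero corresponds to no detectable lesions, which is consistent with the high negative predictive value of a zero calcification score discussed below.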

In one study, a total calcification score of 300 had a sensitivity of 74% and a specificity of 81% in detecting obstructive CAD. The negative predictive value of a zero calcification score was 98%. In another study, sensitivity for detecting calcific deposits in patients with angiographically significant stenosis was 100%, and specificity was 47%. In the same study, 8 patients without calcification showed no angiographic evidence of CAD, while 28 patients with calcification showed mild or moderate CAD.

However, despite the high sensitivity of EBCT, calcification scores do not always predict significant stenosis at the site of calcification. In another study, EBCT was compared with coronary angiography; only 1 patient with stenosis greater than 50% on angiography did not demonstrate coronary calcification on EBCT. Thus, absence of calcification appears to be a good predictor of the absence of significant luminal stenosis. However, absence of calcification does not always indicate the absence of atherosclerotic plaque.

A multicenter study reviewed cardiac event data in 501 mostly symptomatic patients with CAD who underwent both EBCT and coronary angiography. In this group, 1.8% of patients died and 1.2% had nonfatal myocardial infarctions during a mean follow-up period of 31 months. A calcification score of 100 or more was highly predictive in separating patients who had cardiac events from those who did not.

Conclusion


The amount of coronary calcification relates to the extent of atherosclerosis, although the relationship between arterial calcification and the probability of plaque rupture is unknown. A zero calcification score is a good predictor of absence of significant CAD. Detecting extensive coronary calcification on CT appears to be a marker of significant atherosclerotic burden and serves as an indication for a more aggressive evaluation of coronary risk factors and an early institution of dietary and/or drug therapy. However, the full implications of coronary calcification detection on CT and the role of this modality must await the results of ongoing investigations.


Keywords

cardiac calcifications, coronary artery calcification, coronary artery disease, aortic valve calcification, mitral valve calcification, valvular stenosis, myocardial calcification, pericardial calcification, constrictive pericarditis.

Source : http://emedicine.medscape.com/article/352054-overview?src=emed_whatnew_nl_0#calcifications

Thursday, June 18, 2009

Can Nitroglycerin Be Given to a Patient Who Has Taken Sildenafil?

Question

When a male patient who had taken sildenafil presents with an acute coronary syndrome (ACS), how real is the risk for serious hypotension if nitroglycerin (NTG) is titrated intravenously (IV)?

Response from Joanna Pangilinan, PharmD, BCOP
Pharmacist, Comprehensive Cancer Center, University of Michigan Health System, Ann Arbor, Michigan

The phosphodiesterase-5 (PDE5) inhibitor sildenafil is approved by the US Food and Drug Administration for the treatment of erectile dysfunction.[1] Sildenafil is contraindicated in combination with nitrates due to the risk for a pharmacodynamic drug interaction that can result in severe hypotension[1] and death.[2]

Men with coronary artery disease (CAD), a condition with increased prevalence of erectile dysfunction, may in certain medical situations be candidates for nitrates. Some patients with CAD could experience an ACS within 24-48 hours of taking a PDE5 inhibitor.[3]

In order to study this drug-drug interaction, Parker and colleagues[4] performed a randomized, double-blind, crossover trial to evaluate the effect of IV NTG in 34 men with stable CAD who received sildenafil 100 mg or placebo. Telemetry monitoring was used to assess supine blood pressure and heart rate at baseline and every 3 minutes during the assessment periods, which were 15 minutes prior to sildenafil/placebo administration, 15 minutes prior to NTG initiation, and 9 minutes after each change in NTG dose level (maximum 160 µg/min).

Sildenafil administration resulted in a mean ± standard deviation maximum decrease in systolic/diastolic blood pressure of 12 ± 12/8 ± 8 mm Hg compared with 5 ± 11/4 ± 7 mm Hg after placebo administration. Heart rate change after sildenafil was 2 ± 3 beats per minute and -1 ± 4 beats per minute for placebo. The NTG dose (median maximum) was 80 µg/min (range 0-160) for the sildenafil phase and 160 µg/min (range 20-160) for the placebo phase. The maximum rate of 160 µg/min was tolerated by 8 (25%) men during the sildenafil phase vs 19 (59%) during the placebo phase (P = .0008). Hypotension occurred in significantly more patients in the sildenafil group. The authors recommend caution when extrapolating these results to other subgroups of patients, including those with ACS.

As treatment of ACS should include all of the recommended therapeutic interventions, nitrates may not be required for a patient who develops an ACS 24-48 hours after administration of a PDE5 inhibitor.[3] Some suggest that recent sildenafil administration may not be absolutely contraindicated in patients who require IV NTG as long as the blood pressure is monitored appropriately.[3]

The American College of Cardiology (ACC) and American Heart Association (AHA) joint guidelines for the management of patients with unstable angina/non-ST-elevation myocardial infarction recommend that NTG or nitrates not be given to these patients within 24 hours of sildenafil.[5] Nitrates should also be avoided after use of other PDE5 inhibitors.[5,6] An ACC/AHA expert consensus document suggests that nitrates may be considered 24 hours after sildenafil dosing. Further delay may be necessary in the patient who might have a prolonged elimination half-life of sildenafil due to renal or hepatic dysfunction or coadministration of a cytochrome P450 3A4 inhibitor. If NTG is initiated in such circumstances, caution and careful monitoring are necessary.[7]
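
As a rough illustration of the timing considerations summarized above, the sketch below encodes the 24-hour window and the prolonged half-life caveat as simple checks. It is an assumption-laden teaching aid, not a validated clinical decision tool, and the function and its messages are hypothetical.

```python
# Illustrative sketch only; not a clinical decision tool. The 24-hour window after
# sildenafil comes from the ACC/AHA guidance discussed above; treating renal or
# hepatic dysfunction or CYP3A4-inhibitor use as reasons for further delay reflects
# the "prolonged elimination half-life" caveat in the text.

def nitrate_timing_note(hours_since_sildenafil: float,
                        prolonged_half_life: bool = False) -> str:
    if hours_since_sildenafil < 24:
        return "Within 24 hours of sildenafil: nitrates are contraindicated."
    if prolonged_half_life:
        return ("More than 24 hours have elapsed, but a prolonged sildenafil half-life "
                "(renal/hepatic dysfunction or a CYP3A4 inhibitor) suggests further delay "
                "and careful monitoring.")
    return "More than 24 hours since sildenafil; if NTG is initiated, monitor blood pressure closely."

print(nitrate_timing_note(12))
print(nitrate_timing_note(30, prolonged_half_life=True))
```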

In conclusion, administration of any form of nitrate to a patient who has taken sildenafil poses a real risk for serious hypotension. Coadministration is contraindicated and should be avoided.

References

  1. Viagra® [package insert]. New York, NY: Pfizer Inc; 2008.
  2. US Food and Drug Administration. Center for Drug Evaluation and Research. Postmarketing safety of sildenafil citrate (Viagra). Available at: http://www.fda.gov/cder/consumerinfo/viagra/safety3.htm. Accessed April 2, 2009.
  3. Werns SW. Are nitrates safe in patients who use sildenafil? Maybe. Crit Care Med. 2007;35:1988-1990.
  4. Parker JD, Bart BA, Webb DJ, et al. Safety of intravenous nitroglycerin after administration of sildenafil citrate to men with coronary artery disease: a double-blind, placebo-controlled, randomized, crossover trial. Crit Care Med. 2007;35:1863-1868.
  5. Anderson JL, Adams CD, Antman EM, et al. ACC/AHA 2007 guidelines for the management of patients with unstable angina/non-ST-elevation myocardial infarction: executive summary. A report of the American College of Cardiology/American Heart Association task force on practice guidelines (Writing Committee to revise the 2002 guidelines for the management of patients with unstable angina/non-ST-elevation myocardial infarction). Circulation. 2007;116:803-877.
  6. Kostis JB, Jackson G, Rosen R, et al. Sexual dysfunction and cardiac risk (the Second Princeton Consensus Conference). Am J Cardiol. 2005;96:313-321.
  7. Cheitlin MD, Hutter AM, Brindis RG, et al. Use of sildenafil (Viagra) in patients with cardiovascular disease. J Am Coll Cardiol. 1999;33:273-282.
Source : http://www.medscape.com/viewarticle/703602?src=mp&spon=34&uac=133298AG

Saturday, June 13, 2009

ADA 2009: Expert Committee Recommends Use of Hemoglobin A1C for Diagnosis of Diabetes

The American Diabetes Association (ADA), the International Diabetes Federation (IDF), and the European Association for the Study of Diabetes (EASD) have joined forces to recommend the use of the hemoglobin A1C assay for the diagnosis of diabetes.

The international expert committee's recommendations were announced here on Friday during the opening hours of the ADA's 69th Scientific Sessions and released simultaneously online in the July issue of Diabetes Care.

"This is the first major departure in 30 years in diabetes diagnosis," committee chairman David M. Nathan, MD, director of the Diabetes Center at Massachusetts General Hospital and professor of medicine at Harvard Medical School in Boston, declared in presenting the committee's findings.

"A1C values vary less than FPG [fasting plasma glucose] values and the assay for A1C has technical advantages compared with the glucose assay," Dr. Nathan said. A1C gives a picture of the average blood glucose level over the preceding 2 to 3 months, he added.

"A1C has numerous advantages over plasma glucose measurement," Dr. Nathan continued. "It's a more stable chemical moiety.... It's more convenient. The patient doesn't need to fast, and measuring A1C is more convenient and easier for patients who will no longer be required to perform a fasting or oral glucose tolerance test.... And it is correlated tightly with the risk of developing retinopathy."

A disadvantage is the cost. "It is more expensive," Dr. Nathan acknowledged. However, cost analyses have not been done, "...and costs are not the same as charges [to the patient]."

The committee has determined that an A1C value of 6.5% or greater should be used for the diagnosis of diabetes.

This cut-point, Dr. Nathan said, "is where risk of retinopathy really starts to go up."

He cautioned that there is no hard line between diabetes and normoglycemia, however, "...an A1C level of 6.5% is sufficiently sensitive and specific to identify people who have diabetes."

"We support the conclusion of the committee, that this is an appropriate way to diagnose diabetes," stated Paul Robertson, MD, president of medicine and science at the ADA and professor of medicine at the University of Washington in Seattle.

"Now, we have to refer the committee's findings to practice groups for review of the implications and for recommendations," Dr. Robertson told Medscape Diabetes & Endocrinology after the committee's presentation.

"We purposely avoided using estimated average glucose, or EAG, as this is just a way to convert the A1C into glucose levels.... And one thing we want to try to get away from is the term prediabetes," Dr. Nathan said. "It suggests that people with it will go on to get diabetes, but that is not the case."

"We don't know if we will be diagnosing more patients with diabetes or less, with AIC," Dr. Nathan commented. Cut-off values or practice guidelines have not been established. More study needs to be done first, but "physicians should not mix and match A1C and blood glucose levels. They should stick with one in reviewing a patient's history," Dr. Nathan asserted.

"There is no gold standard assay," said session moderator Richard Kahn, PhD, chief medical and scientific officer of the ADA, which is headquartered in Alexandria, Virginia. "All of these tests measure different things. They all have value. But A1C is the best test to assess risk of retinopathy."

"We [the ADA] are not issuing a position statement at this time," Dr. Robertson stressed when speaking with Medscape Diabetes & Endocrinology. "It is too soon to write a position paper yet. We need to know what we are getting into first."

"Some parts of the world are not going to be able to use this," Dr. Robertson added. "It may be too expensive to use in the developing world. Some of these countries have severe chronic anemia, hemolytic anemia, and so on, where we will have to fall back on traditional tests. We are being very cognizant of the international implications." A1C assays are inaccurate in cases of severely low hemoglobin levels.

"We don't think physicians will have a hard time adopting the test...a lot of them are doing it already. We think it will only take a couple of years to be adopted widely into clinical practice," Dr. Kahn told Medscape Diabetes & Endocrinology. "Physicians won't be shocked by this report, but patients — and insurance companies — might be. There are wider social issues that haven't been looked at yet."

Source : http://www.medscape.com/viewarticle/704021?src=mpnews&spon=34&uac=133298AG

Friday, June 12, 2009

Half of Strokes Early After TIA Occur Within 24 Hours

A new study shows that nearly half of all strokes that occur after a transient ischemic attack (TIA) occur within the first 24 hours, highlighting the need for emergent intervention, the researchers say.

The good news is that the ABCD2 score, a validated risk score, was reliable in this hyperacute phase, meaning that "appropriately triaged emergency assessment and treatment are feasible," the researchers, with senior author Peter M. Rothwell, MD, from the Stroke Prevention Research Unit at Oxford University and John Radcliffe Hospital, in the United Kingdom, conclude.

This is the first rigorous population-based study of the risk for recurrent stroke within 24 hours of TIA, Dr. Rothwell told Medscape Neurology. "We found that nearly half of all the strokes that occur within 30 days after a TIA actually occur within those first 24 hours, so unless we intervene more quickly and treat it as a true emergency, rather than a 'see-urgently' problem, we'll miss the opportunity to prevent some of those early recurrent strokes," he said.

The results, reported on behalf of the Oxford Vascular Study, are published in the June 2 issue of Neurology.

Risk Underestimated

Over the past few years, Dr. Rothwell's group and others have been examining the natural history of TIA and minor strokes, looking at the extent to which the early risk for recurrent stroke has been underestimated in the past and trying to determine how best to prevent recurrent events after the warning signal of a TIA has occurred.

Results of the Early Use of Existing Preventive Strategies for Stroke (EXPRESS) trial, of which Dr. Rothwell was principal investigator, showed that urgent aggressive intervention after a TIA or minor stroke cut the 90-day risk for recurrent stroke by 80%, as well as reducing fatal and nonfatal stroke, disability, hospital admission days, and costs by the same magnitude (Rothwell PM et al. Lancet 2007; 370:1398-1400; Luengo-Fernandez R et al. Lancet Neurol 2009;8:218-219).

On the basis of these kinds of findings, clinical guidelines in most countries have changed significantly, recommending that patients should be assessed within 24 hours of a TIA or minor stroke, down from a recommendation of 7 days only a year ago.

Still, while 24 hours is better than 7 days, "it's still not quite a medical emergency," he said. In this paper, they sought to determine the real risk for recurrence in the 24 hours following a TIA, "to see what the very early risk really is in the first few hours and what might be gained, therefore, by seeing patients even earlier, as well as what might be gained by better public education to get patients to present immediately when they have 1 of these minor episodes."

The ABCD2 risk score aims to help clinicians identify those at highest risk but was derived to predict the risk for stroke at 7 days and has not been examined in this hyperacute phase, Dr. Rothwell noted.

Using data from the Oxford Vascular Study, a prospective, population-based incidence study of TIA and stroke, they determined the risk for recurrent stroke at 6, 12, and 24 hours after an index event.

Of 1247 patients with a first TIA or stroke, 35 had recurrent strokes within 24 hours, all of them in the same arterial territory, the authors report. In 25 of these patients with recurrent strokes, the initial event was a TIA.

Among the 488 patients whose initial event was a TIA, the 25 strokes that occurred within the first 24 hours accounted for 42% of all strokes that occurred within the first 30 days.

Stroke Risk at 6, 12, and 24 Hours Following Transient Ischemic Attack

Time Point (h)    Stroke Risk (%)    95% CI
6                 1.2                0.2 – 2.2
12                2.1                0.8 – 3.2
24                5.1                3.1 – 7.1

"The other thing we were keen to do was make sure that the ABCD2 risk score, which is now embedded in all the national and international guidelines, actually worked for the risk of stroke within the first few hours," Dr. Rothwell noted. "The guidelines say that patients with scores less than 4 needn't be seen quite so urgently as those with higher scores, but that's only really been looked at for 7-day risk."

What they found is that basic triage using the ABCD2 score "still seems reasonable in patients that present within the first few hours," he said. The 12- and 24-hour risks were strongly related to the risk score (P = .02 and .0003, respectively). However, these findings were still based on small numbers of outcomes, they caution, and further studies on this would help to confirm their results.
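
For readers unfamiliar with the score itself, the sketch below reflects the commonly published ABCD2 point assignments (age, blood pressure, clinical features, duration, diabetes). These components are not spelled out in the article above, so treat this as a reference summary offered under that assumption rather than as the study's own definition.

```python
# Commonly published ABCD2 point assignments (assumed here; not detailed in the article).
# The score ranges from 0 to 7; the guidelines cited above distinguish scores below 4 from 4 or more.

def abcd2_score(age: int, sbp: int, dbp: int,
                unilateral_weakness: bool, speech_disturbance: bool,
                duration_min: int, diabetes: bool) -> int:
    score = 0
    score += 1 if age >= 60 else 0                # A: age 60 years or older
    score += 1 if sbp >= 140 or dbp >= 90 else 0  # B: blood pressure >= 140/90 mm Hg
    if unilateral_weakness:                       # C: clinical features
        score += 2
    elif speech_disturbance:
        score += 1
    if duration_min >= 60:                        # D: duration of symptoms
        score += 2
    elif duration_min >= 10:
        score += 1
    score += 1 if diabetes else 0                 # D: diabetes
    return score

# Example: 72 years old, BP 150/85, speech disturbance without weakness,
# symptoms lasting 45 minutes, no diabetes -> 1 + 1 + 1 + 1 + 0 = 4
print(abcd2_score(72, 150, 85, False, True, 45, False))  # 4
```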

Of some concern, 16 of the 25 (64%) recurrent stroke patients with TIA as their initial event did seek medical attention, usually from their family doctor, after their TIA but did not receive antiplatelet therapy acutely, nor were they sent to the acute intervention clinic at their institution. "However, the fact that the majority of patients sought medical attention prior to their recurrence indicates that emergency triage and treatment are feasible, if front-line services recognize the need," the authors write.

Dr. Rothwell added that he sees the medical profession and neurologists in particular as getting the message that patients presenting with a TIA and a high risk score need to be seen "immediately, rather than tomorrow, which is the current guideline."

"I think the bigger challenge is to get that message over to the public, because at the moment only about 50% of patients who have a TIA seek medical attention within 24 hours, and a lot of patients don't seek medical attention at all," he said.

TIA: A Medical Emergency

Asked for comment on these findings, Philip B. Gorelick, MD, professor and head of the department of neurology and rehabilitation and director of the Center for Stroke Research at the University of Illinois College of Medicine at Chicago, said that accumulating data confirm that recent TIA should be treated as a medical emergency.

A recent change in the definition of TIA by the American Heart Association/American Stroke Association (Easton JD et al. Stroke 2009;40:2276-2293) supports this conclusion, Dr. Gorelick noted. The new statement suggests that neuroimaging and diagnostic workup should be carried out within 24 hours of a TIA when patients present within this time period, and that it is reasonable to hospitalize patients who have had such an episode within the previous 72 hours if they have an ABCD2 score of 3 or more, he added.

Importantly, it is critical to rapidly determine the etiology of TIA — for example, whether it results from a cardiac source embolism or large artery disease, Dr. Gorelick noted. "Currently, a tissue-based definition of TIA has been adopted. In aggregate data, it has been estimated that about 39% of [magnetic resonance imaging] diffusion-weighted image studies in patients with TIA show a cerebral ischemic injury pattern, and therefore a cerebral infarction has actually occurred."

The current study emphasizes again the importance of rapid diagnosis and treatment, since there was a 5.1% risk for stroke in the first 24 hours after TIA, with many of the strokes leading to a poor outcome, he said. "Furthermore, the 7-day stroke rate was close to 10%," he noted. "Although 64% of these early cases sought urgent medical attention prior to recurrent stroke, none received antiplatelet therapy acutely."

The lesson from this and other studies is that TIA is not benign, and urgent diagnosis and treatment is indicated, even though this may not occur in real-world experience, Dr. Gorelick concluded. In this effort, the ABCD2 score is a reliable clinical tool that assesses the acute risk for stroke in TIA patients.

"We need to continue to educate the public and healthcare professionals about the importance of recent TIA as a predictor of stroke and the urgency of diagnosis and treatment," Dr. Gorelick told Medscape Neurology. "Emergency TIA assessment and treatment programs have proven to dramatically reduce the risk of stroke after TIA. Widespread establishment of such programs should be considered, as we need to get TIA patients under the care of those who have experience in vascular neurology and who can make a difference."

The study was funded by the UK Medical Research Council, the National Institute of Health Research, the Stroke Association, the Dunhill Medical Trust, and the Oxford Partnership Comprehensive Biomedical Research Centre. The authors report no disclosures.

Neurology. 2009;72:1941-1947. Abstract

Source : http://www.medscape.com/viewarticle/703922?sssdmh=dm1.481497&src=nldne