Saturday, August 15, 2009

High Blood Pressure May Lead To 'Silent' Strokes

"Silent" strokes, which are strokes that don't result in any noticeable symptoms but cause brain damage, are common in people over 60, and especially in those with high blood pressure, according to a study published in the July 28, 2009, print issue of Neurology®, the medical journal of the American Academy of Neurology.

"These strokes are not truly silent, because they have been linked to memory and thinking problems and are a possible cause of a type of dementia," said study author Perminder Sachdev, MD, PhD, of the University of New South Wales in Sydney, Australia. "High blood pressure is very treatable, so this may be a strong target for preventing vascular disease."

The study involved 477 people age 60 to 64 who were followed for four years. At the beginning of the study, 7.8 percent of the participants had silent lacunar infarctions, small areas of damage to the brain seen on MRI that never caused obvious symptoms. They occur when blood flow is blocked in one of the arteries leading to areas deep within the brain, such as the putamen or the thalamus. By the end of the study, an additional 1.6 percent of the participants had developed "silent" strokes.

People with high blood pressure were 60 percent more likely to have silent strokes than those with normal blood pressure. Also, people with another type of small brain damage called white matter hyperintensities were nearly five times as likely to have silent strokes as those without the condition.

The study was supported by the National Health and Medical Research Council of Australia.

Source : http://www.sciencedaily.com/releases/2009/07/090727191241.htm

Thursday, July 23, 2009

Eating Fish, Omega-3 Oils, Fruits And Veggies Lowers Risk Of Memory Problems

A diet rich in fish, omega-3 oils, fruits and vegetables may lower your risk of dementia and Alzheimer's disease, whereas consuming omega-6 rich oils could increase chances of developing memory problems, according to a new study.

For the study, researchers examined the diets of 8,085 men and women over the age of 65 who did not have dementia at the beginning of the study. Over four years of follow-up, 183 of the participants developed Alzheimer's disease and 98 developed another type of dementia.

The study found people who regularly consumed omega-3 rich oils, such as canola oil, flaxseed oil and walnut oil, reduced their risk of dementia by 60 percent compared to people who did not regularly consume such oils. People who ate fruits and vegetables daily also reduced their risk of dementia by 30 percent compared to those who didn't regularly eat fruits and vegetables.

The study also found people who ate fish at least once a week had a 35-percent lower risk of Alzheimer's disease and 40-percent lower risk of dementia, but only if they did not carry the gene that increases the risk of Alzheimer's, called apolipoprotein E4, or ApoE4.

"Given that most people do not carry the ApoE4 gene, these results could have considerable implications in terms of public health," said study author Pascale Barberger-Gateau, PhD, of INSERM, the French National Institute for Health and Medical Research, in Bordeaux, France. "However, more research is needed to identify the optimal quantity and combination of nutrients which could be protective before implementing nutritional recommendations."

In addition, the study found that people who did not carry the ApoE4 gene and who consumed an unbalanced diet characterized by regular use of omega-6 rich oils, such as sunflower or grape seed oil, but not omega-3 rich oils or fish were twice as likely to develop dementia as those who did not regularly consume omega-6 rich oils. The study did not find any association between consuming corn oil, peanut oil, lard, meat or wine and the risk of dementia.

"While we've identified dietary patterns associated with lowering a person's risk of dementia or Alzheimer's, more research is needed to better understand the mechanisms of these nutrients involved in these apparently protective foods," said Barberger-Gateau.

Source : http://www.sciencedaily.com/releases/2007/11/071112163630.htm

Saturday, July 11, 2009

Psychiatric Symptoms May Be First Sign of Undetected Cancer

For some cancer patients, the first manifestation of the disease is a psychiatric symptom. This was found to be particularly true for brain tumors and small-cell lung cancer (SCLC), according to a study in the June 15 issue of the International Journal of Cancer.

Within the first month following a first-time evaluation for a psychiatric symptom, Danish researchers found that the rate of being diagnosed with any type of malignancy was increased 2.61-fold. For brain tumors, the incidence rate ratio was increased almost 19-fold.

"Our study illustrates the importance of making a thorough physical examination of patients with first-time psychiatric symptoms," said lead author Michael E. Benros, MD, from the University of Aarhus, in Denmark, and the Danish Cancer Society. "The overall cancer incidence was highest in persons older than 50 years of age admitted with a first-time mood disorder, where 1 out of 54 patients would have a malignant cancer diagnosed within the first year."

The highest risk was concentrated in the range of 50 to 64 years of age, he told Medscape Oncology. "The overall incidence of cancer was increased almost 4-fold and the incidence of brain tumors was increased 37 times."

Dr. Benros also noted that paraneoplastic neurological disorders have previously been reported in patients who subsequently were found to have small-cell lung cancer. "But our study is the first to indicate that this is also the case with psychiatric symptoms, where patients with SCLC had the highest incidence after patients with brain tumors," he said.

Paraneoplastic neurological disorders are hypothesized to be induced most often in patients with SCLC, and less frequently in non-SCLC, because patients with SCLC produce antibodies that react with both the tumor antigens and the nervous system, explained Dr. Benros. This in turn might induce both the paraneoplastic neurological disorders and the psychiatric symptoms.

"The etiology of paraneoplastic psychiatric disorders is also partly explained by the fact that lung cancer, especially SCLC, tends to have the highest risk of metastasis to the brain," he added. "These metastases could then induce psychiatric symptoms by direct pressure. SCLC can also stimulate ectopic hormone production, which might then induce psychiatric symptoms, but our study cannot provide any evidence regarding causal mechanisms."

High Incidence of Brain Tumors and Lung Cancer

In their study, Dr. Benros and colleagues investigated the possibility that psychiatric symptoms could be caused by an undetected malignancy or be part of a paraneoplastic syndrome. Using data from the Danish Psychiatric Central Register and the Danish Cancer Registry, the researchers evaluated the occurrence of psychiatric symptoms and cancer in 4,320,623 individuals who were followed in the 10-year period from 1994 to 2003. During this time, 202,144 persons had a first-time psychiatric contact and 208,995 were diagnosed with cancer.

From 1994 to 2003, a total of 4132 persons had a first-time psychiatric in- or outpatient contact and were subsequently diagnosed with cancer. Of this group, 1267 patients were diagnosed with cancer within the first year following their first-time psychiatric contact, and within this cohort, 145 persons had primary brain tumors.

The team observed that there was an increased overall incidence rate ratio of cancer during the initial 3-month period following a first-time psychiatric contact. During the first month, the incidence rate ratio of overall cancer was 2.61 (95% CI, 2.31 – 2.95), for brain tumors 18.85 (95% CI, 14.52 – 24.48), and for lung cancer 2.98 (95% CI, 2.16 – 4.12). The specific incidence rate ratio for SCLC was 6.13 (95% CI, 3.39 – 11.07).

In general, the increased incidence rates for most cancers decreased to a nonsignificant level within the first 3 months. The exception was for brain tumors, for which the incidence rate ratio remained significantly elevated during the first 9 months following a first-time psychiatric contact.

"Future studies should address a possible screening of subgroups of psychiatric patients with first-time symptoms," said Dr. Benros. "Psychiatric disorders with onset after the age of 50 could be an indication for brain imaging and, if they are smokers, patients could be examined for antibodies such as anti-Hu, but a more formal analysis of costs and benefits was beyond the scope of our study."

Source : http://www.medscape.com/viewarticle/705450?src=mpnews&spon=34&uac=133298AG

Friday, July 10, 2009

Final Analysis Shows HPV Vaccine Is Effective and Safe

The final results of a large phase 3 trial have confirmed that a bivalent vaccine is highly effective at protecting against human papillomavirus (HPV) types 16 and 18. Licensed under the name Cervarix and manufactured by GlaxoSmithKline, the vaccine was effective at providing protection against cervical intraepithelial neoplasia grade 2+ (CIN2+) lesions associated with HPV-16 and HPV-18, as well as lesions that were associated with nonvaccine types HPV-31, HPV-33, and HPV-45.

These 5 HPV types are responsible for about 82% of all cervical cancers, the researchers report in a paper published online July 7 in the Lancet.

This is 1 of 2 vaccines against HPV that are now commercially available, the other being Gardasil (Merck). At present, only Gardasil is marketed in the United States, while Cervarix is awaiting approval there. But both vaccines are marketed in many other countries worldwide, including most of Europe.

The 2 vaccines also differ in the range of HPV subtypes they target — Cervarix is active against HPV 16 and 18, while Gardasil is active against HPV 6, 11, 16, and 18.

But even though both HPV vaccines appear to be effective at reducing precancerous lesions and have the potential to substantially reduce the incidence of cervical cancer, current approaches are too limited, argue the authors of an accompanying editorial.

Cannot Be Limited to Women

The only efficient way to control the spread of HPV is to "vaccinate the other half of the sexually active population: boys and men," write the editorialists, Karin Michels, PhD, from Harvard Medical School, in Boston, Massachusetts, and Harald zur Hausen, DSc, MD, from the German Cancer Research Center, in Heidelberg, Germany. Dr. zur Hausen was awarded the 2008 Nobel Prize in Physiology or Medicine for his discovery that human papillomaviruses cause cervical cancer.

The primary public-health goal of immunization programs is to halt the spread of infection and ultimately disease, and the current targets for the HPV vaccines are girls and young women who have not yet become sexually active. But while this program will reduce cervical-cancer incidence in a couple of decades, they note, "this subgroup of the population at risk is too small to limit the spread of the virus."

The editorialists point out that infection with oncogenic HPV types goes beyond cervical cancer, as they are also a primary cause of anal cancer and contribute to a substantial proportion of penile, oropharyngeal, and tonsillar cancers, all of which are predominant in men.

"Women have shouldered responsibility for contraception since its inception," they write. "The goal to eradicate sexually transmitted carcinogenic viruses can be jointly carried by women and men and could be accomplished within a few decades."

Lead author of the latest study, Jorma Paavonen, MD, a professor of obstetrics and gynecology at the University of Helsinki, in Finland, agrees. "Vaccinating both girls and boys is important to produce so-called herd immunity, which protects the population as a whole and may ultimately lead to eradication of the high-risk oncogenic HPV types."

He added that there is an ongoing randomized phase 4 community trial in Finland that is evaluating the HPV vaccine in both sexes, and more than 30,000 participants have already been enrolled.

Latest Results

The latest results, from a 3-year follow-up of women participating in the Papilloma Trial Against Cancer in Young Adults (PATRICIA), show the vaccine to be highly immunogenic, generally well tolerated, and effective against HPV-16 or HPV-18 infections and associated precancerous lesions, the researchers note.

Efficacy against CIN2+ associated with HPV types 16 and 18 was 92.9% (96.1% CI, 79.9% – 98.3%) in the primary analysis and 98.1% (95% CI, 88.4% – 100%) in an additional analysis, in which probable causality to HPV type was assigned in lesions infected with multiple oncogenic types.
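
For context, vaccine efficacy in a trial like this is essentially 1 minus the ratio of the end-point rate in vaccinated participants to the rate in controls. A minimal sketch of that calculation, using invented case counts and person-years purely for illustration (none of these numbers come from PATRICIA):

```python
# Illustrative only: hypothetical case counts and person-years, not data from PATRICIA.

def vaccine_efficacy(cases_vaccine, py_vaccine, cases_control, py_control):
    """Vaccine efficacy = 1 - (event rate in vaccinees / event rate in controls)."""
    rate_vaccine = cases_vaccine / py_vaccine
    rate_control = cases_control / py_control
    return 1.0 - rate_vaccine / rate_control

# Hypothetical example: 4 CIN2+ cases over 20,000 person-years in the vaccine arm
# versus 55 cases over 20,000 person-years in the control arm.
ve = vaccine_efficacy(4, 20_000, 55, 20_000)
print(f"Vaccine efficacy: {ve:.1%}")  # ~92.7% with these made-up numbers
```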

The final analysis was event-driven, meaning that there were enough end points to show efficacy during this follow-up. "Also, the efficacy was even stronger when we used a CIN3+ end point, which is the immediate precursor of invasive cervical cancer," he told Medscape Oncology. "This and the Kaplan-Meier curves show that the efficacy gets stronger over time and does not wear off."

A total of 18,644 women between the ages of 15 and 25 years, residing in 14 countries, were included in PATRICIA. Participants were randomized to receive either the HPV vaccine or a control hepatitis-A vaccine. The analyses were conducted in several cohorts:

  • According-to-protocol cohort for efficacy (ATP-E), which consisted of women who met eligibility criteria, complied with the trial protocol, and received all 3 doses of study vaccine (vaccine=8093; control=8069).
  • Total vaccinated cohort (TVC), which included all women receiving at least 1 vaccine dose, regardless of their baseline HPV status; this represents the general population, including those who are sexually active (vaccine=9319, control=9325).
  • Total vaccinated cohort-naive (TVC-naive), consisting of women with no evidence of oncogenic HPV infection at baseline; this represents women before sexual debut (vaccine=5822; control=5819).

All of the participants received vaccinations at months 0, 1, and 6, and the mean follow-up was 34.9 months after the third dose. The primary-end-point analysis was conducted in the ATP-E cohort, in participants who were seronegative at month 0 and HPV DNA negative at months 0 and 6 for the HPV type considered in the analysis.

Efficacy Observed for Vaccine and Nonvaccine Oncogenic Types

At the final analysis, there were a total of 60 confirmed cases of CIN2+, of which 33 (55%) contained DNA from nonvaccine oncogenic HPV types in addition to HPV-16 or HPV-18. Within this group, 12 CIN3+ lesions containing HPV-16/18 DNA, including 3 cases of adenocarcinoma in situ, were detected. Only 2 of these cases were found in the vaccine group, while the other 10 were detected among the controls.

Vaccine efficacy against CIN2+, irrespective of HPV DNA in lesions, was 30.4% in the TVC and 70.2% in the TVC-naive groups. The researchers also noted that efficacy against CIN3+ was 33.4% in the TVC cohort and 87.0% in the TVC-naive cohort.

The efficacy against CIN2+ associated with 12 nonvaccine oncogenic types was 54.0% in the ATP-E group. Since several lesions were coinfected with HPV-16/18, a post hoc analysis was conducted excluding these lesions, showing an efficacy of 37.4% against CIN2+ lesions associated exclusively with nonvaccine types. These 2 analyses suggest that the true vaccine efficacy against CIN2+ associated with 12 nonvaccine oncogenic HPV types is between 37% and 54%, the authors note.

The authors also observed that the vaccine substantially reduced the number of colposcopy referrals and cervical-excision procedures in both the TVC and TVC-naive cohorts.

In general, the safety profile was generally similar to that of the control vaccine. "Neither this trial nor any of the other trials have shown any safety signals," said Dr. Paavonen. "All existing evidence shows that the prophylactic HPV vaccine is safe."

The study was funded by GlaxoSmithKline Biologicals. Several of the study authors have reported financial relationships with GlaxoSmithKline and/or Merck; the disclosures are listed in the paper. The editorialists declare no conflicts of interest.

Lancet. Published online July 7, 2009.

Source : http://www.medscape.com/viewarticle/705560?sssdmh=dm1.497244&src=nldne

Thursday, July 2, 2009

Implementation of a Pediatric Medical Emergency Team

Transition From a Traditional Code Team to a Medical Emergency Team and Categorization of Cardiopulmonary Arrests in a Children's Center

Hunt EA, Zimmer KP, Rinke ML, et al
Arch Pediatr Adolesc Med. 2008;162:117-122

Summary

Medical emergency teams (METs) have been implemented in adult hospital care in order to identify patients who are experiencing clinical deterioration before they reach the point of arrest. As such, these teams are often more multidisciplinary than traditional "code teams," and they are designed to empower care providers, including nurses, to get quick evaluation of any patient about whom they are worried.

This study evaluated historical data from one institution before and after implementation of a pediatric MET. The outcome of interest was the rate (per 1000 patient-days and per 1000 discharges) of cardiopulmonary or respiratory arrests before and after implementation of the pediatric MET.

Organizing the MET had several aspects that differed from previous code team approaches, including the fact that nurses were supported in their first-responder roles. Security guards and chaplains were on the team to handle family members so that nurses could be more involved in the MET evaluation and stabilization of the patients.

Perhaps the most significant change was the addition of a pediatric pharmacist to the MET. (There was no pharmacist on the code team.) Part of the implementation of the MET involved defining a set of criteria for which an MET could be called, and these ranged from "respiratory distress" to "worried family member," giving a lot of discretion to nursing when ordering an MET intervention. Education of all personnel in their new roles was also completed.

There were no differences in the number of patients admitted during the year pre-MET and the year post-MET implementation, and patient severity of illness was similar as well. The MET was activated more often than traditional code teams at a rate of 1.8 calls per 1000 patient-days compared with 1.1 calls per 1000 patient-days for code team activation.

The study authors found a 51% reduction in the rate of respiratory and cardiopulmonary arrest after MET implementation, but the difference was not statistically significant (95% confidence interval [CI] for the incidence rate ratio, 0.18-1.20). The reduction in the rate of respiratory arrest alone almost reached statistical significance (incidence rate ratio, 0.27; 95% CI, 0.05-1.01).
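
The comparisons above are incidence rate ratios: the arrest rate per patient-day after MET implementation divided by the rate before. A minimal sketch of that calculation with a crude log-normal confidence interval, using invented event counts and patient-day totals rather than the study's raw data:

```python
import math

# Illustrative only: hypothetical event counts and patient-days, not the study's raw data.

def incidence_rate_ratio(events_post, days_post, events_pre, days_pre, z=1.96):
    """Rate ratio (post-MET / pre-MET) with a crude log-normal 95% confidence interval."""
    irr = (events_post / days_post) / (events_pre / days_pre)
    se_log = math.sqrt(1 / events_post + 1 / events_pre)
    lower = irr * math.exp(-z * se_log)
    upper = irr * math.exp(z * se_log)
    return irr, lower, upper

# Hypothetical example: 6 arrests in 50,000 patient-days after implementation
# versus 13 arrests in 52,000 patient-days before.
irr, lower, upper = incidence_rate_ratio(6, 50_000, 13, 52_000)
print(f"IRR {irr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```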

The study authors concluded that implementation of the pediatric MET was associated with no change in rate of cardiopulmonary arrest but was associated with a reduced rate of respiratory arrest on the ward units.

Viewpoint

This article is very valuable for providing a road map of how to implement a change in emergency response in a children's hospital. However, the process also points out one of the biggest difficulties of quality improvement research. With such a multifaceted intervention (a change in the criteria and environment for activating the team, the addition of a pharmacist, and education of all providers in their new roles, to name just a few), it would be very difficult to determine which portion of the change provided the benefit. As more hospitals implement pediatric METs as a method to respond to emergencies, it will be interesting to see outcomes at other institutions.

Source : http://www.medscape.com/viewarticle/575965

CPR Chest Compression Depth Guidelines in Children May Need Revision

The chest compression depth recommended in cardiopulmonary resuscitation (CPR) guidelines for pediatric patients does not appear to be optimal and may be excessive, according to studies in the July issue of Pediatrics.

The studies used computed tomography reconstruction to estimate chest compression depths that would be optimal for infants and children up to 8 years old during cardiopulmonary resuscitation. Current guidelines in that age group call for compressions of one third to one half the external anterior-posterior chest depth.

Dr. Matthew Braga at Dartmouth-Hitchcock Medical Center in Lebanon, New Hampshire, and co-authors blame low rates of survival among children who experience cardiac arrest on unproven targets for pediatric chest compressions that are based on extrapolation from adults and animal models.

To provide direct experimental data, Dr. Braga's team examined previously performed pediatric chest CT scans to measure individual chest depths in 14 age groups from birth to 8 years.

Results show that "the current recommendations of one third to one half external anterior-posterior chest depth are not ideal and may not be attainable or safe for all children."

For example, a one-half chest compression in 3- to 12-month-olds would theoretically result in 25% having no residual internal depth, causing harm to structures being compressed. The authors estimate that the same would be true for 21% of 1- to 3-year-olds, and 8% of 3- to 8-year-olds.

According to Dr. Braga and associates, "Use of a constant chest compression depth target of 38 mm would be expected to be adequate for > 98% of 1 to 8-year-old children, with > 10 mm of residual chest depth."
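
As a rough illustration of how a fixed-depth target like this can be screened against anthropometric data, the sketch below takes a list of internal anterior-posterior chest depths (hypothetical values, not the CT measurements from either study) and reports what fraction of children would keep at least 10 mm of residual depth at a given compression depth:

```python
# Illustrative only: made-up internal anterior-posterior chest depths in millimetres,
# not the CT measurements from either study.
internal_ap_depths_mm = [45, 52, 58, 61, 64, 67, 70, 73, 76, 80]

def fraction_with_residual(depths_mm, compression_mm, min_residual_mm=10):
    """Fraction of children who would retain at least `min_residual_mm` of internal
    depth if every chest were compressed by `compression_mm`."""
    adequate = sum(1 for d in depths_mm if d - compression_mm >= min_residual_mm)
    return adequate / len(depths_mm)

print(fraction_with_residual(internal_ap_depths_mm, compression_mm=38))  # 0.9 with these values
```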

Dr. Matthew Huei-Ming Ma and associates at National Taiwan University Hospital, Taipei, take a similar tack, using chest CT scans of 36 infants and 38 children ages 1 to 8.

They observed no significant difference between chest compression depths measured at the lower half of the sternum and at the internipple line. Therefore, they maintain, "because guidelines should be modified and simplified for ease of use by either the layperson or health care provider, it is not necessary to provide two choices in the pediatric resuscitation guidelines in the future."

Dr. Ma's team also observed that compression depths calculated according to current pediatric guidelines were similar to, or even greater than, the compression depths recommended for adults.

"More scientific debate is needed on this issue for further revision of pediatric CPR guidelines," they conclude.

Pediatrics 2009;124:e69-e74.

Source : http://www.medscape.com/viewarticle/705045?sssdmh=dm1.492967&src=nldne

Thursday, June 25, 2009

Recession-Related Surge in Nursing Employment Just a Blip, Study Cautions

The worst economic recession in the post–World War II era has shed jobs across almost all industrial sectors, pushing the national unemployment rate close to 10%. Yet for one group in the slowing but still robust healthcare sector — hospital-based registered nurses (RNs) — the current economic downturn has led to a record employment spike, according to a study published online June 12 in Health Affairs.

However, this spike is only temporary, warns lead author Peter I. Buerhaus, PhD, RN, the Valere Potter Professor of Nursing at Vanderbilt University School of Nursing, Nashville, Tennessee. "We've eased the nursing shortage, but we haven't permanently ended it," Dr. Buerhaus told Medscape Nursing.

The history of such shortages, Dr. Buerhaus and the study coauthors write, is inversely related to the general health of the economy: RNs are in short supply during boom periods and are available to fill vacancies when the economy is spiraling down.

In 2001, 3 years after hospitals began reporting difficulty filling vacancies, RN shortages peaked. With vacancy rates reaching a national average of 13%, an estimated 126,000 full-time-equivalent (FTE) RN positions went unfilled, forcing "many hospitals to close nursing units and restrict operations."

The 2001 recession altered this trend. Faced with a bad economy and the prospect of reduced family income, nurses already in the workforce increased their hours, and those who had left it returned, in part to take advantage of the substantially higher RN wages that hospitals began offering in 2002. The exigencies of the recession, coupled with the lure of higher wages, worked like a magnet: During the next 2 years, hospital RN employment surged by 184,000 FTE RNs. "At the time, that was a world record — right off the charts," Dr. Buerhaus said.

But if hospital officials thought their nurse vacancy problems were solved, they were wrong. Once the economy recovered, the shortage problem reasserted itself. In fact, the annual growth in FTE RN employment between the economic boom years of 2004 and 2006 was −0.9%. It has taken this most recent recession, which some argue started as early as the final months of 2007, to reverse the nursing shortage problem yet again.

In 2007 and 2008, according to the study, hospital-based RN employment increased by an estimated 243,000 FTEs. As in the 2001 recession, bad economic times have pushed nurses back into the labor market, and for many of the same reasons as before. But the lure of higher wages is not among them; for the most part, said Dr. Buerhaus, hospitals did not increase RN wages in 2007 and 2008. That fact, he says, makes the dramatic surge in RN employment during this recession all the more surprising. "From our past studies, we knew the effect the recession would have. But we were completely stunned by the size of the increase. Looking back, there's simply no 2-year period of growth in the hospital employment sector that rivals this one."

For nurses fresh out of school, the influx of new hires has not always worked to their benefit. "Their ability to find the job of their dreams in the hospital down the street from where they live has probably changed," Cheryl Peterson, MSN, RN, director of Nursing Practice and Policy at the American Nurses Association, told Medscape Nursing. "We've also found that employers can be a little more selective these days, holding out for someone with more experience rather than hiring a recent graduate or someone with limited experience."

Despite the trend toward older, more experienced hires, however, younger nurses are by no means absent from the workforce. In 2008, for example, the number of FTE RNs aged 23 to 25 years (130,000) was the highest it has been in more than 2 decades, according to the study. In addition, in 2008 there was a large jump in the number of younger FTE nurses with children younger than 6 years compared with 2007, a phenomenon the authors say is related to families' efforts to boost their incomes during hard economic times. Overall, in 2008, employment of RNs younger than 35 years increased by a dramatic 74,000, with most ending up in hospital-based jobs.

Getting a Handle on Looming Shortages

Given the oddly cyclical nature of nurse employment, however, few if any in the nursing community are sanguine about the recent employment surge. "We can't be lulled into thinking that the problem of a shortage is over," said Ms. Peterson.

Similar to past shortages, Dr. Buerhaus said, future ones will be driven by the interaction of supply and demand. On the demand side, he and his coauthors lean heavily on projections outlined by the federal Health Resources and Services Administration (HRSA). Noting that "changing demographics constitute a key determinant of projected demand for FTE RNs," HRSA points to the "much greater per capita healthcare needs" of an aging baby boom generation, the leading edge of which will approach age 65 years starting around 2010.

Dr. Buerhaus and coauthors also consider something likely to drive demand that HRSA does not — the prospect that healthcare reform will expand coverage to more citizens, thereby placing even greater pressure on the nursing workforce.

On the supply side of the equation, the authors say, the waves of baby boomer RNs retiring during the next decade will be significant. So too will be the prospective size of the successive cohorts that will replace them. Will these cohorts be large enough to keep the workforce from shrinking, and yet too small "to meet the projected demand"? If so, the authors say, a much older RN workforce than ever before may be left to do the heavy lifting.

Action Plan for Policymakers

The authors conclude by proposing a series of action steps for policymakers. They want to strengthen the current workforce and, in particular, to "improve the ergonomic environment of the clinical workplace" for older nurses. They want to improve communication skills, especially for RNs educated in other countries — a group that has not only helped to fuel the current surge but also is likely to play a significant role in future supply scenarios. Perhaps most notably, they want to see steps taken to expand the numbers of 2 "underrepresented" groups in nursing — men and Hispanics.

Representatives of each group are sympathetic, although they cite challenges.

"We're up against the historical image of men as doctors and women as nurses," Demetrius J. Porsche, DNS, RN, dean of the Louisiana State University Health Center School of Nursing and president of the American Assembly for Men in Nursing, told Medscape Nursing. Among the barriers to full participation that Dr. Porsche sees are unsupportive families, school counselors who "don't understand that nursing is an autonomous profession, not just a handmaiden to doctors," and too few public images of men in the profession. Each year, Dr. Porsche explained, the American Assembly for Men in Nursing presents a series of awards aimed at enhancing the status of men in nursing, including one for the best workplace and another for the best nursing school/college.

"The push for men in nursing is a diversity issue," he said. "The profession should be open and welcoming not only to all races and ethnicities but to both genders."

Anyone recruiting Hispanics to nursing also faces barriers, said Norma Martinez-Rogers, PhD, RN, FAAN, associate clinical professor in the Department of Family Nursing at the University of Texas Health Science Center, San Antonio, and president of the National Association of Hispanic Nurses.

The biggest barrier, Dr. Martinez-Rogers told Medscape Nursing, is money. Despite some funding, she said, "many Hispanic students end up having to pay back big loans." Then there's the work issue, she added. Used to holding down part-time jobs to make ends meet before entering nursing school, too many Hispanic students try, at their peril, to duplicate that work schedule once enrolled. "Nursing school is all about the application of the content that you're learning, which is very time consuming," Dr. Martinez-Rogers said. "Students can hold down part-time jobs, but they risk having to repeat a course."

Hoping for more funding and support for what she characterizes as "not a brand-new problem," Dr. Martinez-Rogers has been talking to the Congressional Hispanic Caucus about renewed efforts to bring more Hispanics into nursing. One step would be to work with universities — her own included — that have the potential, because of their location, to enroll significant numbers of Hispanic nursing students. Once enrolled, she said, such students need to be mentored while in school and encouraged after they graduate. Her own university has what she described as a "student-driven" mentorship program; for its part, the National Association of Hispanic Nurses is working to develop its own national mentorship program.

Dr. Buerhaus thinks that expanding the capacity of educational programs — for Hispanics, men, and anyone else interested in becoming a nurse — is key. So, too, he said, is turning out the "right" nurses: "Beyond all the rhetoric, we need the future nurse to be really, really sharp in the areas of both quality and safety."

The ANA's Cheryl Peterson agrees, but added that nursing education "can't change on a dime" and that employers must also do their part by giving the freshly minted nurse the necessary "space to learn."

Source : http://www.medscape.com/viewarticle/704668?sssdmh=dm1.488649&src=nldne

Saturday, June 20, 2009

Glove Perforation May Increase the Risk for Surgical Site Infection

Surgical glove perforation increases the risk for surgical site infection (SSI) unless antimicrobial prophylaxis is used, according to the results of a prospective observational cohort study reported in the June issue of the Archives of Surgery.

"All surgical staff members wear sterile gloves as a protective barrier to prevent hand-to-wound contamination during operations," write Heidi Misteli, MD, from University Hospital Basel in Basel, Switzerland, and colleagues. "When gloves are perforated, the barrier breaks down and germs are transferred. With the growing awareness among operating room staff of their risk of exposure to disease from patients, primarily human immunodeficiency virus and hepatitis B virus, gloves have begun to be regarded as a requirement for their own protection."

This study took place at University Hospital Basel, in which approximately 28,000 surgical procedures are performed each year. For this analysis, the study sample was a consecutive series of 4147 surgical procedures performed in the Visceral Surgery, Vascular Surgery, and Traumatology divisions of the Department of General Surgery. The main endpoint of the study was rate of SSI, as defined by the Centers for Disease Control and Prevention (CDC), and the main predictor variable was compromised asepsis because of glove perforation.

Of 4147 procedures performed, 188 (4.5%) overall were associated with SSI. Compared with procedures in which asepsis was maintained, procedures in which gloves were perforated had a higher likelihood of SSI, based on univariate logistic regression analysis (odds ratio [OR], 2.0; 95% confidence interval [CI], 1.4 - 2.8; P < .001).
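
That unadjusted odds ratio can be reproduced from the raw counts reported in the Study Highlights below (51 SSIs among 677 procedures with glove perforation versus 137 among 3470 procedures with asepsis maintained). A minimal sketch of the calculation, with a standard log-based 95% confidence interval:

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for the exposed group (a events, b non-events) versus the
    unexposed group (c events, d non-events), with a log-based 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = or_ * math.exp(-z * se_log)
    upper = or_ * math.exp(z * se_log)
    return or_, lower, upper

# Glove perforation: 51 SSIs and 677 - 51 = 626 procedures without SSI.
# Asepsis maintained: 137 SSIs and 3470 - 137 = 3333 procedures without SSI.
or_, lower, upper = odds_ratio(51, 626, 137, 3333)
print(f"OR {or_:.1f} (95% CI {lower:.1f}-{upper:.1f})")  # ~2.0 (1.4-2.8)
```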

The increase in the risk for SSI with glove perforation was different when surgical antimicrobial prophylaxis was or was not used (multivariate logistic regression analyses test for effect modification, P = .005). When antimicrobial prophylaxis was not used, the odds of SSI were significantly higher for glove perforation vs the group in which asepsis was maintained (adjusted OR, 4.2; 95% CI, 1.7 - 10.8; P = .003). In contrast, the likelihood of SSI was not significantly higher for procedures in which gloves were punctured when surgical antimicrobial prophylaxis was used (adjusted OR, 1.3; 95% CI, 0.9 - 1.9; P = .26).

"Without surgical antimicrobial prophylaxis, glove perforation increases the risk of SSI," the study authors write. "To our knowledge, this is the first study to explore the correlation between SSI and glove leakage in a large series of surgical procedures."

Limitations of this study include 22.1% missing data on glove perforation, the prospective observational rather than randomized controlled design, possible residual or unknown confounding, and the use of nonvalidated techniques to detect glove leakage. In addition, because the study was performed from 2000 to 2001, there have since been significant changes in the relevant circulating bacteria.

"Efforts to decrease the frequency of glove perforation, such as double gloving and the routine changing of gloves during lengthy surgical procedures, are therefore encouraged," the study authors conclude. "The present results support an extended indication of surgical antimicrobial prophylaxis to all clean procedures in the absence of strict precautions taken to prevent glove perforation. The advantages of this SSI prevention strategy, however, must be balanced against the costs and adverse effects of the prophylactic antimicrobials, such as drug reactions or increased bacterial resistance."

In an accompanying invited critique, Edward E. Cornwell III, MD, from Howard University Hospital in Washington, DC, notes additional study limitations.

"I do not believe the recommendation to extend antibiotic prophylaxis guidelines is justified," Dr. Cornwell writes. "Although the risk of SSI (with vs without glove perforation) among patients without antibiotic prophylaxis was significant on multivariate analysis, the data in this and other studies cited by the authors much more strongly support the measures suggested for lowering the risk of glove perforation. These measures would be substantially cheaper, more promising for efficacy, and less likely to produce allergies or adverse effects than giving prophylactic antibiotics to all patients."

The Department of General Surgery, University Hospital Basel, and the Freiwillige Akademische Gesellschaft Basel funded this study. The study authors and Dr. Cornwell have disclosed no relevant financial relationships.

Arch Surg. 2009;144:553-558.

Clinical Context

Despite the precautions deployed to maintain asepsis during surgery, the risk for transfer of pathogens remains. The transfer of skin-borne pathogens from staff hands is especially prevalent. All surgical staff members wear sterile gloves as a protective barrier to prevent hand-to-wound contamination during operations. However, the barrier breaks down as soon as the gloves are perforated. Factors leading to an increased risk for perforation include duration of operating time (significantly after 2 hours); improperly fit gloves; and puncture by needles, spiked bone fragments, or sharp surfaces on complex instruments. The impact of glove perforation on the risk for SSI is unknown.

The aim of this study was to determine whether clinically apparent surgical glove perforation increases the risk for SSI.

Study Highlights

  • From January 1, 2000, through December 31, 2001, a prospective observational cohort study was performed at the University Hospital Basel to evaluate the incidence of SSI in association with surgical glove perforation.
  • Consecutive series of 4147 surgical procedures performed in the Visceral Surgery, Vascular Surgery, and Traumatology divisions of the Department of General Surgery were enrolled.
  • Operations requiring no incision and procedures classified as wound class 4 (dirty infected) according to the CDC criteria were excluded from the study.
  • The outcome of interest was SSI occurrence as assessed according to the CDC standards.
  • All incidents of SSI were validated by a board-certified infectious disease specialist on the basis of a comprehensive review of the patient history, initial microbiology results, and outcome at least 30 days after surgery when no implants were involved or more than 1 year after surgery if an implant was in place.
  • The primary predictor variable was compromised asepsis because of glove perforation.
  • Prophylactic antimicrobial administration was given according to the CDC guidelines. Patients received antimicrobial prophylaxis if they underwent surgery classified as CDC wound classes 3 (contaminated), 2 (clean contaminated), and 1 (clean) involving a nonabsorbable implant or at the discretion of the surgeon.
  • Results of the study demonstrated that the overall SSI rate was 4.5% (188/4147 procedures).
  • SSI was classified as the following: superficial (n = 56), deep (n = 62), and organ/space (n = 70).
  • Asepsis was compromised by glove leakage in 677 interventions (16.3%); 51 of these procedures (7.5%) resulted in SSI, compared with 137 SSIs in the 3470 procedures (3.9%) in which asepsis was not breached.
  • Univariate logistic regression analysis showed a higher likelihood of SSI in procedures in which gloves were perforated vs interventions with maintained asepsis (OR, 2.0; 95% CI, 1.4 - 2.8; P < .001).
  • However, multivariate logistic regression analyses showed that the increase in SSI risk with perforated gloves was different for procedures with surgical antimicrobial prophylaxis vs those without surgical antimicrobial prophylaxis (test for effect modification, P = .005).
  • Data were further analyzed for the risk association separately for glove perforations in surgeries with and without antimicrobial prophylaxis.
  • Without antimicrobial prophylaxis, glove perforation entailed significantly higher odds of SSI vs the reference group with no breach of asepsis (adjusted OR, 4.2; 95% CI, 1.7 - 10.8; P = .003).
  • On the contrary, when surgical antimicrobial prophylaxis was applied, the likelihood of SSI was not significantly higher for operations in which gloves were punctured (adjusted OR, 1.3; 95% CI, 0.9 - 1.9; P = .26).
  • Limitations of this study were that 22.1% of the information on glove perforation was missing, the study was a prospective observational study rather than a randomized controlled trial, nonvalidated techniques were used to detect glove leakage, and the results may be less applicable today because of changes in the relevant circulating bacteria since the study was conducted in 2000 to 2001.

Clinical Implications

  • Factors that may lead to glove perforation include puncture by needles, spiked bone fragments, or sharp surfaces on complex instruments as well as duration of operating time (> 2 hours) and gloves that do not fit properly.
  • In the absence of surgical antimicrobial prophylaxis, glove perforation increased the risk for SSI.
Source : http://cme.medscape.com/viewarticle/704548?sssdmh=dm1.487338&src=nldne

Friday, June 19, 2009

Cardiac Calcifications

Introduction

Radiologic detection of calcifications within the heart is quite common. The amount of coronary artery calcification correlates with the severity of coronary artery disease (CAD). Calcification of the aortic or mitral valve may indicate hemodynamically significant valvular stenosis. Myocardial calcification is a sign of prior infarction, while pericardial calcification is strongly associated with constrictive pericarditis. Therefore, detecting and recognizing calcification related to the heart on chest radiography and other imaging modalities such as fluoroscopy, CT, and echocardiography may have important clinical implications.

In patients with diabetes mellitus, determining the presence of coronary calcifications identifies those at risk for future myocardial infarction and coronary artery disease, and future events could largely be excluded if no coronary calcifications were present.1

In an asymptomatic population, determination of the presence of coronary calcifications identified patients at risk for future myocardial infarction and coronary artery disease independent of concomitant risk factors. In patients without coronary calcifications, future cardiovascular events could be excluded.2

Pericardial Calcifications

Calcification of the pericardium usually is preceded by a prior episode of pericarditis or trauma. Infectious etiologies for pericarditis include viral agents (eg, coxsackievirus, influenza A, influenza B), tuberculosis, and histoplasmosis.


Incidence


Of patients with pericardial calcification, 50-70% have constrictive pericarditis. Extensive calcification may be present without signs or symptoms of pericardial constriction.


Features

On chest radiographs, pericardial calcification appears as curvilinear calcification usually affecting the right side of the heart (Images 3-4). This is often visualized better on lateral chest radiographs than on frontal views. Calcifications associated with tuberculous pericarditis present as thick, amorphous calcifications along the atrioventricular groove. This pattern may be observed less commonly with other forms of pericarditis as well.

CT is the best technique to detect pericardial calcification; however, overpenetrated films, conventional tomography, fluoroscopy, and MRI may be helpful.

Myocardial Calcifications

Myocardial calcification is more common in males than in females and usually occurs in patients who have sustained sizable infarcts and have survived more than 6 years after infarction. Most of these patients have a dominant right coronary artery, since this favors longer survival after infarction in the region of the left anterior descending coronary artery.


Incidence


Approximately 8% of patients who sustain a large myocardial infarction develop myocardial calcification. In these patients, infarcts usually are large and most frequently involve the anterolateral wall of the left ventricle (LV). LV aneurysm usually is present.


Features


Myocardial calcification is identified as thin and curvilinear and usually appears toward the apex of the LV. The associated contour abnormality from the aneurysm is frequently noted. Rarely, the calcification can appear spherical or platelike.

Left Atrial Calcifications

Detection of left atrial wall calcification has significant clinical implications. Most of these patients have congestive heart failure and atrial fibrillation from long-standing mitral valve disease. Mural thrombi secondary to atrial fibrillation are a frequent source of systemic and pulmonary emboli. Possible complications during cardiac surgery include dislodgement of thrombi, which can result in cerebral embolism, and uncontrollable hemorrhage if the left atrium (LA) is entered through the calcified region, because of LA wall rigidity. LA calcification usually is secondary to endocarditis resulting from rheumatic heart disease, and the amount of calcification is often related to the duration of untreated disease.3


Features


LA calcification may be in the endocardial or subendocardial layer or within a thrombus. Calcification is usually thin and curvilinear (Images 8-9). Three patterns of calcification have been identified.

Type A: Calcification is confined to the LA appendage; the underlying lesion is often mitral stenosis, and this type of calcification almost always is associated with thrombus in the appendage.

Type B: The free wall of the LA and mitral valve are calcified, although the valve calcification is not always appreciated from chest radiographs. This pattern indicates advanced mitral stenosis.

Type C: A small area of calcification is confined to the posterior wall of the LA. This results from a jet lesion caused by mitral regurgitation and is termed a MacCallum patch.

Valvular Calcifications

Valvular calcification identified radiographically suggests the presence of a hemodynamically significant stenosis. Dominant valvular insufficiency is not associated with radiographic depiction of calcification, except in patients with calcified stenotic valves secondarily destroyed by endocarditis. Aortic valve calcification is detected most frequently.


Aortic valve calcification


In patients younger than 40 years, a calcified aortic valve usually indicates marked aortic stenosis secondary to a congenital bicuspid aortic valve. In these patients, one cusp of the valve is larger than the other; therefore, the valve cannot function properly, resulting in prolapse, fibrosis, calcification, and stenosis. The average age at which calcification first is detected is 28 years. More than 90% of patients with congenital bicuspid valve have calcification by age 40 years.

In older patients, calcification of the aortic valve may be secondary to aortic sclerosis with degeneration of normal valve leaflets and may be associated with hemodynamically significant aortic stenosis. Aortic valve disease associated with rheumatic heart disease frequently is associated with mitral valve disease. The average age at which aortic valvular calcification first is detected is 47 years in patients with a history of rheumatic fever and carditis. However, aortic valvular calcification is infrequently seen in this entity, and fewer than 10% of patients without congenital bicuspid valve have calcification from age 40-65 years.


Features


* In bicuspid aortic valves, calcification may be nodular, semilunar, or mushroom shaped. A dilated ascending aorta often is seen. A thick, irregular, semilunar ring pattern with a central bar or knob is typical of stenotic bicuspid valves and results from calcification of the valve ring and the dividing ridge of the 2 cusps or the conjoined leaflet (Image 10). Rarely, 3 leaflet valves mimic this pattern because of fusion of 2 of the 3 leaflets. However, none of these features has a high sensitivity or specificity in predicting valvular anatomy.

* In patients with aortic sclerosis, calcification usually is nodular. Diffuse aortic dilatation can be observed. Heart size may be normal; however, LV dilation can occur with decompensation. Nodular calcification of the valve also is observed in patients with rheumatic aortic disease. The ascending aorta may be dilated, and signs of rheumatic mitral valve disease may be present.


Mitral valve calcification


Although mitral leaflet calcification is commonly a sequela of rheumatic mitral valve disease, its appearance may be very subtle, not readily apparent on plain films or echocardiograms.4


Features


* A nodular or amorphous pattern of calcification is observed, and signs of rheumatic mitral stenosis frequently are present. These include enlargement of the LA, especially the LA appendage, and pulmonary venous hypertension with cephalization and interstitial edema seen as Kerley B lines.
* Findings that indicate pulmonary arterial hypertension, such as enlarged central pulmonary arteries, can occur in patients with long-standing disease. Detection of mitral valve calcification from chest radiographs is uncommon; echocardiographic detection is far more common. Detection of the calcification has surgical implications, since in such instances valve replacement is preferred to commissurotomy.


Pulmonary valve calcification


Calcification of the pulmonary valve occurs rarely in patients with pulmonary valvular stenosis. If valve calcification is identified radiologically, the gradient across the valve often exceeds 80 mm Hg; valvular calcification also may be observed in patients with long-standing, severe pulmonary hypertension.

Tricuspid valve calcification

Tricuspid valve calcification is rare and most frequently is caused by rheumatic heart disease; however, it has been associated with septal defects, congenital tricuspid valve defects, and infective endocarditis.

Annular Calcifications

Myocardial fibers attach to the annulus or fibrous skeleton of the heart. The cardiac valves are suspended from the annulus.

The mitral annulus commonly calcifies. Annular calcification is a degenerative process seen most often in individuals older than 40 years and is especially common in women. Such calcification is not clinically important unless it is massive, in which case it can cause mitral insufficiency, atrial fibrillation (in presence of dilated LA), conduction defects, and infrequently mitral stenosis.5


Features


A, J, U, or reverse C-shaped bandlike calcification is observed involving the mitral annulus.6 Calcification can appear O-shaped if the anterior leaflet also is involved. Calcification appears bandlike and of uniform radiopacity compared to the nodular and more irregular opacity of mitral valve calcification (Image 11). Sensitivity of detecting mitral annular calcification is substantially higher with echocardiography. Some recent data suggest that this calcification is a form of atherosclerosis and can be used as a marker for ischemic heart disease.

Aortic annular calcification usually is associated with a calcified aortic valve and may extend superiorly into the ascending aorta or inferiorly into the interventricular septum. This type of calcification is often dystrophic, commonly seen in older individuals, and often clinically insignificant.

Tricuspid annular calcification is rare and usually is associated with long-standing and severe pulmonary hypertension.

Vascular Calcifications

Calcification involving the aortic arch occurs in more than 25% of normal patients aged 61-70 years. The ascending aorta usually is spared; the arch and distal aorta are most commonly involved. Patients with hyperlipidemia and diabetes are predisposed to calcific atherosclerosis at a younger age. Calcification in these patients is observed as a curvilinear density along the ascending aorta and the arch.7

Syphilitic aortitis, an inflammatory aortitis involving the ascending aorta, sinuses of Valsalva and the aortic valve, is observed most commonly in patients older than 50 years. Syphilitic aortitis is associated with aortic insufficiency, ascending aortic aneurysms, and a positive serologic test for syphilis. In these patients, angina resulting from occlusion of the ostia of the coronary vessels also can be present. Calcification occurs in a linear pattern along the ascending aorta.

On gross specimen examination, the aorta has been described as revealing a "tree-bark" appearance (Image 12). Focal ascending aortic calcifications also may be observed in patients with Marfan syndrome and focal aortic dissection. Other causes of ascending aortic calcification include false aneurysm and chronic aortic dissection.

Calcification of the ductus arteriosus in adults may be found in patients with ductus patency as well as in those with occluded ductus. In children, calcification of the ligamentum arteriosum indicates a closed ductus. On a frontal chest radiograph, calcification is observed as a curvilinear or nodular density between the aorta and pulmonary trunk.

Coronary Artery Calcifications

Coronary artery disease (CAD) is the leading cause of death in the United States. According to one American Heart Association estimate, at least 1 million Americans suffer angina or a myocardial infarction each year. Of patients who suffer myocardial infarctions, 30% are younger than 65 years and 4% are younger than 45 years.

Evaluation of patients with CAD includes patient history (including review of symptoms and significant coronary risk factors), physical examination, and evaluation of a resting ECG. In patients with symptoms suggestive of CAD, additional investigation, including the physiologic response to stress, may be indicated. This may include stress ECG, stress echocardiography or MRI, or radionuclide perfusion imaging. Coronary calcification is a recognized marker for atherosclerotic CAD. Calcification can be identified on plain radiographs, fluoroscopy, and CT. More reproducible CT evaluation, including quantitation of coronary calcium, may be performed using helical and electron beam CT (EBCT).

Physicians are increasingly using CT detection of calcification to detect subclinical CAD, which may result in early initiation of diet and drug therapy.

Imaging of coronary calcification

Numerous modalities exist for identifying coronary calcification, including plain radiography, fluoroscopy, intravascular ultrasound, MRI, echocardiography, and conventional, helical, and electron-beam CT (EBCT).

Plain radiographs have poor sensitivity for detection of coronary calcification and have a reported accuracy as low as 42% (Image 15).

Fluoroscopy was the most frequently used modality in detecting coronary artery calcification before the advent of CT. The ability of fluoroscopy to detect small, calcified plaques is poor. In a recent study, only 52% of calcific deposits observed on EBCT were identified fluoroscopically. In addition, fluoroscopy is operator dependent, and certain patient characteristics (eg, body habitus, overlying anatomic structures, calcification in overlying anatomic regions) can compromise fluoroscopic examination.8

CT is highly sensitive for detecting calcification. In a study using calcification on CT as a marker of significant angiographic stenosis, sensitivities of 16-78% were reported. Reported specificities were 78-100%, and positive predictive values were 83-100%, suggesting that significant CAD is likely when coronary calcification is observed on CT. Conventional CT demonstrates calcification in 50% more vessels than fluoroscopy does in patients with angiographically proven stenosis. However, conventional CT has a slower scan time and is more prone to artifacts from cardiac and respiratory motion and volume averaging than helical or EBCT.9,10


Electron-beam CT


EBCT minimizes motion artifacts, since cardiac-gated imaging can be triggered by the R wave of the cardiac cycle.11 Imaging can be performed in diastole, minimizing cardiac motion. Typically, 20 contiguous, 100-millisecond, 3-mm thick sections are obtained during 1 or 2 breath-holds. Coronary calcification is observed as a bright white area along the course of coronary vessels.


Helical CT


Scans are performed with acquisition times ranging from approximately 0.5 second down to 250 milliseconds; the faster acquisitions are possible with the newer multidetector scanners. Calcific deposits are identified as bright white areas along the course of the coronary arteries (Image 16).

Coronary calcifications detected on EBCT or helical CT can be quantified, and a total calcification score can be calculated. In this scheme, a pixel threshold of +130 Hounsfield units (HU) (+90 HU for some helical scanners) covering an area greater than 1.0 mm² is often used to define a coronary artery lesion. Regions of interest are placed around each area of calcification. Once a region of interest is placed, the scanner software displays the peak attenuation of the calcification in HU and the area of the calcified region in square millimeters. A volume score or an Agatston score is then displayed: the volume score reflects the area of the lesion, whereas the Agatston score is weighted to account for the attenuation of the pixels as well as the area.

In the Agatston scoring system, lesions of +130-200 HU are multiplied by a factor of 1, lesions of +201-300 HU by a factor of 2, lesions of +301-400 HU by a factor of 3, and lesions greater than +400 HU by a factor of 4. The sum of the individual lesion scores equals the score for that artery, and the sum of all lesion scores equals the total calcification score.
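To make the arithmetic concrete, the following is a minimal sketch, in Python, of the Agatston-style weighting described above. The lesion measurements and function names are purely illustrative; in practice the scoring is performed by the scanner software on calibrated CT images using the +130 HU pixel threshold (about +90 HU on some helical scanners) and the minimum lesion area noted earlier.

```python
# Minimal sketch of Agatston-style scoring as described above.
# Lesion data are hypothetical; real scoring runs on calibrated CT images.

def agatston_weight(peak_hu):
    """Density weighting factor based on a lesion's peak attenuation (HU)."""
    if peak_hu > 400:
        return 4
    if peak_hu > 300:
        return 3
    if peak_hu > 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the +130 HU threshold: not counted as calcification


def agatston_score(lesions, min_area_mm2=1.0):
    """Sum of (lesion area in mm^2) x (density weight) over qualifying lesions."""
    return sum(area * agatston_weight(peak_hu)
               for area, peak_hu in lesions
               if area > min_area_mm2)


# Hypothetical lesions in one artery: (area in mm^2, peak attenuation in HU)
lesions = [(4.2, 180), (2.5, 310), (1.6, 450)]
print(agatston_score(lesions))  # 4.2*1 + 2.5*3 + 1.6*4 = 18.1
```

Summing the per-artery scores in this way gives the total calcification score referred to in the studies below.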

In one study, a total calcification score of 300 had a sensitivity of 74% and a specificity of 81% in detecting obstructive CAD. The negative predictive value of a zero calcification score was 98%. In another study, sensitivity for detecting calcific deposits in patients with angiographically significant stenosis was 100%, and specificity was 47%. In the same study, 8 patients without calcification showed no angiographic evidence of CAD, while 28 patients with calcification showed mild or moderate CAD.

However, despite the high sensitivity of EBCT, calcification scores do not always predict significant stenosis at the site of calcification. In another study, EBCT was compared with coronary angiography; only 1 patient with stenosis greater than 50% on angiography did not demonstrate coronary calcification on EBCT. Thus, absence of calcification appears to be a good predictor of the absence of significant luminal stenosis. However, absence of calcification does not always indicate the absence of atherosclerotic plaque.

A multicenter study reviewed cardiac event data in 501 mostly symptomatic patients with CAD who underwent both EBCT and coronary angiography. In this group, 1.8% of patients died and 1.2% had nonfatal myocardial infarctions during a mean follow-up period of 31 months. A calcification score of 100 or more was highly predictive in separating patients who had cardiac events from those who did not.

Conclusion


The amount of coronary calcification relates to the extent of atherosclerosis, although the relationship between arterial calcification and the probability of plaque rupture is unknown. A zero calcification score is a good predictor of absence of significant CAD. Detecting extensive coronary calcification on CT appears to be a marker of significant atherosclerotic burden and serves as an indication for a more aggressive evaluation of coronary risk factors and an early institution of dietary and/or drug therapy. However, the full implications of coronary calcification detection on CT and the role of this modality must await the results of ongoing investigations.


Keywords

cardiac calcifications, coronary artery calcification, coronary artery disease, aortic valve calcification, mitral valve calcification, valvular stenosis, myocardial calcification, pericardial calcification, constrictive pericarditis.

Source : http://emedicine.medscape.com/article/352054-overview?src=emed_whatnew_nl_0#calcifications

Thursday, June 18, 2009

Can Nitroglycerin Be Given to a Patient Who Has Taken Sildenafil?

Question

When a male patient who had taken sildenafil presents with an acute coronary syndrome (ACS), how real is the risk for serious hypotension if nitroglycerin (NTG) is titrated intravenously (IV)?

Response from Joanna Pangilinan, PharmD, BCOP
Pharmacist, Comprehensive Cancer Center, University of Michigan Health System, Ann Arbor, Michigan

The phosphodiesterase-5 (PDE5) inhibitor sildenafil is approved by the US Food and Drug Administration for the treatment of erectile dysfunction.[1] Sildenafil is contraindicated in combination with nitrates due to the risk for a pharmacodynamic drug interaction that can result in severe hypotension[1] and death.[2]

Men with coronary artery disease (CAD), a condition with increased prevalence of erectile dysfunction, may in certain medical situations be candidates for nitrates. Some patients with CAD could experience an ACS within 24-48 hours of taking a PDE5 inhibitor.[3]

In order to study this drug-drug interaction, Parker and colleagues[4] performed a randomized, double-blind, crossover trial to evaluate the effect of IV NTG in 34 men with stable CAD who received sildenafil 100 mg or placebo. Telemetry monitoring was used to assess supine blood pressure and heart rate at baseline and every 3 minutes during the assessment periods, which were 15 minutes prior to sildenafil/placebo administration, 15 minutes prior to NTG initiation, and 9 minutes after each change in NTG dose level (maximum 160 µg/min).

Sildenafil administration resulted in a mean ± standard deviation maximum decrease in systolic/diastolic blood pressure of 12 ± 12/8 ± 8 mm Hg compared with 5 ± 11/4 ± 7 mm Hg after placebo administration. Heart rate change after sildenafil was 2 ± 3 beats per minute and -1 ± 4 beats per minute for placebo. The NTG dose (median maximum) was 80 µg/min (range 0-160) for the sildenafil phase and 160 µg/min (range 20-160) for the placebo phase. The maximum rate of 160 µg/min was tolerated by 8 (25%) men during the sildenafil phase vs 19 (59%) during the placebo phase (P = .0008). Hypotension occurred in significantly more patients in the sildenafil group. The authors recommend caution when extrapolating these results to other subgroups of patients, including those with ACS.

Because treatment of ACS includes a range of recommended therapeutic interventions, nitrates may not be required for a patient who develops an ACS 24-48 hours after administration of a PDE5 inhibitor.[3] Some suggest that recent sildenafil administration may not be absolutely contraindicated in patients who require IV NTG, as long as blood pressure is monitored appropriately.[3]

The American College of Cardiology (ACC) and American Heart Association (AHA) joint guidelines for the management of patients with unstable angina/non-ST-elevation myocardial infarction recommend that NTG or nitrates not be given to these patients within 24 hours of sildenafil.[5] Nitrates should also be avoided after use of other PDE5 inhibitors.[5,6] An ACC/AHA expert consensus document suggests that nitrates may be considered 24 hours after sildenafil dosing. Further delay may be necessary in the patient who might have a prolonged elimination half-life of sildenafil due to renal or hepatic dysfunction or coadministration of a cytochrome P450 3A4 inhibitor. If NTG is initiated in such circumstances, caution and careful monitoring are necessary.[7]

In conclusion, administration of any form of nitrate to a patient who has taken sildenafil poses a real risk for serious hypotension. Coadministration is contraindicated and should be avoided.

References

  1. Viagra® [package insert]. New York, NY: Pfizer Inc; 2008.
  2. US Food and Drug Administration. Center for Drug Evaluation and Research. Postmarketing safety of sildenafil citrate (Viagra). Available at: http://www.fda.gov/cder/consumerinfo/viagra/safety3.htm. Accessed April 2, 2009.
  3. Werns SW. Are nitrates safe in patients who use sildenafil? Maybe. Crit Care Med. 2007;35:1988-1990.
  4. Parker JD, Bart BA, Webb DJ, et al. Safety of intravenous nitroglycerin after administration of sildenafil citrate to men with coronary artery disease: a double-blind, placebo-controlled, randomized, crossover trial. Crit Care Med. 2007;35:1863-1868.
  5. Anderson JL, Adams CD, Antman EM, et al. ACC/AHA 2007 guidelines for the management of patients with unstable angina/non-ST-elevation myocardial infarction: executive summary. A report of the American College of Cardiology/American Heart Association task force on practice guidelines (Writing Committee to revise the 2002 guidelines for the management of patients with unstable angina/non-ST-elevation myocardial infarction). Circulation. 2007;116:803-877.
  6. Kostis JB, Jackson G, Rosen R, et al. Sexual dysfunction and cardiac risk (the Second Princeton Consensus Conference). Am J Cardiol. 2005;96:313-321.
  7. Cheitlin MD, Hutter AM, Brindis RG, et al. Use of sildenafil (Viagra) in patients with cardiovascular disease. J Am Coll Cardiol. 1999;33:273-282.
Source : http://www.medscape.com/viewarticle/703602?src=mp&spon=34&uac=133298AG

Saturday, June 13, 2009

ADA 2009: Expert Committee Recommends Use of Hemoglobin A1C for Diagnosis of Diabetes

The American Diabetes Association (ADA), the International Diabetes Federation (IDF), and the European Association for the Study of Diabetes (EASD) have joined forces to recommend the use of the hemoglobin A1C assay for the diagnosis of diabetes.

The international expert committee's recommendations were announced here on Friday during the opening hours of the ADA's 69th Scientific Sessions and released simultaneously online in the July issue of Diabetes Care.

"This is the first major departure in 30 years in diabetes diagnosis," committee chairman David M. Nathan, MD, director of the Diabetes Center at Massachusetts General Hospital and professor of medicine at Harvard Medical School in Boston, declared in presenting the committee's findings.

"A1C values vary less than FPG [fasting plasma glucose] values and the assay for A1C has technical advantages compared with the glucose assay," Dr. Nathan said. A1C gives a picture of the average blood glucose level over the preceding 2 to 3 months, he added.

"A1C has numerous advantages over plasma glucose measurement," Dr. Nathan continued. "It's a more stable chemical moiety.... It's more convenient. The patient doesn't need to fast, and measuring A1C is more convenient and easier for patients who will no longer be required to perform a fasting or oral glucose tolerance test.... And it is correlated tightly with the risk of developing retinopathy."

A disadvantage is the cost. "It is more expensive," Dr. Nathan acknowledged. However, cost analyses have not been done, "...and costs are not the same as charges [to the patient]."

The committee has determined that an A1C value of 6.5% or greater should be used for the diagnosis of diabetes.

This cut-point, Dr. Nathan said, "is where risk of retinopathy really starts to go up."

He cautioned that there is no hard line between diabetes and normoglycemia, however, "...an A1C level of 6.5% is sufficiently sensitive and specific to identify people who have diabetes."
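As a purely illustrative sketch of the proposed cut-point, and assuming nothing beyond the 6.5% threshold quoted above, the comparison reduces to a single inequality:

```python
# Illustrative only: flags an A1C value at or above the committee's proposed
# diagnostic cut-point of 6.5%. The names are hypothetical, and no other
# cut-offs from the report are assumed here.

DIABETES_A1C_CUTPOINT = 6.5  # percent

def meets_diabetes_cutpoint(a1c_percent):
    return a1c_percent >= DIABETES_A1C_CUTPOINT

for a1c in (5.8, 6.4, 6.5, 7.1):
    print(a1c, meets_diabetes_cutpoint(a1c))
```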

"We support the conclusion of the committee, that this is an appropriate way to diagnose diabetes," stated Paul Robertson, MD, president of medicine and science at the ADA and professor of medicine at the University of Washington in Seattle.

"Now, we have to refer the committee's findings to practice groups for review of the implications and for recommendations," Dr. Robertson told Medscape Diabetes & Endocrinology after the committee's presentation.

"We purposely avoided using estimated average glucose, or EAG, as this is just a way to convert the A1C into glucose levels.... And one thing we want to try to get away from is the term prediabetes," Dr. Nathan said. "It suggests that people with it will go on to get diabetes, but that is not the case."

"We don't know if we will be diagnosing more patients with diabetes or less, with AIC," Dr. Nathan commented. Cut-off values or practice guidelines have not been established. More study needs to be done first, but "physicians should not mix and match A1C and blood glucose levels. They should stick with one in reviewing a patient's history," Dr. Nathan asserted.

"There is no gold standard assay," said session moderator Richard Kahn, PhD, chief medical and scientific officer of the ADA, which is headquartered in Alexandria, Virginia. "All of these tests measure different things. They all have value. But A1C is the best test to assess risk of retinopathy."

"We [the ADA] are not issuing a position statement at this time," Dr. Robertson stressed when speaking with Medscape Diabetes & Endocrinology. "It is too soon to write a position paper yet. We need to know what we are getting into first."

"Some parts of the world are not going to be able to use this," Dr. Robertson added. "It may be too expensive to use in the developing world. Some of these countries have severe chronic anemia, hemolytic anemia, and so on, where we will have to fall back on traditional tests. We are being very cognizant of the international implications." A1C assays are inaccurate in cases of severely low hemoglobin levels.

"We don't think physicians will have a hard time adopting the test...a lot of them are doing it already. We think it will only take a couple of years to be adopted widely into clinical practice," Dr. Kahn told Medscape Diabetes & Endocrinology. "Physicians won't be shocked by this report, but patients — and insurance companies — might be. There are wider social issues that haven't been looked at yet."

Source : http://www.medscape.com/viewarticle/704021?src=mpnews&spon=34&uac=133298AG

Friday, June 12, 2009

Half of Strokes Early After TIA Occur Within 24 Hours

A new study shows that nearly half of all strokes that occur after a transient ischemic attack (TIA) occur within the first 24 hours, highlighting the need for emergent intervention, the researchers say.

The good news is that the ABCD2 score, a validated risk score, was reliable in this hyperacute phase, meaning that "appropriately triaged emergency assessment and treatment are feasible," the researchers, with senior author Peter M. Rothwell, MD, from the Stroke Prevention Research Unit at Oxford University and John Radcliffe Hospital, in the United Kingdom, conclude.

This is the first rigorous population-based study of the risk for recurrent stroke within 24 hours of TIA, Dr. Rothwell told Medscape Neurology. "We found that nearly half of all the strokes that occur within 30 days after a TIA actually occur within those first 24 hours, so unless we intervene more quickly and treat it as a true emergency, rather than a 'see-urgently' problem, we'll miss the opportunity to prevent some of those early recurrent strokes," he said.

The results, reported on behalf of the Oxford Vascular Study, are published in the June 2 issue of Neurology.

Risk Underestimated

Over the past few years, Dr. Rothwell's group and others have been examining the natural history of TIA and minor strokes, looking at the extent to which the early risk for recurrent stroke has been underestimated in the past and trying to determine how best to prevent recurrent events after the warning signal of TIA has occurred.

Results of the Early Use of Existing Preventive Strategies for Stroke (EXPRESS) trial, of which Dr. Rothwell was principal investigator, showed that urgent aggressive intervention after a TIA or minor stroke cut the 90-day risk for recurrent stroke by 80%, as well as reducing fatal and nonfatal stroke, disability, hospital admission days, and costs by the same magnitude (Rothwell PM et al. Lancet 2007; 370:1398-1400; Luengo-Fernandez R et al. Lancet Neurol 2009;8:218-219).

On the basis of these kinds of findings, clinical guidelines in most countries have changed significantly, recommending that patients should be assessed within 24 hours of a TIA or minor stroke, down from a recommendation of 7 days only a year ago.

Still, while 24 hours is better than 7 days, "it's still not quite a medical emergency," he said. In this paper, they sought to determine the real risk for recurrence in the 24 hours following a TIA, "to see what the very early risk really is in the first few hours and what might be gained, therefore, by seeing patients even earlier, as well as what might be gained by better public education to get patients to present immediately when they have 1 of these minor episodes."

The ABCD2 risk score aims to help clinicians identify those at highest risk, but it was derived to predict the risk for stroke at 7 days and has not been examined in this hyperacute phase, Dr. Rothwell noted.

Using data from the Oxford Vascular Study, a prospective, population-based incidence study of TIA and stroke, they determined the risk for recurrent stroke at 6, 12, and 24 hours after an index event.

Of 1247 patients with a first TIA or stroke, 35 had recurrent strokes within 24 hours, all of them in the same arterial territory, the authors report. In 25 of these patients with recurrent strokes, the initial event was a TIA.

Of the 488 patients whose initial event was a TIA, 25 had a recurrent stroke within the first 24 hours, accounting for 42% of all strokes that occurred within the first 30 days.

Stroke Risk at 6, 12, and 24 Hours Following Transient Ischemic Attack

Time Point (h)    Stroke Risk (%)    95% CI
6                 1.2                0.2 – 2.2
12                2.1                0.8 – 3.2
24                5.1                3.1 – 7.1

"The other thing we were keen to do was make sure that the ABCD2 risk score, which is now embedded in all the national and international guidelines, actually worked for the risk of stroke within the first few hours," Dr. Rothwell noted. "The guidelines say that patients with scores less than 4 needn't be seen quite so urgently as those with higher scores, but that's only really been looked at for 7-day risk."

What they found is that basic triage using the ABCD2 score "still seems reasonable in patients that present within the first few hours," he said. The 12- and 24-hour risks were strongly related to the risk score (P = .02 and .0003, respectively). However, these findings were still based on small numbers of outcomes, they caution, and further studies on this would help to confirm their results.
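For readers unfamiliar with the score, the sketch below illustrates ABCD2-based triage in Python. The component points follow the commonly published ABCD2 definition (age, blood pressure, clinical features, duration, diabetes), which the article itself does not spell out, so treat them as an assumption here; the threshold of 4 reflects the guideline triage discussed above.

```python
# Sketch of ABCD2-based triage. Component points follow the commonly
# published ABCD2 definition, not this article, and are illustrative.

def abcd2_score(age, systolic_bp, diastolic_bp, unilateral_weakness,
                speech_disturbance_only, duration_min, diabetes):
    score = 0
    score += 1 if age >= 60 else 0                                  # A: age
    score += 1 if systolic_bp >= 140 or diastolic_bp >= 90 else 0   # B: blood pressure
    if unilateral_weakness:                                         # C: clinical features
        score += 2
    elif speech_disturbance_only:
        score += 1
    if duration_min >= 60:                                          # D: duration
        score += 2
    elif duration_min >= 10:
        score += 1
    score += 1 if diabetes else 0                                   # D: diabetes
    return score


def triage(score):
    # Guidelines cited above treat scores of 4 or more as needing the most
    # urgent assessment; lower scores still warrant prompt review.
    return "most urgent assessment" if score >= 4 else "prompt, lower-priority assessment"


s = abcd2_score(age=72, systolic_bp=150, diastolic_bp=85,
                unilateral_weakness=True, speech_disturbance_only=False,
                duration_min=45, diabetes=False)
print(s, triage(s))  # 5 most urgent assessment
```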

Of some concern, 16 of the 25 (64%) recurrent stroke patients with TIA as their initial event did seek medical attention, usually from their family doctor, after their TIA but did not receive antiplatelet therapy acutely, nor were they sent to the acute intervention clinic at their institution. "However, the fact that the majority of patients sought medical attention prior to their recurrence indicates that emergency triage and treatment are feasible, if front-line services recognize the need," the authors write.

Dr. Rothwell added that he sees the medical profession and neurologists in particular as getting the message that patients presenting with a TIA and a high risk score need to be seen "immediately, rather than tomorrow, which is the current guideline."

"I think the bigger challenge is to get that message over to the public, because at the moment only about 50% of patients who have a TIA seek medical attention within 24 hours, and a lot of patients don't seek medical attention at all," he said.

TIA: A Medical Emergency

Asked for comment on these findings, Philip B. Gorelick, MD, professor and head of the department of neurology and rehabilitation and director of the Center for Stroke Research at the University of Illinois College of Medicine at Chicago, said that accumulating data confirm that recent TIA should be treated as a medical emergency.

A recent change in the definition of TIA by the American Heart Association/American Stroke Association (Easton JD et al. Stroke 2009;40:2276-2293) supports this conclusion, Dr. Gorelick noted. The revised definition suggests that neuroimaging and diagnostic workup should be carried out within 24 hours of a TIA when patients present within this time period, and that it is reasonable to hospitalize patients who have had such an episode within the previous 72 hours if they have an ABCD2 score of 3 or more, he added.

Importantly, it is critical to rapidly determine the etiology of TIA — for example, whether it results from a cardiac source embolism or large artery disease, Dr. Gorelick noted. "Currently, a tissue-based definition of TIA has been adopted. In aggregate data, it has been estimated that about 39% of [magnetic resonance imaging] diffusion-weighted image studies in patients with TIA show a cerebral ischemic injury pattern, and therefore a cerebral infarction has actually occurred."

The current study emphasizes again the importance of rapid diagnosis and treatment, since there was a 5.1% risk for stroke in the first 24 hours after TIA, with many of the strokes leading to a poor outcome, he said. "Furthermore, the 7-day stroke rate was close to 10%," he noted. "Although 64% of these early cases sought urgent medical attention prior to recurrent stroke, none received antiplatelet therapy acutely."

The lesson from this and other studies is that TIA is not benign, and urgent diagnosis and treatment is indicated, even though this may not occur in real-world experience, Dr. Gorelick concluded. In this effort, the ABCD2 score is a reliable clinical tool that assesses the acute risk for stroke in TIA patients.

"We need to continue to educate the public and healthcare professionals about the importance of recent TIA as a predictor of stroke and the urgency of diagnosis and treatment," Dr. Gorelick told Medscape Neurology. "Emergency TIA assessment and treatment programs have proven to dramatically reduce the risk of stroke after TIA. Widespread establishment of such programs should be considered, as we need to get TIA patients under the care of those who have experience in vascular neurology and who can make a difference."

The study was funded by the UK Medical Research Council, the National Institute of Health Research, the Stroke Association, the Dunhill Medical Trust, and the Oxford Partnership Comprehensive Biomedical Research Centre. The authors report no disclosures.

Neurology. 2009;72:1941-1947. Abstract

Source : http://www.medscape.com/viewarticle/703922?sssdmh=dm1.481497&src=nldne

Saturday, May 23, 2009

World Health Organization Issues Guidelines on Hand Hygiene in Healthcare

May 6, 2009 — The World Health Organization (WHO) has issued Guidelines on Hand Hygiene in Health Care, offering a thorough review of evidence on hand hygiene in healthcare and specific recommendations to improve hygiene practices and reduce transmission of pathogenic microorganisms to patients and healthcare workers (HCWs).

The guidelines target hospital administrators and public health officials as well as HCWs, and they are designed to be used in any setting in which healthcare is delivered either to a patient or to a specific group, including all settings where healthcare is permanently or occasionally performed, such as home care by birth attendants. Individual adaptation of the recommendations is encouraged, based on local regulations, settings, needs, and resources.

Hand Hygiene Indications

Indications for hand hygiene are as follows:

• Wash hands with soap and water when visibly dirty, when soiled with blood or other body fluids, or after using the toilet.

• Handwashing with soap and water is preferred when exposure to potential spore-forming pathogens, such as Clostridium difficile, is strongly suspected or proven.

• In all other clinical situations, use an alcohol-based handrub as the preferred means for routine hand antisepsis, if hands are not visibly soiled. Wash hands with soap and water if alcohol-based handrub is not available.

• Hand hygiene is needed before and after touching the patient; before touching an invasive device used for patient care, regardless of whether gloves are used; after contact with body fluids or excretions, mucous membranes, nonintact skin, or wound dressings; when moving from a contaminated body site to another body site on the same patient; after touching inanimate surfaces and objects in the immediate vicinity of the patient; and after removing gloves.

• Before handling medication or preparing food, perform hand hygiene using an alcohol-based handrub or wash hands with water and either plain or antimicrobial soap.

• Soap and alcohol-based handrub should not be used together.

Hand Hygiene Techniques

Specific recommendations for hand hygiene technique are as follows:

• Rub a palmful of alcohol-based handrub over all hand surfaces until dry.

• When washing hands, wet hands with water and apply enough soap to cover all surfaces; rinse hands with water and dry thoroughly with a single-use towel. Whenever possible, use clean, running water. Avoid hot water, which may increase the risk for dermatitis.

• Use the towel to turn off the tap or faucet, and do not reuse the towel.

• Liquid, bar, leaf, or powdered soap is acceptable; bars should be small and placed in racks that allow drainage.

Surgical Hand Preparation

Specific recommendations for surgical hand preparation are as follows:

• Before beginning surgical hand preparation, remove jewelry. Artificial nails are prohibited.

• Sinks should be designed to reduce the risk for splashes.

• Visibly soiled hands should be washed with plain soap before surgical hand preparation, and a nail cleaner should be used to remove debris from underneath the fingernails, preferably under running water.

• Brushes are not recommended.

• Before donning sterile gloves, surgical hand antisepsis should be performed with a suitable antimicrobial soap or alcohol-based handrub, preferably one that ensures sustained activity. Alcohol-based handrub should be used when quality of water is not assured.

• When using an antimicrobial soap, scrub hands and forearms for the length of time recommended by the maker, usually 2 to 5 minutes.

• When using an alcohol-based surgical handrub, follow the maker's instructions; apply to dry hands only; do not combine with alcohol-based products sequentially; use enough product to keep hands and forearms wet throughout surgical hand preparation; and allow hands and forearms to dry thoroughly before donning sterile gloves.

Selecting Hand Hygiene Agents

Some specific recommendations for selection and handling of hand hygiene agents are as follows:

• Provide effective hand hygiene products with low potential to cause irritation.

• Ask for HCW input regarding skin tolerance, feel, and fragrance of any products being considered.

• Determine any known interaction between products used for cleaning hands, skin care products, and gloves used in the institution.

• Provide appropriate, accessible, well-functioning, clean dispensers at the point of care, and do not add soap or alcohol-based formulations to a partially empty dispenser.

Skin Care Recommendations

Some specific recommendations for skin care are as follows:

• Educate HCWs about hand-care practices designed to reduce the risk for irritant contact dermatitis and other skin damage.

• Provide alternative hand hygiene products for HCWs with confirmed allergies to standard products.

• Provide HCWs with hand lotions or creams to reduce the risk for irritant contact dermatitis.

• Use of antimicrobial soap is not recommended when alcohol-based handrub is available. Soap and alcohol-based handrub should not be used together.

Recommendations for Glove Use

Some specific recommendations for use of gloves are as follows:

• Glove use does not replace the need for hand hygiene.

• Gloves are recommended in situations in which contact with blood or other potentially infectious materials is likely.

• Remove gloves after caring for a patient, and do not reuse.

• Change or remove gloves if moving from a contaminated body site to either another body site within the same patient or the environment.

"In hand hygiene promotion programmes for HCWs, focus specifically on factors currently found to have a significant influence on behaviour, and not solely on the type of hand hygiene products," the guidelines authors write. "The strategy should be multifaceted and multimodal and include education and senior executive support for implementation. Educate HCWs about the type of patient-care activities that can result in hand contamination and about the advantages and disadvantages of various methods used to clean their hands."

Four of the guidelines authors have disclosed various financial relationships with GOJO, Clorox, and GlaxoSmithKline, and other companies and institutions. A complete description of their disclosures is available in the original article. The other guidelines authors have disclosed no relevant financial relationships.

WHO Guidelines on Hand Hygiene in Health Care. May 2009.

Clinical Context

In 2002, the Centers for Disease Control and Prevention Guideline for Hand Hygiene in Health-Care Settings was adopted. In 2004, WHO convened a group of international experts in infection control to prepare guidelines for hand hygiene in healthcare. Following a systematic review of the literature and task force meetings, the Advanced Draft of the WHO Guidelines on Hand Hygiene in Health Care was published in 2006. An Executive Summary of the Advanced Draft of the Guidelines is available separately (http://www.who.int/gpsc/tools/en/). Pilot testing of the advanced draft occurred, with subsequent updating and finalization of the guidelines.

The WHO Guidelines on Hand Hygiene in Health Care includes a review of scientific data, consensus recommendations, process and outcome measurements, proposals for large scale promotion of hand hygiene, patient participation in promotion of hand hygiene, and a review of national and subnational guidelines. The recommendations are expected to be valid until 2011 and will be updated every 2 to 3 years.

Study Highlights

  • Indications for washing hands with soap and water include visibly dirty hands, hands visibly soiled with body fluids, or after using the toilet.
  • Handwashing with soap and water is preferred after exposure to potential spore-forming pathogens, including Clostridium difficile outbreaks.
  • Alcohol-based handrub is preferred in the following situations if hands are not visibly soiled: before and after touching a patient; before handling an invasive device for patient care; after contact with body fluids or excretions, mucous membranes, nonintact skin, or wound dressings; when moving from a contaminated body site to another site on the same patient; after contact with inanimate surfaces and objects; and after removing sterile or nonsterile gloves.
  • Handwashing with soap and water is recommended when alcohol-based handrub is unavailable.
  • Alcohol-based handrub or soap and water can be used before handling medication or preparing food.
  • Concomitant alcohol-based handrub and soap use is not recommended.
  • Soap and water hand-washing technique includes using a towel to turn off the faucet, thorough drying of hands, and single towel use.
  • Acceptable forms of soap are liquid, bar, leaf, or powdered.
  • Bar soap racks should allow drainage to ensure that the soap dries.
  • Alcohol-based handrub technique includes applying palmful amount of handrub, covering all surfaces, and rubbing hands until dry.
  • Surgical hand hygiene recommendations include removal of jewelry, no brushes, and use of either antimicrobial soap or alcohol-based handrub according to the maker's recommendations.
  • Selection of hand hygiene agents should consider input from HCWs, interaction with other products or gloves, risk for contamination, accessibility and proper functioning of dispensers, approval of dispensers for flammable materials, and cost comparisons.
  • Soap or alcohol-based handrub should not be added to partially empty soap dispensers.
  • Skin care irritation in HCWs can be avoided by providing educational programs, alternative hand hygiene products for those with allergies or adverse reactions to standard products, and hand moisturizers to reduce irritant contact dermatitis.
  • Glove use does not replace the need for handrub or handwashing.
  • Gloves should be used if contact with potentially infectious body fluids, mucous membranes, or nonintact skin is anticipated.
  • Gloves should be removed or changed after each patient or after contact with a contaminated body site.
  • Artificial nails or extenders should not be used, and the length of natural nail tips should be less than 0.5 cm.
  • Educational and motivational programs for HCWs should focus on behavior; be multimodal; include senior executive support; educate about the advantages and disadvantages of various hand hygiene methods; monitor adherence and provide performance feedback; and encourage partnership between patients, families, and HCWs.
  • Healthcare administrators should provide and monitor safe, continuous water supply; provide alcohol-based handrub at the point of patient care; prioritize compliance; provide leadership, administrative support, and financial resources; ensure training; implement a multidisciplinary, multifaceted, and multimodal program to improve adherence; and adhere to national safety guidelines and local legal requirements.
  • National governments should prioritize adherence; consider funded, coordinated implementation and monitoring; support strengthening of infection control in healthcare settings; promote community hand hygiene; and encourage use of hand hygiene as a quality indicator in healthcare settings.

Clinical Implications

  • The WHO guidelines recommend handwashing with soap and water for visibly dirty hands, hands visibly soiled with body fluids, after toilet use, exposure to potential spore-forming pathogens, and if alcohol-based handrub is not available in other situations.
  • The WHO guidelines recommend alcohol-based handrub before and after touching patients; before handling invasive devices; after contact with body fluids or excretions, mucous membranes, nonintact skin, or wound dressings; when moving from a contaminated body site to another body site; after contact with inanimate surfaces and objects; and after removing gloves.
Source : http://cme.medscape.com/viewarticle/702403?src=cmenews

Call for Routine Cardiac Screening in Emergency-Department Patients with Cocaine Intoxication, Addiction

May 21, 2009 (San Francisco, California) — Despite the fact that cocaine abuse accounts for approximately 25% of nonfatal myocardial infarctions (MIs) in young people, most addicted individuals presenting to the psychiatric emergency department are not routinely screened for this potentially lethal complication, new research suggests.

A retrospective study presented here at the American Psychiatric Association 162nd Annual Meeting showed that, of 122 cocaine-addicted patients, only 42% received an electrocardiogram (ECG) upon presentation to the emergency department. However, of these individuals, more than 90% had abnormal ECG results, including a significant prevalence of peaked T waves. Further, 4 patients (8.2%) had ST elevations indicative of significant cardiac ischemia.


"Many cocaine-addicted patients present to the psychiatric emergency department vs a medical emergency department. So this is potentially the only opportunity to screen for cardiac complications. Yet our research suggests psychiatrists are not being vigilant enough with respect to this," Valerie D'Aurora, from St. George's University School of Medicine, in Grenada, the West Indies, told Medscape Psychiatry.

Baseline ECG Should Be Standard Practice

To determine current management of cocaine-addicted patients presenting to the psychiatric emergency department and examine cardiac risk factors in this patient population, the researchers conducted a chart review of 122 patients with a diagnosis of cocaine dependence.

Of these individuals, 52 (42.6%) received an ECG and 4 (3.3%) had measurement of cardiac bioenzymes, including troponin and creatine kinase (CK)-MB. Among subjects who did receive an ECG, the most common findings were:

  • Nonspecific ST-T wave changes in 15 (31.2%) patients.
  • Peaked T waves in 23 (47.9%) patients.
  • Early afterdepolarizations in 19 (39.6%) patients.
  • ST elevations in 2 or more contiguous leads in 4 (8.3%) patients. However, the investigators note there were no baseline ECGs available for comparison to determine the acuity of changes.

The researchers also note that ECGs were obtained, on average, 2 days after patients presented to the emergency department, with follow-up in only 2 (1.6%) patients. This is despite research showing that the risk for MI is greatest in the first hour after cocaine use and is independent of dose, frequency, or route of administration. They also note that even trace amounts of cocaine in urine indicate the need to implement an acute coronary syndrome (ACS) protocol.

"We're lucky if these patients get to the emergency department an hour after use, so they need to get an ECG immediately upon presentation," said Ms. D'Aurora.

Few Patients Assessed for Cardiac Risk Factors

Concomitant alcohol dependence was identified in 40 (32.8%) patients, and 81 (66.4%) individuals were nicotine dependent. In addition, 63 (51.6%) were also dependent on opioids and 26 (21%) on benzodiazepines.

When researchers analyzed data on additional cardiac risk factors, they found that this information (family history of heart disease, hypertension, obesity, diabetes, and abdominal aortic aneurysm) was documented in only 21.3% of study subjects.

In light of these findings, Ms. D'Aurora and colleagues have developed an algorithm for individuals presenting to the psychiatric emergency department with a high suspicion of cocaine abuse to optimize patient care that includes a baseline ECG within 1 hour of presentation to the emergency department.

"What was really surprising about this study is what is not being done. When these patients have positive ECGs, they are not being referred to cardiologists per the ACS guidelines."

Regardless of whether they present to a medical or a psychiatric emergency department, these patients should be assessed for potential cardiac complications, said Ms. D'Aurora.

"With all of the other things that go on in a psychiatric emergency department, I think the importance of this is probably underrated. But even if it just means taking out your stethoscope and listening to the heart or taking a pulse, cardiac assessment in these patients needs to become part of the standard management in the psychiatric department," she said.

High Index of Suspicion

Asked by Medscape Psychiatry to comment on the study, Mark Willenbring, MD, director of the division of treatment and recovery research at the National Institute on Alcohol Abuse and Alcoholism, said the findings highlight the need to have a higher index of suspicion for cardiac complications in cocaine addiction and intoxication.

"We've known for a long time that this patient group is at particular risk of cardiac complications. This study suggests that there are significant cardiac abnormalities that are being missed in patients who are triaged to a psychiatric service, and I suspect that's quite likely," said Dr. Willenbring.

"That said, I'm not a cardiologist, and so I don't know if providing a routine ECG in all of these patients is necessary or cost-effective, but I do think one should always have a very high index of suspicion and investigate for the possible presence of cardiac abnormalities with a lower threshold in these patients than you would with others," he added.

Source : http://www.medscape.com/viewarticle/703149?sssdmh=dm1.474949&src=nldne