by Sharon Begley, Newsweek, May 18, 2010
When it comes to predicting risk of disease, Alzheimer's genes—and others—strike out.
When James Watson, codiscoverer of the double helix, had his genome fully sequenced in 2008, there was one piece of DNA he insisted the lab not tell him about: whether he had a genetic variant that significantly increases the chance of developing Alzheimer's disease. Called apoE, the gene comes in three variants, of which apoE4 increases the risk of Alzheimer's between 10- and 30-fold. Different people have different feelings about learning what lies in their medical future, especially if it is something for which there is neither cure nor treatment. (House got good mileage out of this dilemma when Thirteen, played by Olivia Wilde, decided to find out whether she carries the gene for the inevitably fatal, incurable Huntington's disease. She does.) If studies coming out over the last few months are any indication, however, most of us can postpone making this difficult decision: the revolution in using DNA to read people's medical future is turning out to be more hype than hope.
The latest research to throw cold water on the crystal-ball powers of DNA is a paper in the current issue of the Journal of the American Medical Association. It starts out as a standard genomewide association study (GWAS), in which scientists compare the genomes of people with and without a particular disease to identify genetic variants associated with that illness. In this case, Monique Breteler of the University Medical Center in Rotterdam and her colleagues analyzed the genomes of just over 35,000 people, some healthy and some with Alzheimer's, and found that four DNA misspellings (single-nucleotide polymorphisms, in the vernacular) were connected to Alzheimer's: they were significantly more common in people with the disease than in healthy people.
Until recently, that would have been that: a rigorous, thorough analysis (just over 35,000 genomes) leading to headlines about newly discovered genes linked to this dreaded disease. (Two of the four identified misspellings were previously known, and two were new.) But to their credit, Breteler's team took the next step. They used the four misspellings, along with individuals' age and sex and whether or not they carried the apoE4 genetic variant that so frightened Watson, to try to predict who would develop Alzheimer's. The results were not pretty. Adding the newly discovered genes "did not improve the ability of a model that included age, sex, and apoE to predict" whether someone would develop Alzheimer's. The genes, concluded the scientists, were "not clinically useful."
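To make concrete what "did not improve the ability of a model to predict" means, here is a minimal, purely illustrative sketch in Python, with made-up data and hypothetical variable names, not the JAMA study's actual analysis. It fits a baseline risk model on age, sex, and apoE4 status, refits it with four extra risk variants added, and compares the two by area under the ROC curve, the kind of discrimination measure such studies report.

# Illustrative sketch only: synthetic data, not the JAMA study's model or results.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical predictors: age, sex, apoE4 carrier status, and four extra risk variants.
age = rng.normal(70, 8, n)
sex = rng.integers(0, 2, n)
apoe4 = rng.integers(0, 2, n)
snps = rng.integers(0, 2, size=(n, 4))

# Simulated disease risk driven mostly by age and apoE4; the extra variants have tiny effects.
logit = -6 + 0.06 * age + 0.2 * sex + 1.2 * apoe4 + snps @ np.full(4, 0.05)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_base = np.column_stack([age, sex, apoe4])   # baseline model: age, sex, apoE4
X_full = np.column_stack([X_base, snps])      # baseline plus the four variants

Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, y, test_size=0.3, random_state=0)

auc_base = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                         .fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_full = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                         .fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])

print(f"AUC, age + sex + apoE4:        {auc_base:.3f}")
print(f"AUC, plus four risk variants:  {auc_full:.3f}")
# If the extra variants add little discriminative power, the two AUCs are nearly identical.

When the added variants carry little independent information, the second model's AUC barely budges, which is essentially what "not clinically useful" describes.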
In a phone interview, Breteler went further. "Adding these genes to traditional risk factors, such as age and sex, does nothing to aid prediction" of whether someone will develop Alzheimer's, she told me. "Knowing your genetic status will not help. We may still be in the Stone Age when it comes to gene-based prediction." Identifying risk genes isn't pointless, however: they can point to new causes of the disease, and therefore to new ways to treat it.
The finding that adding Alzheimer's-risk genes to plain old age, sex, and apoE status does not improve the accuracy of disease prediction seems to defy everything the public is being told about the dawn of a new era of personalized medicine, in which knowing our genomes will tip us off to the diseases we are most at risk for. Genome-based forecasting is deemed vastly superior to antediluvian methods such as family history. And it is the basis for the explosion in consumer genome testing, such as that offered by 23andMe, Navigenics, and Pathway Genomics, whose plan to sell its saliva-swab DNA collection kits at Walgreens stores was shot down by the FDA last week.
Yet, as the JAMA study shows, there are serious doubts about how useful genomic information is going to be, outside of a few rare applications, such as testing a child with leukemia for the ability to metabolize chemotherapy, one of the earliest attempts to pair genomics with medicine. Just last year, a study in JAMA concluded that testing patients for genes that raise the risk of blood clots (venous thromboembolism) does not necessarily help prevent those clots. In a related setback last year, Medicare concluded that genetic tests indicating how well patients metabolize the blood thinner warfarin do not meaningfully help doctors determine a safe dose; the agency therefore declined to pay for the tests. This year, another JAMA study, of 19,313 women, found that using multiple genetic markers to assess someone's risk of cardiovascular disease produces no better a risk assessment than old-fashioned measures such as cholesterol level, blood pressure, and family history. And this was a study that used 101 genetic variants. Not to pile on, but let me mention one more, on assessing a woman's risk of breast cancer. A study of almost 12,000 women by scientists at the National Cancer Institute, published in The New England Journal of Medicine in March, found that supplementing traditional risk factors (whether first-degree relatives such as a mother or sister developed breast cancer; reproductive history) with 10 genetic variants associated with breast cancer did no better at predicting whether a woman would get the disease than the traditional factors alone.
How can knowing whether or not someone has genes associated with a disease not be helpful in predicting whether that disease will strike? No one knows for sure. But it must reflect the fact that the effect of a gene depends on a person's "genetic background"—all the other genes he or she has. And it also reflects a person's environment. In some environments a gene does lead to disease; in others, it doesn't.
Individuals can differ in whether or not they even want to know their risk of Alzheimer's. Some prefer not knowing; others believe it will help them plan, financially and personally. One in every five of us who reaches age 65 will develop Alzheimer's disease. But at minimum, believers in personalized medicine should not be selling the public a bill of goods. I spent a week at Harvard Medical School last year, meeting with scientists, and one of my most surprising conversations was with geneticist David Altshuler, who to all appearances should be a cheerleader for genome-based personalized medicine. (He was a leader of the HapMap project to link large swaths of genetic variation to disease.) Yet he told me that using an individual's genome to assess the risk of disease is "overhyped." He continued, "If you ask what percentage of diagnostic tests in the history of medicine have been helpful, the answer is very few. There is a long history of new technologies being applied broadly beyond their utility." He echoes Breteler's view that the greatest benefit of GWAS and similar studies of genes and disease will be to illuminate the mechanisms that cause disease, and thus offer ways to intervene in those mechanisms to prevent or treat an illness.
Personalized medicine has many high-profile partisans, such as Francis Collins, director of the National Institutes of Health, who made the case for the field in his recent book. Nevertheless, second thoughts are clearly setting in as a result of studies like those I outlined above. Last year, geneticist Steve Jones of University College London wrote in The Daily Telegraph that despite the billions of dollars that governments, industry, and foundations have poured into genomics and personalized medicine, "the mountain has labored and brought forth a mouse," one that will have little effect on how medicine is practiced, let alone on predicting someone's risk of disease.