Monthly Archives: January 2014

Why UK citizens should opt out of the Government’s new patient database

We have until March to opt out of the care.data initiative. The ‘theoretical risk’ that we might be re-identified from our personal data once it is made available to third parties is a compelling reason to opt out. However, this is not the only reason. Care.data is part of a major legislative programme that includes the Clinical Practice Research Datalink (CPRD) and the 100,000 genome project – through which whole-sequenced genomes will be put to commercial use. These major infrastructural developments have been accompanied by radical changes to privacy law that have resulted in a cultural shift in the governance of information.

These sweeping changes in privacy law were introduced without consultation, and the risks they entail will be borne by those whose medical records may be accessed without their consent. How did we get to this point? Fourteen years ago, written evidence by SmithKline Beecham to the Select Committee on Health, House of Commons, advanced the view that the “NHS represents a singular but under-utilised resource for population genetics, and healthcare informatics more generally.”

In 2007, further written evidence to the Select Committee on Health, House of Lords, by the Association of the British Pharmaceutical Industry, stated that “there is an international race for benefit and competitive advantage in research where the UK could have a significant Unique Selling Point (USP).” By 2008, a major legislative programme to secure this competitive advantage was well underway. As part of the NHS National IT Programme, electronic medical records were rolled out across England and Wales on an opt-out basis. Although parts of the NHS National IT Programme were discontinued, the ambition – to quote from the ‘Plan for Growth’ 2012 – of “using e-health record data to create a unique position for the UK in health research” remained unchanged. The Clinical Practice Research Datalink (CPRD), which provides data services to the research and life sciences communities, was established in 2012. In the summer of 2012, in response to a Freedom of Information request, the Department of Health confirmed that it was to establish a “central repository for storing genomic and genetic data and relevant phenotypic data from patients”. Further correspondence led to a clear statement of the government’s position with respect to the uses of the whole-sequenced genomes held on the national repository: “we plan to capitalise on the UK’s strengths in genomics. The UK is well placed to play a world-leading role in this next phase of the biomedical revolution.” One of the “purposes of the initiative is to support the growth of UK genomics and bioinformatics companies.” Later that year, the government announced that the genomes of 100,000 people were to be sequenced over the course of three to five years.
‘Genomics England’, launched by the Department of Health in 2013, was to oversee the sequencing of the personal genomes and the creation of a dataset of “whole genome sequences, matched with clinical data” – “at a scale unique in the world.”

Alongside these developments, radical changes to information governance were introduced, such as Section 251 of the NHS Act 2006. Overriding the common law of confidentiality and Article 8 of the Human Rights Act 1998, Section 251 makes it possible to access people’s medical records without their consent. An equally significant development was the Health and Social Care Act 2012, through which patient-identifiable data can be made available via the NHS Commissioning Board and the Health and Social Care Information Centre (HSCIC). So we now have care.data, and the decision we face about whether or not to opt out.

Focusing initially on rare inherited diseases, the 100,000 genome project began its first phase of sequencing in December 2013. The role of the recently established ‘Ethics Advisory Group’ is to provide ethical guidance, and it acknowledges that “irreversible de-identification of whole genome sequence cannot be fully guaranteed for technical reasons”. On limiting the kind of research conducted on the repository data, the Ethics Advisory Group claims that “it would be impractical for it to be possible for patients to place restrictions on the research undertaken on the data, for example by limiting it to ‘non-commercial research.’” As the current legal framework provides no determinate guidance on acceptable uses of whole-sequenced genomes, the ball really is in the Ethics Advisory Group’s court.

Following a request from the Secretary of State for Health, a major independent review of information sharing was launched, called Caldicott2. Through democratic consultation with all relevant stakeholders, Caldicott2 might have provided a timely, non-partisan assessment of the information governance regime. However, prior to the publication of the Caldicott2 recommendations in March 2013, the government’s ‘Strategy for UK life sciences 2012’ had already made it clear that it would move to “a more progressive regulatory environment.” Caldicott2 was therefore never going to be able to meet its aspirations to give the public a stake in deciding whether or not information would be shared.

As the Caldicott2 report claims that “genetic information should not be treated any differently from other forms of information”, can we expect a laissez-faire approach to whole sequenced genomes?

By choosing to opt out we are asserting our democratic rights. We will soon face a scenario in which medical records are linked to the whole-sequenced genomes of the population of England and Wales. The sheer scale only augments the associated risks, and when commercialisation appears to be the driving force, we find ourselves in uncharted territory. Opting out, then, is more than a personal choice; it is taking a stand on what kind of society we want in the future.

Edward Hockings, Guardian

A genetic “Minority Report”: How corporate DNA testing could put us at risk

On Nov. 22, 2013, the FDA sent a now-infamous letter to the genetic research startup 23andMe, ordering the company to stop marketing some of its personal DNA testing kits. In its letter, the agency told 23andMe that it was concerned about the possibility of erroneous test results – false positives and false negatives. The FDA warned that false positives – for example, being told that one has a high risk of breast cancer when really one doesn’t – might lead customers to seek expensive testing or medical treatment that they don’t really need. False negatives – just the opposite – might lead customers to ignore serious health problems or deviate from a prescribed treatment regime. The company had been out of contact with the FDA since May 2013, and had not filed the required information to allay the agency’s concerns.

When word about the letter got out, the ever-ready machine of Internet journalism whirred quickly to life. Defenders of the genetic research firm argued that the information, if used properly and with a physician’s supervision, is not only a wondrous tool for protecting health and prolonging life, but a fascinating look into the mysteries of one’s genetic code. Besides, they continued, the technology is here to stay; fighting it will be like lashing the sea to stop the tide. Critics, however, shared the FDA’s worries that the company’s products might drive a frenzy of self-diagnosis and hypochondria. In their view, putting unfiltered diagnostic information in an anxiety-prone person’s hands could be dangerous – better to leave it to the trained judgment of a licensed doctor. Neurologists have reported, for example, that otherwise healthy twenty-somethings, upon getting back their 23andMe results, have marched into their clinics demanding MRIs to check for signs of Alzheimer’s.

But how does 23andMe’s process actually work? Upon receiving a saliva sample from a customer, 23andMe draws statistical inferences about that person’s likelihood of having certain diseases based on what are called “single nucleotide polymorphisms” or “SNPs” (pronounced “snips”). A SNP is a genetic mutation at a single, specific location along a person’s DNA strand. Rather than the base pair that the vast majority of the population has at a given location, an individual with a SNP has a different base pair, which can lead to anomalies in the proteins – chains of amino acids – that the body produces. When 23andMe finds certain SNPs in someone’s DNA, it then cross-references its vast databases of medical literature, which contain findings from thousands of studies showing correlations between certain SNPs and certain diseases. To guard against misinforming its customers, 23andMe maintains precise standards about which studies make it into its databases. The political fight about the company, thus, is drawing the troops into a swamp – into the muddy marsh of arguing over whether consumers are intelligent enough to process this sort of sophisticated statistical and epidemiological information on their own. The question quickly becomes not about 23andMe’s little kits, but about government paternalism more generally.
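The cross-referencing step described above can be sketched in a few lines of Python. Everything here is invented for illustration – the SNP identifiers, conditions, odds ratios, and the naive “scale baseline prevalence by a reported odds ratio” calculation are assumptions for the sketch, not 23andMe’s actual data or methodology:

```python
# Hypothetical sketch of SNP-to-disease cross-referencing.
# SNP IDs, conditions and odds ratios below are invented for illustration.

# A curated "literature" table: SNP id -> (condition, reported odds ratio).
STUDY_FINDINGS = {
    "rs0000001": ("condition A", 1.3),
    "rs0000002": ("condition B", 0.8),
}

def risk_report(customer_snps, baseline_prevalence):
    """Look up each of the customer's SNPs in the findings table and
    scale the condition's baseline prevalence by the reported odds
    ratio (a crude simplification of real genotype-risk models)."""
    report = {}
    for snp in customer_snps:
        if snp in STUDY_FINDINGS:
            condition, odds_ratio = STUDY_FINDINGS[snp]
            base = baseline_prevalence.get(condition, 0.0)
            report[condition] = round(base * odds_ratio, 4)
    return report

print(risk_report({"rs0000001"}, {"condition A": 0.01}))
```

The point of the sketch is only the shape of the pipeline: genotype in, database lookup, statistical estimate out – which is also why the quality of the curated findings table matters so much.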

Legally speaking, however, the entire argument is moot. There is no need to wade into the muck of one’s feelings about big-government paternalism or the perils of new technologies – at least not right now. And that is for a simple reason: The FDA is right. According to the law, 23andMe’s kit is a “medical device” – a term federal law defines to cover any product useful in diagnosis or treatment – which means that the kit is subject to the FDA’s regulations on the marketing of medical products. Those regulations require 23andMe to provide the FDA with information about its statistical methods, to conduct studies of its products’ effects, and to address the agency’s concerns about the risks of erroneous test results. It hasn’t done so. In the real world, thus, 23andMe will have to comply with the FDA’s guidelines, and it will have to find some way of reassuring the government that its brightly-colored little kit is not actually Pandora’s box. From a legal perspective, that is the end of the discussion.

Which is not, of course, to say that it should end the discussion. There is another aspect of 23andMe’s business, one which has received less attention from the media (with the exception of an excellent writeup in Scientific American by Charles Seife), but which is, in actuality, both equally troubling and equally fascinating. The company also houses a sizable research wing. 23andMe intends to aggregate the genetic information it receives and correlate that information with self-supplied data from customers about their biological traits – for instance, whether they clasp their hands with left thumb over right or vice versa; whether black coffee tastes bitter to them; or, more seriously, whether they have Parkinson’s disease. According to its website, the company hopes to use this information to find new ways to predict the incidence of disease in the population.

If the project is successful, 23andMe’s roof would shelter a veritable warehouse of genetic information, mapping correlations between phenotypes, SNPs and diseases across a huge swath of the population (albeit not a particularly diverse swath, given the company’s likely market demographics). To date, 23andMe says that it has received saliva samples from approximately 500,000 people – many times the sample size of most large-scale genetic studies – and the company has only existed since 2006. Once it clears its name with the FDA, the amount of genetic information available to the firm’s research arm will only grow. “The long game here is not to make money selling kits, although the kits are essential to get the base-level data,” said Patrick Chung, a 23andMe board member. And the company’s privacy policy, it’s worth noting, makes no promises that it will not share aggregate-level genetic data with its vendors and affiliates, such as the company that manufactures the chips used in processing saliva samples.

In a dashed-off writeup, Ezra Klein blithely opined that the company’s research wing is simply “an experiment in big-data genetics.” Citing Harvard political science professor Daniel Carpenter, who recently published a lengthy history of the FDA, Klein argued that new technologies like 23andMe’s might provide a salubrious occasion to reevaluate how the law deals with cutting-edge bioinformatics. (How exactly the rules should change, though, Klein shrewdly declined to say.) There might be something to the point, but saying that we might possibly want to eventually give some consideration to maybe letting the political process (in consultation with appropriate experts, of course) arrive at some new rules about how such products might or might not be regulated is, ultimately, not to say much at all. It is almost as if Klein sees that there is a steep cliff off which he might take a sharp plunge into a sea of difficult and frightening questions, and decides instead simply to gesture offhandedly in its general direction.

But the only way forward is downward. Whatever happens to 23andMe, the idea of aggregating and cross-referencing huge amounts of genetic information is now a cultural reality. A private corporation, backed by some of the largest tech companies in the world (including Google), is planning to do genetic research through statistical modeling on an unprecedented scale. The implications of such a venture are profound, but the national conversation about the company has focused almost exclusively on what consumers will do with the information. This is not to say that it is not perfectly reasonable to have such a conversation. In fact, it is even reasonable to slosh around in the muddy waters of the swamp – the battle over paternalism and the authority of the medical establishment. But if we dare not ask, for example, whether an advanced, large-scale statistical model of human genetics might not be used to make predictions about, say, criminal behavior, or academic success, or lifetime income, who will? Let us dare to dive into the sea.

To be clear, at present, 23andMe’s databases are used for properly epidemiological purposes. And the company seems to be quite serious about protecting the privacy of customer data and fostering trust. Identifying information, for example, is encrypted and stored separately from genetic material. I am not, that is, accusing them of wrongdoing – of conspiring to create a genetic dystopia like the ones envisioned in films like “Gattaca” and “Minority Report.” Rather, I am suggesting that the idea of a massive genetic database holds all the ominous potential, if not used with extreme circumspection, to lead exactly there.

For instance, there is no reason, in principle, why the information available to 23andMe could not be used to make predictions about future crimes. With the information in 23andMe’s hands, it would be possible, for example, to see whether you have the so-called Warrior Gene, the presence of which, a famous study found, correlates with a high likelihood of violent behavior, and then send you a “How likely are you to become a murderer?” report. There is also no reason why the company’s statistical techniques could not be used to make predictions about intelligence. For example, the journal Nature reported last May that the Beijing Genomics Institute intends to study 1,600 mathematically and verbally gifted children (as measured by IQ) in a search for SNPs that might explain their extraordinary intelligence. If studies like this disclose significant results, and if their methods meet 23andMe’s guidelines for scientific integrity, the company could begin marketing an all-new kit called “Will your baby become a super genius?”

From there, it is not hard to imagine going even further. Earlier this year, in Maryland v. King, the Supreme Court held, in a 5-4 ruling, that it was constitutional for the police to collect a sample of a person’s DNA when he or she is arrested for – not even formally charged with – a crime. In a decision that none other than Justice Scalia criticized, in a fiery dissent, as authorizing the creation of a “genetic Panopticon,” the Court concluded that it was constitutional for the police to compare these DNA samples with national databases of DNA evidence found at the scenes of unsolved crimes. What is to stop the police from going further – from working together with bioinformatics companies like 23andMe to build a crime-prediction and crime-detection mega-model? Although 23andMe’s information is currently stored in a form that makes it difficult for law enforcement to access, it would be possible in principle to cross-reference all three sources of information – bioinformatics customers, criminal arrestees and crime-scene evidence. With such a tool, the police could both find criminals foolish enough to want to know about their risks of disease and, more frighteningly, find SNPs that are correlated with getting arrested or committing crimes. Are we supposed to take comfort in the company’s own privacy policy, which explicitly provides that it will surrender individual-level genetic material if required to do so by law? “Minority Report” indeed.

To many readers, worrying about these uses – and about whether they will lead to some sort of “Gattaca”-like dystopia – might sound like hyperbole, if not the ravings of a paranoid lunatic. After all, 23andMe has announced no plans to create such products. Asked for comment about these uses, 23andMe’s Catherine Afarian quite reasonably told me that they are, at this point, “more science fiction than science fact.” Furthermore, in 2008, Congress passed, with enormous bipartisan support and the signature of President George W. Bush, the Genetic Information Nondiscrimination Act or “GINA.” GINA prohibits the use of information about a person’s genetic predispositions in pricing health insurance or making employment decisions, such as hiring and promotion. So what, one might reasonably ask, is the problem? Isn’t the only remaining question, indeed, the one about whether we think people are smart enough to process this sort of genetic information on their own?

But the point, again, is not only about this particular company, but about the broader trend toward making predictions about what will happen to someone based on the mutations in his or her genes. 23andMe is just the beginning; their kit is merely the prototype for a kind of bioinformatics product that companies will package and market to us in the years to come. They have, in fact, just proved that we are eager to buy. And while the passage of GINA undoubtedly represents a reassuring and admirable step toward ensuring that discoveries about our genetic predispositions are not used against us (though even then only in specific contexts), it cannot stop the momentum of our broader culture. Curiosity is a rushing river. If people want to know, the law cannot stop them from finding out. The only law that reigns when you click the button and send 23andMe your $99, when you follow the instructions and gather your saliva, when you then raptly read – with intense and morbid curiosity – the report that the company sends back, is the law of the market. At the end of the day, there is only you and the choices you make.

What, then, are we supposed to do? There is at least one serious response available. We might, that is, not give up on the idea that we can reverse the tide. We have the power, if we choose to exercise it, to pass laws that would serve as significant bulwarks against the more insidious uses to which genetic data could be put. For example, we might still make laws to prohibit the police practices that the Supreme Court said were constitutional in Maryland v. King. Or we might pass a broader, more robust genetic non-discrimination act, one that would bar the use of genetic information in education, in police profiling, in commerce and countless other arenas in which we otherwise enjoy civil rights. Or we might even pass a genetic privacy law, one that would outlaw certain data-sharing and retention practices and impose heavy penalties for violations. (This last option seems particularly far-fetched in a world where we apparently tolerate widespread spying by our own government.)

But I doubt that anything like this will happen. As it stands today, even if the FDA – the agency we all, in practice, must rely on to ensure the safety and integrity of these kinds of products – makes some new rules, 23andMe’s kits, adorned with new warnings and restrictions, can return to market. And even if you, personally, decide not to participate in 23andMe’s process, there is nothing you can do to stop others from placing an order for a kit. It appears that all we can do is hope. Maybe, if they are scrupulous, 23andMe will do nothing more than make contributions to epidemiology and to our understanding of our biological makeup. After all, there are millions of people in the world suffering with serious diseases, many of which likely have genetic links. Surely we should not wish to stop the gathering of data that could lead to major scientific breakthroughs?

Indeed not. It is, however, the good name of science that is treated most roughly in the national conversation about 23andMe. Many working scientists (ironically including some at 23andMe) maintain a ferocious skepticism about the power of correlational genetic studies to reveal anything meaningful about the phenomena that they purport to measure. Asked for comment about the study of super-intelligent youngsters mentioned above, geneticist Peter Visscher from the University of Queensland, Brisbane, told Nature: “Even for human height, where you have samples of hundreds of thousands, the prediction you’d get for a newborn person isn’t very accurate.” Likewise, in response to evidence about the so-called Warrior Gene, Harvard neuroscientist Joshua Buckholtz wrote in NOVA that, in his view, “any test for a single genetic marker will likely be meaningless for either explaining or predicting [individual] human behavior.” And in general, the most skeptical readers of the findings from scientific studies tend to be scientists themselves.

All of which serves as a reminder that 23andMe is, in the final analysis, a marketer of data, a computer-science firm and a builder of complex correlational models. If we are convinced by what it tells us, it is only because we have implicitly accepted its premises – that our own behavior owes its origins to our biology, and that if we know everything about our genes, the events of our lives follow as night the day. Our fates are sealed on the day we are born. But these notions conflate the external markers and tools of science – the statistical methods known as “causal inference” – with scientific knowledge itself. Human understanding is broader, deeper and wider than can be contained in any formal system built within it. On the day when all these companies unite, bringing together all their predictive data into one über-model, will we kneel around the humming servers, in awe that we have actually built Laplace’s demon? Will we then ask them how we should live our lives?

Benjamin Winterhalter, Salon

NYPD Brew: Beer Cans Found in Stationhouse Spark CSI Probe

After four empty beer cans and a chilled six-pack were discovered inside a Bronx police stationhouse, top NYPD brass called in a team of forensic experts to try to detect DNA and fingerprints, DNAinfo New York has learned.

Chief of Detectives Phil Pulaski’s Inspections Unit summoned the scientific firepower about two weeks ago, after they were informed that Budweisers were found in a wastebasket and refrigerator in the 47th Precinct dormitory, where officers catch a few winks while waiting off-duty for their next shift to start, sources said.

Under a prohibition ordered by former Commissioner Raymond Kelly, officers are barred from having alcohol inside police facilities.

The trouble started when a Bronx patrol captain who works as an integrity officer went to the 47th Precinct looking for a uniformed lieutenant at about 2 a.m. Friday, Jan. 10, and wandered into the dormitory area. There he found the precinct detective squad commander sleeping in a cushy chair, sources said.

The commander, who is a lieutenant, got off work about midnight and decided to sleep in the station house rather than drive home because he was due to punch in for another tour at about 8 a.m.

Sources say the two supervisors had a frosty exchange of words. The altercation was about to boil over when the captain began rummaging around the room and discovered the beers.

Despite the fact that the lieutenant was fit for duty and there was no evidence anyone had been drinking, the captain notified the chief of detectives’ office, sources said.

More than a dozen detectives who worked at the precinct were quickly interrogated, virtually shutting down investigations in the north Bronx precinct, which covers Woodlawn and Wakefield, sources said.

Those interviews failed to crack the case, so officials called in forensics investigators to get more evidence.

Fingerprints of all police officers are on file at the NYPD, but not their DNA samples. It was not immediately clear what evidence, if any, was recovered.

Since then, the probe seems to have gone flat.

Michael Palladino, president of the detectives union, said, “the union does not encourage alcohol consumption, however, treating four empty beer cans in the garbage like it was the ‘Dirty 30’ [corruption] caper is excessive and a waste of precious time and resources.”

Regardless of the probe’s outcome, the incident has further strained morale within the detective ranks, and the rift between detectives and management has grown so wide a team of Clydesdales could drive right through it, sources said.

For example, the detectives union did not extend a routine invitation to Chief Pulaski to attend their annual winter gathering this weekend in upstate New York.

Palladino refused to confirm or deny the invitation issue, saying only that “our convention is taking place this weekend. Other than that, we don’t discuss who attends publicly.”

NYPD officials did not return calls for comment.

Murray Weiss, DNA Info

Health records of every NHS patient to be shared in vast database

The medical notes of every NHS patient are to be pooled in a vast database to improve research.

In the coming days every household in Britain will receive a letter advising them that, from April, their medical histories will be shared with researchers and pharmaceutical companies unless they opt out.

Today leading charities including Cancer Research UK, Diabetes UK and the British Heart Foundation launch a campaign to highlight the importance of allowing the notes to be accessed for the advancement of medical science.

They say the database will help them understand the causes of disease, spot side-effects to new drugs and detect outbreaks of infectious diseases.

Doctors have promised that patients will remain anonymous. But data protection campaigners have warned that individuals risk being identified and that the notes may be inaccurate.

Health professionals have admitted the ‘care.data’ database could be vulnerable to attacks from hackers and criminal networks, but say it is necessary to improve the health service.

In the 1950s health data played a major role in uncovering the link between smoking and lung cancer. More recently, the health data of children with autism born since 1979 in eight UK health districts helped scientists dismiss claims of a link between the MMR jab and autism.

Professor Peter Weissberg, Medical Director at the British Heart Foundation, said: “Locked inside our medical records is a mine of vital information that can help medical scientists make discoveries that can improve patient care and save lives.

“With the right safeguards in place to protect patient confidentiality this new system will be of enormous benefit to patients and help reduce the burden of heart disease in the future.

“I don’t think anyone would say there is not a risk. But you can walk into a hospital and pick up notes at any time. People don’t because they are of very limited use outside of medical research. The benefits are enormous and the risks are small.”

Professor Liam Smeeth of the London School of Hygiene and Tropical Medicine added:

“People’s concerns are understandable. Even if they have nothing that is embarrassing in their records they still may not want it to be seen by anyone other than their medics.

“I can’t guarantee there won’t be a single slip, but the risk is tiny. This is not about individual people.

“Nobody can sit here and guarantee that medical records are completely immune from criminal hacking. I can’t guarantee that these are 100 per cent free from criminal activity.

“But this is data that’s absolutely necessary to keep NHS up to date with the times. Not opting out is important.”

In the past experts have warned that patients may be deterred from being frank with their doctors if they feel their records will be shared elsewhere.

When a local scheme to share GP information was set up in Oxford low-income mothers refused to talk to GPs about post-natal depression because they were worried that it would get back to social services.

Dr Sarah Wollaston, a GP and Conservative MP, told a fringe meeting at the Tory Party Conference that medical records may contain errors, including other people’s notes filed by mistake, errors in recording and pejorative remarks.

Julia Manning of thinktank 2020health said: “Such is the secrecy surrounding our medical records, most of us have never seen them.

“Only occasionally are we made aware that this means they could have significant errors in them.

“Stories occasionally come to light in the press of people denied insurance payouts for a condition they didn’t know they had or for a condition that they didn’t have but their records say they did.

“People’s private medical information should not be uploaded to a national database until they are fully informed of process and confident their personal information is correct.”

The group medConfidential is encouraging patients to opt out.

Phil Booth of medConfidential said: “It’s no surprise NHS England has engaged charities to promote its new scheme, while playing down the non-medical, non-research organisations and companies outside the NHS which will also be given access.

“Research might be one of the more palatable uses for the deeply personal information that is to be taken, but it’s far from the only one.

“Much as researchers and others might want our health data, forcing GPs to upload patients’ details not only contravenes research ethics – riding roughshod over consent – it risks undermining the trust between doctor and patient essential to medical care.

“It’s everyone’s right to keep their family’s medical records confidential, but the only way you can do that now is to opt out. Opting out doesn’t stop you volunteering for properly-run research studies.”

However, Professor Sir John Tooke, President of the Academy of Medical Sciences, said the risks are low when compared with the benefits.

“For the majority of medical research projects the risk of disclosure of sensitive information is extremely low.

“On the other hand, the risks to public health of impeding such research are potentially very large.”

And many charities have agreed.

Sharmila Nebhrajani, Chief Executive of the Association of Medical Research Charities, said: “I believe people will be willing to make the public spirited act of sharing their medical records with researchers as long as they are confident that their data will be treated with care.”

Dr Harpal Kumar, Chief Executive of Cancer Research UK, said: “Advances in medical research rely on access to the records of patients. The UK is in a unique position because it has more comprehensive data than anywhere else in the world.”

Cancer survivor Richard Stephens said: “As someone who has survived two cancers, I have seen first hand how our health records can help improve people’s lives.

“I might not be alive today if researchers had not been able to access the data in the health records of other cancer patients to produce the most effective treatments and the best care for me, and by making my own records available to researchers I know I am helping other patients in the future.”

Sarah Knapton, Telegraph

A New Era of Privacy

In today’s high-tech world, our privacy concerns typically revolve around usernames, passwords and PINs. But one piece of personal data has been overlooked until recently: your genetic information.

Much like our usernames and passwords act as unique digital identifiers, our genomes represent unique biological identifiers, distinguishing us from others. Each of the cells in our bodies contains and protects information encoded in our DNA, collectively known as our genome.

The technology to sequence our genomes has existed for years now, most notably helping scientists complete the Human Genome Project in 2003. A major hindrance to the project was the time and money needed to sequence DNA.

Technological advances have allowed companies, including San Diego-based Illumina, to lower costs. As a result, genome sequencing is becoming a more mainstream tool for clinicians to use in diagnoses and therapies.

Genetic sequences can help determine whether you are likely to develop diseases like cancer, heart disease or Alzheimer’s. With that clue, clinicians have a better idea what treatments to recommend to patients.

Scientists and physicians agree that DNA sequencing will revolutionize “personalized medicine.” Rather than treating a type of disease, physicians will use genetic information to cater therapies to the individual, leading to more effective regimens, fewer detrimental side effects and lower healthcare costs.

While few disagree about the benefits of genomic sequencing in the medical field, the ethical issues it raises, as with many technological advances before it, are only beginning to be debated.

Because each person’s DNA is unique, privacy becomes a major issue. When a person’s DNA is used for sequencing, although clinicians are only interested in certain genes known as “markers” for a disease, they have access to a person’s entire genome – including genetic information outside the bounds of a particular disease. Who should have control and access to this information?

This is trickier than concealing passwords or usernames.

While we all have different genomic sequences, we are not all trained to understand and interpret the meaning of this information. For anyone to be able to use the information for research and development, we need to grant access to experts to examine and interpret this data.

Recently, the Food and Drug Administration ordered 23andMe, a personalized genomic sequencing company, to stop marketing its health-related genetic tests. The FDA decided that the interpretation of genetic data needed to be regulated before consumers could act on it as medical advice. 23andMe has since stopped marketing any health claims, but still provides genealogical data as well as personalized sequencing.

Some privacy concerns have already been addressed in the Genetic Information Nondiscrimination Act enacted by Congress. This protects people from discrimination based on genetic information for employment or health insurance reasons. The act protects equality while also raising awareness that genetic information is an important biological element that needs protection. As time passes, there are sure to be more regulations introduced that revolve around our genomes.

While most people’s instinct is to protect personal information, genetic sequences are vital to many scientific endeavors. Comparing genomes can help generate more predictive profiles of diseases to make better diagnoses. Genomic sequencing also helps in developing new drugs and more effective uses for existing drugs.

Our genetic information is what makes each of us truly unique. The desire to protect it is understandable. But to keep pace with society’s expectation for progress in science and medicine, we have to find a way to balance this need for privacy in a rapidly evolving field of study.

Christopher Abdullah, Voice of San Diego

Testing times for the consumer genetics revolution

With the highest-profile seller of $99 genetic tests under fire, an ethicist wonders whether public trust in personalised medicine will suffer

IT’S 2008. The New Yorker is chronicling a celebrity “spit party”, at which notables – nicknamed the “Spitterati” – eject saliva into tubes to find out their risk of developing illnesses such as diabetes, heart disease and cancer. The firm involved is 23andMe, a direct-to-consumer genetic testing company whose service was named Invention of the Year by Time magazine.

Fast-forward five years. 23andMe receives a demand from the US Food and Drug Administration (FDA) to stop selling its health-related tests pending scientific analysis. In a separate event, a Californian woman, Lisa Casey, files a $5 million class action lawsuit alleging false and misleading advertising. 23andMe suspends sales of its test, putting paid to its target of reaching 1 million customers by the end of 2013. Where did it all go wrong?

In November, after what the FDA describes as years of “diligently working to help [23andMe] comply with regulatory requirements”, the agency sent a scathing letter to the firm’s CEO Anne Wojcicki. It stated that 23andMe’s Personal Genome Service was marketed without approval and broke federal law, since six years after it began selling the kits, the firm still hasn’t proved that they work.

Doubts go back a long way. In the year of the spit party, the American Society for Clinical Oncology commissioned a report that concluded the partial type of analysis involved wasn’t clinically proven to be effective in cancer care. In 2010 the US Government Accountability Office concluded that “direct-to-consumer genetic tests [involve] misleading test results… further complicated by deceptive marketing”.

What 23andMe offered was a $99 test for 250 genetically linked conditions, based on a partial reading of single-nucleotide polymorphisms (SNPs). These are points where the genomes of different individuals vary by a single DNA base pair. There are some 3 billion base pairs in the human genome – this test targets only a fraction of them. Different companies sample different SNPs and so return different results for the same person.
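A toy sketch can make the discrepancy concrete. In the simplified model below (all SNP identifiers and odds ratios are invented for illustration, not taken from any company's actual panel), each company tests its own subset of SNPs and multiplies a per-allele odds ratio for each copy of the risk allele it finds, so the same genome yields different risk estimates:

```python
# Hypothetical illustration: two companies genotype different SNP panels
# and combine per-SNP odds ratios, so one person gets two different
# risk estimates. All rs IDs and odds ratios here are invented.

# One person's genotype at a handful of SNP positions
genome = {"rs001": "AG", "rs002": "TT", "rs003": "CC", "rs004": "GA"}

# Each company tests a different subset of SNPs and assigns its own
# (risk allele, per-allele odds ratio) for each one.
panels = {
    "CompanyA": {"rs001": ("A", 1.3), "rs002": ("T", 1.1)},
    "CompanyB": {"rs003": ("C", 0.9), "rs004": ("G", 1.4), "rs002": ("T", 1.2)},
}

def relative_risk(genome, panel):
    """Multiply the odds ratio once per copy of each risk allele tested."""
    risk = 1.0
    for snp, (risk_allele, odds) in panel.items():
        copies = genome.get(snp, "").count(risk_allele)
        risk *= odds ** copies
    return risk

for company, panel in panels.items():
    print(company, round(relative_risk(genome, panel), 3))
```

Because neither panel covers the same markers, and each vendor weights its markers differently, the two "risk scores" need not even agree on the direction of the effect.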

To illustrate this point, in his book Experimental Man, science writer David Ewing Duncan recalled how he received three conflicting assessments of heart attack risk from three different companies. The director of one, deCODEme – no longer offering such tests – telephoned him from Iceland to urge him to start taking cholesterol-lowering statins. Yet the other two tests – one from 23andMe, one from Navigenics, which no longer offers consumer tests – had rated him at medium or low risk. Given that some statins carry side effects such as muscle weakness, Duncan might have been ill-advised to follow deCODE’s urgent advice.

This is the root of the FDA’s concerns. In its letter to 23andMe, it raised the risk that customers could get false information that leads to drastic and misguided medical steps. Wojcicki now says: “We want to work with [the FDA], and we will work with them.” But is it too little, too late?

And what of the class action lawsuit, brought by Casey after buying a test? It focuses on the test’s accuracy but goes further, targeting what Casey’s attorney calls “a very thinly disguised way of getting people to pay [23andMe] to build a DNA database”.

By asking customers to fill in surveys about health and lifestyle, 23andMe has been creating a valuable “biobank” for patenting purposes and industry collaboration. The firm has always sought customer consent for use of identifiable data and hasn’t disguised its aim. “The long game here is not to make money selling kits, although the kits are essential to get the base level data,” says 23andMe board member Patrick Chung. “Once you have the data, [23andMe]… becomes the Google of personalised healthcare.”

Last June, this strategy culminated in a potentially lucrative genetic patent related to Parkinson’s disease. The company had offered its test free to people with the illness and might have expected praise. But an angry customer wrote: “I had assumed that 23andMe was against patenting genes. If I’d known you might go that route with my data, I’m not sure I would have answered any surveys.”

What impact will all this have on 23andMe’s brand strategy? The firm has tried to create a sense of solidarity, emphasising what it called “common interests, affinities and passions”. As the firm wrote on its blog in 2008: “Wikipedia, YouTube and MySpace have changed the world by empowering individuals to share information. We believe this same phenomenon can revolutionize healthcare.”

If customer trust is threatened, that won’t happen – even if the firm switches to sequencing the whole genome or exome (the protein-coding parts of the genome), avoiding the worst inaccuracies of SNP testing. Whole-genome sequencing has become cheaper, although it’s still out of reach of the mass market the firm needs to build the biobank. Earlier this year the company piloted a whole-exome service for $999.

Given its status as the poster child of mass-market genetic testing, do 23andMe’s travails affect personalised medicine more generally? In the year that it started operations, 2007, then-Senator Barack Obama introduced his Genomics and Personalized Medicine bill, remarking that “in no area of research is the promise greater than in personalised medicine”. Many advocates expected the shift to start at the popular, consumer level.

So while the most consciously populist genetic testing service wrestles with its critics in the months ahead, there is a growing danger that wider public acceptance of personalised medicine in the clinical setting may also suffer in the fallout from 23andMe’s woes.

Donna Dickenson, New Scientist

DNA evidence in Grim Sleeper case was taken legally, judge rules

A judge ruled Tuesday that DNA evidence that led to accused Grim Sleeper serial killer Lonnie Franklin Jr. was lawfully obtained from a pizza slice, cups and napkins seized by a police officer who posed as a restaurant busboy in 2010.

Appearing on the stand for the first time, Franklin testified that he was attending a birthday party July 5, 2010, at John’s Incredible Pizza Co. in Buena Park when the DNA samples were taken.

Two days later, he was arrested by Los Angeles police.

Defense attorneys Seymour Amster and Louisa Pensanti had argued in pretrial hearings that the officer cleared Franklin’s plates and utensils before he was finished, and that the samples were therefore taken illegally. The attorneys also argued that Franklin had a reasonable expectation that his plates would be thrown into a pile with others, which would make it impossible for police to definitively prove which remnants belonged to him.

“I felt it would be mixed with the rest of the trash,” Franklin said on the stand, drawing sighs of frustration from victims’ families in the courtroom.

Los Angeles County Superior Court Judge Kathleen Kennedy rejected the defense’s argument as “specious and ridiculous,” noting that no reasonable person would think about where their trash went if they didn’t think they were under police surveillance.

“If he were really concerned about such things, he would not eat or he would take his trash with him,” Kennedy said. “The fact of the matter is, the defendant ate his food like anybody else, and when it was taken away, it was abandoned … and he no longer had a reasonable expectation of privacy.”

Franklin, 61, is charged with killing 10 women over a 22-year period. He has been held without bail since his arrest. He is also charged with the premeditated attempted killing of an 11th victim.

The former LAPD garage attendant and city garbage collector targeted vulnerable women, including prostitutes and drug addicts, prosecutors allege. Seven of the killings occurred from 1985 to 1988, and the others from 2002 to 2007.

Franklin became known as the Grim Sleeper because of the decade that passed without any known killings. Police say Franklin has been linked to at least six other homicides, although he has not been charged in any of those deaths.

Frustrated by their inability to find the Grim Sleeper, whose DNA did not match samples in a law enforcement database, Los Angeles police in early 2010 asked the state to look for a DNA profile similar enough to be a possible relative of the killer.

State computers produced a list of 200 genetic profiles of people in the database who might be related to the serial killer. One of those profiles shared a common genetic marker with the DNA found at each of the 15 crime scenes.

The resulting pattern indicated a parent-child relationship. Knowing that the Grim Sleeper had to be a man, investigators tested the DNA of the 200 offenders whose profiles resembled the crime-scene DNA to determine whether any appeared to share the Y chromosome, which boys inherit from their fathers.

The test produced one match: the same profile as the match on the first test. Franklin’s son had been swabbed for DNA in 2009 after a felony weapons arrest.
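The two-stage search described above can be sketched in miniature. In this hedged toy example (marker names and allele values are invented, not real CODIS loci data), database profiles are first ranked by shared autosomal markers, and a candidate lead is then confirmed with Y-chromosome markers, which pass from father to son essentially unchanged:

```python
# A minimal sketch of a two-stage familial DNA search: rank database
# profiles by shared autosomal markers, then confirm a parent-child
# lead via matching Y-chromosome (Y-STR) markers. All marker names
# and values below are invented for illustration.

crime_scene = {"autosomal": {"D3": 15, "D8": 12, "D21": 29, "FGA": 22},
               "y_str": {"DYS19": 14, "DYS390": 24}}

database = {
    "offender_A": {"autosomal": {"D3": 15, "D8": 12, "D21": 30, "FGA": 22},
                   "y_str": {"DYS19": 14, "DYS390": 24}},
    "offender_B": {"autosomal": {"D3": 16, "D8": 11, "D21": 29, "FGA": 20},
                   "y_str": {"DYS19": 13, "DYS390": 25}},
}

def shared_autosomal(p, q):
    """Count autosomal markers where the two profiles agree."""
    return sum(p["autosomal"][m] == q["autosomal"][m] for m in p["autosomal"])

# Stage 1: candidates sharing enough autosomal markers to suggest kinship
candidates = [name for name, prof in database.items()
              if shared_autosomal(crime_scene, prof) >= 3]

# Stage 2: a matching Y-STR haplotype supports a father-son relationship
confirmed = [name for name in candidates
             if database[name]["y_str"] == crime_scene["y_str"]]

print(confirmed)
```

In the Grim Sleeper case the roles were reversed, with the son's database profile pointing to the father, but the filter-then-confirm logic is the same.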

Questioning on Tuesday was limited to Franklin’s felony convictions — grand theft auto, assault, battery, false imprisonment and receiving stolen property — and the time he spent at the pizza restaurant.

Franklin, who is scheduled to stand trial this summer, is due back in court Jan. 21 for another pretrial hearing.

Paresh Dave, LA Times

Nutritional supplement marketers to drop misleading disease claims

Improved safeguards of sensitive medical information are also being required

Two companies that market genetically customized nutritional supplements have agreed to settle charges of deceptive advertising that claimed the products treat diabetes, heart disease, arthritis, insomnia, and other ailments.

The proposed settlements with the Federal Trade Commission (FTC) also resolve charges that the companies engaged in lax information security practices.

GeneLink, Inc. and its former subsidiary, foru™ International Corp., used a network of individual affiliates to market nutritional supplements and a skincare product that were purportedly customized to each consumer’s unique genetic profile. The profile was said to have been based on an assessment of the DNA obtained from a cheek swab provided by the consumer. The supplements and skin repair serum each cost more than $100 per month.

Unsupported health claims

The FTC’s complaint alleges that GeneLink and foru violated the FTC Act by making false or unsupported health claims about their genetically customized products. Company-approved marketing materials included claims that the customized nutritional supplements could compensate for an individual’s genetic disadvantages, and that the customized skin repair serum’s effectiveness was scientifically proven. The companies also claimed through testimonials that the customized nutritional supplements could treat serious conditions such as diabetes, heart disease and insomnia.

According to the FTC, the companies also deceptively and unfairly claimed that they had taken reasonable and appropriate security measures to safeguard and maintain personal information collected from nearly 30,000 consumers. But the complaint maintains the companies failed to protect the security of personal information — including genetic information, Social Security numbers, bank account information, and credit card numbers; did not require service providers to have appropriate safeguards for consumers’ personal information; and failed to use readily available security measures to limit wireless access to their network.

“This case is about the consequences of making false claims,” said Jessica Rich, Director of the FTC’s Bureau of Consumer Protection. “It doesn’t matter whether the claims deal with the benefits of direct-to-consumer genetic testing or the privacy of personal information. It’s against the law to deceive people about your product and to make promises you don’t keep.”

Setting stricter standards

The proposed settlements prohibit the marketers from claiming that any drug, food, or cosmetic will treat, prevent, mitigate, or reduce the risk of any disease including diabetes, heart disease, arthritis, or insomnia — by modulating the effect of genes, or based on a consumer’s customized genetic assessment — unless the claim is true and supported by at least two adequate and well-controlled studies.

The orders also require that claims that a product effectively treats or prevents a disease in persons with a particular genetic variation must be backed up with randomized clinical trials conducted on subjects who have that genetic variation.

In addition, the companies may not make any other claims about the health benefits, performance, or efficacy of any drug, food, or cosmetic by modulating the effect of genes, or the consumer’s customized genetic assessment – unless the claim is true and based on competent and reliable scientific evidence.

The proposed orders also prohibit the marketers from misrepresenting scientific research regarding such drug, food, or cosmetic, or any genetic test or assessment. The orders also provide a safe harbor for advertising claims that have been approved by the FDA.

Under the proposed orders, GeneLink and foru also are prohibited from providing their affiliates, or any person or entity, with the means to make the prohibited health claims. The proposed settlements also require the companies to monitor claims their affiliates make on their behalf.

Finally, the proposed orders require the companies to establish and maintain comprehensive information security programs and submit to security audits by independent auditors every other year for 20 years. The proposed settlements also bar GeneLink and foru from misrepresenting their privacy and security practices.   

James Limbach, Consumer Affairs

Biometric Security Poses Huge Privacy Risks

Security through biology is an enticing idea. Since 2011, police departments across the U.S. have been scanning biometric data in the field using devices such as the Mobile Offender Recognition and Information System (MORIS), an iPhone attachment that checks fingerprints and iris scans. The FBI is currently building its Next Generation Identification database, which will contain fingerprints, palm prints, iris scans, voice data and photographs of faces. Before long, even your cell phone will be secured by information that resides in a distant biometric database.

Unfortunately, this shift to biometric-enabled security creates profound threats to commonly accepted notions of privacy and security. It makes possible privacy violations that would make the National Security Agency’s data sweeps seem superficial by comparison.

Biometrics could turn existing surveillance systems into something categorically new—something more powerful and much more invasive. Consider the so-called Domain Awareness System, a network of 3,000 surveillance cameras in New York City. Currently if someone commits a crime, cops can go back and review sections of video. Equip the system with facial-recognition technology, however, and the people behind the controls can actively track you throughout your daily life. “A person who lives and works in lower Manhattan would be under constant surveillance,” says Jennifer Lynch, an attorney at the Electronic Frontier Foundation, a nonprofit group. Face-in-a-crowd detection is a formidable technical problem, but researchers working on projects such as the Department of Homeland Security’s Biometric Optical Surveillance System (BOSS) are making rapid progress.

In addition, once your face, iris or DNA profile becomes a digital file, that file will be difficult to protect. As the recent NSA revelations have made clear, the boundary between commercial and government data is porous at best. Biometric identifiers could also be stolen. It’s easy to replace a swiped credit card, but good luck changing the patterns on your iris.

These days gathering biometric data generally requires the cooperation (or coercion) of the subject: for your iris to get into a database, you have to let someone take a close-up photograph of your eyeball. That will not be the case for long. Department of Defense–funded researchers at Carnegie Mellon University are perfecting a camera that can take rapid-fire, database-quality iris scans of every person in a crowd from a distance of 10 meters.

New technologies will also make it possible to extract far more information from the biometrics we are already collecting. While most law-enforcement DNA databases contain only snippets of the genome, agencies can keep the physical DNA samples in perpetuity, raising the question of what future genetic-analysis tools will be able to discern. “Once you have somebody’s DNA, you have all sorts of very personal info,” Lynch says. “There is a lot of fear that people are going to start testing samples to look for a link between genes and propensity for crime.”

Current law is not even remotely prepared to handle these developments. The legal status of most types of biometric data is unclear. No court has addressed whether law enforcement can collect biometric data without a person’s knowledge, and case law says nothing about facial recognition.

It is unfortunate that the only body capable of enacting broad and lasting protections against the misuse of biometric data is the U.S. Congress. Yet perhaps legislators can agree that the law needs to catch up with technology. If so, they should start with principles that Lynch and the Electronic Frontier Foundation have proposed. Among other things, such legislation should limit the amount and type of data that the government can store and where they can be stored. It should restrict the collation of different types of biometric data into a single database. And it should certainly require that all biometric data be stored in the most secure manner possible.

Identity theft, fraud and terrorism are real problems. Used properly, biometrics could help protect against them. But the potential for misuse is glaringly obvious. We must begin setting rules to govern the use of these technologies now.

Editors, Scientific American