
Creepy Ads Use Litterbugs’ DNA to Shame Them Publicly

Next time you’re about to toss a cigarette butt on the ground, consider this freaky fact: It takes less than a nanogram (less than one billionth of the mass of a penny) of your dried saliva for scientists to construct a digital portrait that bears an uncanny resemblance to your very own face. For proof, look to Hong Kong, where a recent ad campaign takes advantage of phenotyping, the prediction of physical appearance based on bits of DNA, to publicly shame people who have littered.

If you walk around the city, you’ll notice portraits of people who look both scarily realistic and yet totally fake. These techno-futuristic most-wanted signs are the work of ad agency Ogilvy for nonprofit Hong Kong Cleanup, which is attempting to curb Hong Kong’s trash problem with the threat of high-tech scarlet lettering. It’s an awful lot like the Stranger Visions project from artist Heather Dewey-Hagborg, who used a similar technique a couple years back to construct sculptural faces as a way to provoke conversation around what we should be using these biological tools for.

In the case of Hong Kong’s Face of Litter campaign, the creative team partnered with Parabon Nanolabs, a company out of Virginia that has developed a method to construct digital portraits from small traces of DNA. Parabon began developing this technology more than five years ago in tandem with the Department of Defense, mostly to use as a tool in criminal investigations.

Parabon’s technique draws on the growing wealth of information we have about the human genome. By analyzing saliva or blood, the company is able to make an educated prediction of what you might look like. Most forensic work uses DNA to create a fingerprint, or a series of data points that will give a two-dimensional look at an individual that can be matched to pre-existing DNA samples. “We’re interested in using DNA as a blueprint,” explains Steven Armentrout, founder of Parabon. “We read the genetic code.”

The DNA found on the Hong Kong trash is taken to a genotyping lab, where a massive data set on the litterbug is produced. This data, when processed with Parabon’s machine-learning algorithms, begins to form a rough snapshot of certain phenotypes, or traits. Parabon focuses on what it describes as highly heritable traits—or traits that have the least amount of environmental variability involved. Things like eye color, hair color, skin color, freckling, and face shape are far easier to determine than height, age, and even hair morphology (straight, wavy, or curly).
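Parabon’s actual models are proprietary, but the general idea of mapping highly heritable markers to a trait prediction can be sketched in a few lines of Python. The SNP rs12913832 in HERC2 genuinely is a strong eye-color predictor; everything else here, including the weights, thresholds, and the other SNP choices, is invented for illustration and is not Parabon’s method.

```python
# Toy sketch of one step of DNA phenotyping: turning SNP genotypes into a
# categorical trait prediction via an additive score. Weights and thresholds
# are illustrative only, not a real forensic model.

SNP_WEIGHTS = {
    # SNP id: weight per copy of the trait-associated allele (invented values)
    "rs12913832": 2.0,   # HERC2, a real, strongly eye-color-associated SNP
    "rs1800407": 0.5,    # OCA2 (weight invented)
    "rs16891982": 0.3,   # SLC45A2 (weight invented)
}

def predict_eye_colour(genotype: dict) -> str:
    """genotype maps SNP id -> count (0, 1, or 2) of the associated allele."""
    score = sum(SNP_WEIGHTS[snp] * genotype.get(snp, 0) for snp in SNP_WEIGHTS)
    # Threshold the additive score into a categorical phenotype.
    if score >= 4.0:
        return "blue"
    elif score >= 2.0:
        return "intermediate"
    return "brown"

# Two copies of the HERC2 allele plus minor contributions push the score high.
print(predict_eye_colour({"rs12913832": 2, "rs1800407": 1, "rs16891982": 1}))
```

Real phenotyping systems train machine-learning models on thousands of genotyped, photographed subjects and predict continuous traits (face shape, pigmentation) rather than three crude buckets, which is why, as the article notes, highly heritable traits work far better than environmentally influenced ones.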

The Ogilvy team says it accounted for age by studying market research on the types of litter it processed. For example, people ages 18-34 are more likely to chew gum, so any gum samples were automatically given an average age in that range. Portraits of cigarette litterers, more common among the 45-plus group, were depicted as slightly older.

It’s an imperfect science in some regards, and yet, the capabilities are astounding—and more than a little scary. Ogilvy says it received permission from every person whose trash it picked up, so in that way, it’s not a true case of unsolicited public shaming. And Parabon itself says its services are only available for criminal investigations (and, apparently, ad campaigns). But the message is still chilling. A project like The Face of Litter should serve as a provocation to talk critically about privacy, consent, and ethics surrounding the unsanctioned appropriation of someone’s DNA. So for now, the next time you drop that empty bag of Doritos onto the ground, you’re in the clear. But in the future? Just know it’s totally possible that you might be seeing your likeness plastered onto the subway walls.

Liz Stinson, Wired

Newborn screening collides with privacy fears

The wrinkled heel of nearly every baby in the United States is pricked at birth, and a few drops of blood are dabbed on filter paper and shipped off for analysis. Started in the 1960s, this newborn screening program tests for more than 30 rare and serious diseases that are treatable if caught early in life. Now, many public health experts who help run or advise the program are worried about what the future holds.

A new law shaped by a coalition of privacy advocates and conservative politicians requires consent for federally funded research on newborn blood spots, which include DNA but no names. Seeking consent sounds innocuous, even welcome. But experts are concerned that the law, which took effect in March, could hamstring not just fundamental research but also the kind of studies that routinely improve screening. Efforts to improve newborn testing often require studies on hundreds of thousands of stored blood samples; seeking consent for each one would be prohibitive and impractical. When California researchers sought informed consent to test a cutting-edge screening technology on blood spots from 400,000 newborns, for example, overworked hospital staff did not contact nearly half of eligible families, hampering the study.

“Do you want genetic privacy at the expense of everything else?” asks David Orren, chief legal counsel of the Minnesota Department of Health in St. Paul.

When it began lumbering through Congress, the Newborn Screening Saves Lives Reauthorization Act of 2014 was unremarkable; it simply updated an expiring 2007 law that provided federal support for state-run newborn screening programs. In early 2014, the bill passed in the Senate—unanimously, and “in about 30 seconds,” says Cynthia Pellegrini of the March of Dimes in Washington, D.C., who advocated for the bill.

The controversy began a few days before the House of Representatives voted on the bill last June, when a nurse named Twila Brase, who runs the Citizens’ Council for Health Freedom, a nonprofit in St. Paul that presses for medical privacy, reached out to the office of Michele Bachmann, a tea party icon whose district included the northern suburbs of Minneapolis-St. Paul until she retired from Congress earlier this year. Brase, who also opposes the federal mandate for electronic health records and the Affordable Care Act, had been fighting storage and research on newborn blood spots for years.

Brase’s contact had its desired effect: When the bill reached the House floor, Bachmann delivered an emotional speech. “This legislation presumes that every parent of every newborn in the United States of America pre-agrees that the government can have their baby’s blood sample, which contains their DNA code,” she said. “Americans should not see the death of privacy, especially of the most sensitive private information that every American can have.”

Bachmann’s speech came too late to affect the House vote. The bill passed. But because legislators had added some minor tweaks to the language before voting, the bill had to return to the Senate, so that the two chambers were passing identical text. That gave time for Bachmann’s qualms to catch the attention of members of the conservative Senate Steering Committee, including Rand Paul (R–KY) and Patrick Toomey (R–PA). They sought input from her, as well as from officials from the March of Dimes, the National Institutes of Health (NIH), and other research and advocacy groups. After much discussion, the senators settled on the clause mandating informed consent when newborn blood spots were used in federally funded research. It passed both chambers and was signed into law by President Barack Obama a week before Christmas.

At the crux of scientists’ and public health advocates’ concerns is what fits under the umbrella of “research,” which federal regulations define as investigations that “develop or contribute to generalizable knowledge.” Does testing a new screening technology qualify as research? What about studies of a test for a disease not currently on a screening panel, to determine whether it should be added? “There are public health functions that are mixed up with” what might be considered “pure” research, says Logan Spector, an epidemiologist at the University of Minnesota, Twin Cities. And some research that seems unrelated to newborn screening might not be: Probing leukemia’s origins, as researchers studying blood spots have done, could also represent nascent steps toward a test for leukemia risk.

Jeffrey Botkin, a pediatrician and bioethicist at the University of Utah in Salt Lake City, who is part of a federal advisory panel on newborn screening, worries about the impact of mandating informed consent. But he’s sympathetic to its appeal. “It’s good to be the subject of much more public dialogue and scrutiny,” Botkin says. Many acknowledge that screening programs could do a far better job of educating parents and doctors, ideally before a baby’s birth rather than in the distracted hours afterward.

The Office for Human Research Protections is drafting guidelines on the law and plans to define what qualifies as research. In the meantime, scientists and state health departments are trying to anticipate the law’s effects. “We’ve essentially frozen” our repository, says Michael Watson, the executive director of the American College of Medical Genetics and Genomics in Bethesda, Maryland, which runs a virtual bank of dried blood spots. None of the four participating states plans to provide information from blood spots collected after March, when the law took effect. A pilot study to develop a test for detecting Duchenne muscular dystrophy “has been slowed down tremendously,” Watson says.

There’s also a big question about whether the law is an early jolt of a larger seismic shift in how deidentified samples are handled. Until now, studying such samples, which carry no names or addresses and are not linked to an individual’s health records, hasn’t required informed consent. But in January, NIH began expecting grantees on genomic research to seek consent before using deidentified samples. The newborn screening law is turning that recommendation into a national requirement, at least for blood spots. (A handful of states already mandate consent.) Other samples, like tumor tissue or deidentified blood samples from adults, could be next. The Department of Health and Human Services is rewriting its “Common Rule” governing human subject research. An upcoming draft will reveal whether it wants consent for all deidentified samples. Once those regulations are finalized, perhaps within a couple of years, the newborn screening requirement for consent will be subsumed by the Common Rule.

The Common Rule is Brase’s next frontier. She plans to comment on the proposed draft rules when they’re released, to urge that all deidentified samples be subject to informed consent before scientists can access them. “When researchers decide we’re theirs, that sets people up to oppose what’s happening in research,” she says.

Jennifer Couzin-Frankel, Science Magazine

Privacy Questions Plague the 100,000 Genomes Project

With the UK launch of 23andMe’s home DNA testing kit, the legalisation of mitochondrial DNA transfer, and the 100,000 Genome Project underway, optimism abounds about the science of genetics delivering on its early promise. But there is also cause for greater caution and oversight than some biotechnology enthusiasts would like to admit.

These developments are taking place with insufficient regard for their social and ethical implications. We can, however, be sure that the policies and regulatory frameworks currently being enacted will shape future decisions. A balance needs to be struck between the pro-R&D agenda that is driving a permissive regulatory regime, and sensitivity towards concerns about genetic technologies that public consultation, if properly conducted, can alert us to. It is in the public interest that there is wider discussion of the full implications of recent developments.

The 100,000 Genome Project is a case in point. The collection and sequencing of 100,000 individuals’ genetic information is intended to constitute the first phase of what will be a national genomic database. The ‘50 million Genome Project’ will include the genomes and clinical data of all NHS patients in England and Wales. We submitted a Freedom of Information request to the Department of Health (DoH) to clarify the way in which data from those participating in the 100,000 Genome Project would be shared with third parties. In line with prior public announcements, and what Genomics England claims on its website, the DoH initially told us that both clinical information and “genomic data files from the 100,000 Genome project to which academics, researchers and industry members will have access will be anonymous”.

However, following further correspondence the DoH admitted that data made available to third parties, including commercial entities, would not, in fact, be anonymised but rather “pseudonymised”. This has profound implications. Anonymised data is stripped of anything that would permit the identification of the individual in question from the data. Pseudonymised information is quite different, as it provides information on – in the DoH’s own words – “age or age range” and “wider geographical information”. The information made available to third parties includes clinical data pertaining to an individual’s medical history, potentially spanning decades. With such information as age/age range and geographical location – combined with the wealth of ‘big data’ held on databases and available online – it may then be possible to identify those participating in the 100,000 Genome Project.
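The risk described here is a classic linkage attack: join the “pseudonymised” release to publicly available data on the quasi-identifiers it leaks (age range and region). A minimal sketch, with wholly invented names and records that have nothing to do with the actual 100,000 Genome Project data:

```python
# Sketch of a linkage attack on pseudonymised records. All data is invented
# for illustration; the point is that quasi-identifiers can single people out.

pseudonymised = [
    {"id": "P001", "age_range": "30-39", "region": "Cornwall", "condition": "X"},
    {"id": "P002", "age_range": "30-39", "region": "London",   "condition": "Y"},
]

# Publicly available information (electoral rolls, social media, press reports)
public = [
    {"name": "Alice", "age": 34, "region": "Cornwall"},
    {"name": "Bob",   "age": 36, "region": "London"},
    {"name": "Carol", "age": 38, "region": "London"},
]

def in_range(age, age_range):
    lo, hi = map(int, age_range.split("-"))
    return lo <= age <= hi

def candidates(record):
    """People in the public dataset matching the record's quasi-identifiers."""
    return [p["name"] for p in public
            if p["region"] == record["region"]
            and in_range(p["age"], record["age_range"])]

for rec in pseudonymised:
    names = candidates(rec)
    if len(names) == 1:
        # Quasi-identifiers single out one person: re-identification succeeds.
        print(f"{rec['id']} is almost certainly {names[0]}, condition {rec['condition']}")
```

Here only P001 is re-identified, because exactly one public record matches its region and age range; P002 has two matching candidates and stays ambiguous. A long, detailed clinical history acts as many more such quasi-identifiers at once, which is why pseudonymisation is a much weaker guarantee than true anonymisation.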

The DoH’s justification for this sleight of hand defies belief. It is stated that in “public access documents the term ‘anonymisation’ has been used because the term ‘pseudonymisation’ is not widely understood. It is planned that a footnote clarifying the terminology will be added to communication material”. Needless to say, this is wholly inadequate: ‘anonymisation’ and ‘pseudonymisation’ are not synonyms, and to pretend otherwise – with only a ‘planned’ footnote to explain as much – directly contradicts the principle of transparency.

Since the DoH admits that “public access documents” state that all data will be anonymised when the format is in fact pseudonymisation, the question arises as to whether those participating in the 100,000 Genome Project have in fact given informed consent for their data to be used in this way. We asked the DoH to specify whether the model of consent employed meant that participants would be made aware of the possible uses of their clinical or genomic data. The DoH admitted that “it is impossible to inform patients at the outset of the potential ways in which their genome might be used”. Furthermore, at “the time of consent participants cannot know every potential exact use of their data.” The full explanation for what appears to be deliberate obfuscation was that, as a “longitudinal project aiming to stay at the forefront of genomic research[,] it is impossible to inform patients at the outset of the potential ways in which their genome might be used.”

‘Genomic research’ is extraordinarily vague and provides little guidance as to what whole-sequenced genomes might be used for. It is little consolation that the data can only be used in line with the Data Access and Acceptable Uses Policy; however, this policy has not yet been made available to the public.

Finally there is the question as to what extent clinicians will be party to such information. The DoH admitted to us that those involved in administering care “will be able to access their patient’s data, including the raw genome data which, through its nature, requires them to have access to identifiable data”. Once more, the lack of oversight is notable. Clinicians accessing such data will merely be “expected to work within their remit and to abide by the ethical code of conduct for their profession”. There is no reference to how this is to be enforced, if indeed at all.

These revelations raise the issue of trust. Clinicians and other interested parties will have access to sensitive information such as whether or not someone has a predisposition to particular illnesses. Looming over this, however, is the question of trust in the government to hold and use genetic information responsibly. This has to be viewed in the context of the impending expansion of the 100,000 Genome Project into a national genomic database and the integration of personalised medicine in mainstream healthcare, from 2017. It is thus imperative that there is widespread public debate about the acceptable uses of whole-sequenced genomes, and that all developments in this area should be subject to rigorous oversight.

Thus far the 100,000 Genome Project has failed to live up to the required standards of transparency. This is by no means an isolated phenomenon in the UK’s approach to the governance of personal, and often highly sensitive, data. The disparity between democratic accountability – often taking the form of superficial public engagement exercises – and the powerful lobbying mechanisms at the disposal of vested interests needs to be redressed. Only through greatly increased openness from the government can we foster meaningful debate about the risks and implications of this new frontier in medicine.

Edward Hockings and Lewis Coyne, Guardian

Protecting our children’s DNA

Before they are more than a couple of days old, 98 percent of the roughly 4 million babies born in the U.S. each year have a small sample of blood taken and screened for a variety of inherited conditions. Caught early, many of these conditions can be successfully treated, preventing death and disability.

Newborn screening is one of the great public health success stories in this country, but what happens to the samples after the screening process is completed raises serious and troubling questions of consent and privacy.

Newborn screening is the only widespread health testing in the U.S. conducted not by an individual’s doctor, hospital, or health care provider but by individual state departments of public health. It’s these state agencies that often continue to store these biological samples long after the screening process is over. Indeed, 19 states store the biological samples of newborns for more than two years.

In the case of California and a handful of other states, these samples are indefinitely stored in state repositories and made available to researchers — for a fee. If there is one commonality among state newborn screening practices, it’s the complete lack of transparency of the entire process.

Most parents are poorly informed about screening programs; having just had a baby and still being in the hospital, they often don’t see any written materials, and such programs are rarely explained in person.

Nevertheless, parents in California and most states are assumed to have consented to long-term storage and third-party use of their child’s biological sample unless they explicitly refuse in writing. Parents, understandably, want to be actively involved in decision-making regarding their children’s personal health information. That choice is currently being denied.

The concern of parents that states retain their children’s biological information is heightened because storage procedures and security at these state facilities are arcane and we still have few laws that truly protect the privacy of genetic information. We are at a critical time in the development of medicine: the mapping of the human genome has provided powerful new tools to understand the genetic basis of disease and genetic tests can help diagnose genetic conditions, guide treatment decisions, help predict risk of future disease, inform reproductive decision-making and assist medication selection.

Californians are enthusiastic about the promise of genetic medicine, but they are understandably fearful about how this powerful information can be abused. The sheer amount of genetic data being generated today, and its commercialization, raises serious medical privacy concerns. Many individuals are legitimately concerned that their genetic information will be used against them and are unwilling to participate in medical research or be tested clinically, even when they are at risk for serious disease.

The government has not classified the collection and use of newborn screening data as research, and it’s unclear whether the Common Rule, which requires informed consent for human subject research, would apply. This lack of clarity leaves newborn data ripe for misuse.

Consent not only allows parents to make choices about their child’s genetic privacy but is crucial to promoting greater governmental transparency. Such transparency is especially important because newborn screening and storage is often exempted from state genetic privacy laws. Researchers and administrators working with these samples know very well how alarming newborn blood spot biobanking can sound to most people, which explains why many of these clinicians, researchers and state labs would prefer newborn screening practices keep a low profile. That desire shouldn’t trump the public’s interest. Moreover, there’s no evidence to indicate that better consent and privacy policies would undermine the actual benefits of these biobanks.

Newborn screening is one of the few forms of genetic testing to which almost everyone is exposed. Yet both parental and general public knowledge of newborn screening and storage practices is extremely limited.

Assemblyman Mike Gatto, D-Glendale, has recently introduced a bill (AB 170) to address some of these concerns. It requires the state Department of Public Health to do a better job of informing Californians about the state’s newborn sample storage policies.

The bill also offers parents, and children when they reach adulthood, more control over the decision-making process regarding the retention and use of these samples.

California must revise its approach to long-term storage and use of newborn DNA samples, and include parents in the decision-making process. With no overall governing privacy framework to ensure individual privacy and control over one’s own information, a public debate around newborn screening protections can’t happen soon enough.

Jeremy Gruber is president and executive director of the Council for Responsible Genetics, a public policy organization, U-T San Diego

Government DNA collection under microscope in California

In 2015, genes have many uses.

Soon after every baby in California is born, a hospital worker extracts and logs its genetic information. It will be tested for diseases and then stashed permanently in a warehouse containing a generation of Californians’ DNA.

For those charged with a felony – or, potentially, just arrested – a sliver of genetic code will be taken and placed in a state database that has grown rapidly in the last decade.

As scientists have mapped the personalized blueprints contained in each strand of DNA, the government has been collecting and storing reams of genetic material to combat disease and capture criminals. In seeking to shape when public agencies can take genetic information and how they can use it, lawmakers face a tension between individual privacy and public health and safety.

“You want to make sure government isn’t collecting too much DNA, but you also recognize it is the modern fingerprint,” said Assemblyman Mike Gatto, D-Los Angeles, though he differentiated genetics from fingerprints: “You’re taking the very stuff of life.”

It begins with a prick to the heel. Blood from every baby born in California is screened for diseases such as sickle cell anemia and severe combined immunodeficiency. Every state has a similar program. Where California differs is its policy of storing dried blood on cards indefinitely and, for a fee, loaning them out for research.

The advantages of immediately identifying and treating diseases are indisputable. What makes Gatto and privacy advocates nervous is the knowledge that the government can hold on to that information and share it without consent. Security concerns intensify those fears.

“I think it’s only a matter of time before there’s a high-profile hack, and then somebody would have access to your data,” Gatto said. “As we increasingly discover genes for everything from alcoholism to a propensity for violence, someone could interfere with your ability to get a job by saying, well, that person has the alcohol propensity gene.”

Gatto has a pair of bills that would allow parents to have their babies’ samples destroyed, and dictate when police officers can glean DNA. With the support of district attorneys, Assemblyman Jim Cooper, D-Elk Grove, has a bill allowing DNA collection from people convicted of certain misdemeanors.

While the Department of Public Health emphasizes that the infants’ information is kept anonymous and never assembled into a full genetic profile, skeptics point to a series of studies in which researchers identified supposedly anonymous donors to public genetics databases.

“DNA is a strong identifier of a person, and there is always a theoretical possibility of identifying someone,” Yaniv Erlich, an assistant professor of computer science at Columbia University, wrote in an email. He added that California “mitigates the risk of harm” by not storing some details and penalizing unauthorized release of any data.

Critics also question who can get their hands on samples. Texas shared newborn samples with a military laboratory hoping to enhance its forensic capabilities, alarming those who said the data should be used strictly for medical research.

“As we build out criminal DNA databases in California and nationwide,” asked Jennifer Lynch, a senior staff attorney at the Electronic Frontier Foundation, “are we going to get to the point where law enforcement says, ‘Well, we have this giant repository with the information of everyone born in California in the last 30 years, and that’s a huge treasure trove’?”

Just as the newborn database’s benefits are firmly established, DNA has become indispensable for law enforcement.

Sacramento District Attorney Anne Marie Schubert called forensic DNA “the greatest tool ever given to law enforcement to find the guilty and to exonerate the innocent.” Since Proposition 69 in 2004 empowered law enforcement to sweep up samples from anyone arrested for a felony, the number of people in a Department of Justice database has grown substantially.

“That changed how we deal with DNA in this world,” Schubert said.

But the program is in dispute. The California Supreme Court will soon take up a case challenging DNA collection from people who have not been charged or convicted. Privacy advocates warn about overly broad data collection that ensnares the innocent and the guilty alike.

“Once you start collecting DNA before a person is even convicted of a crime, you’ve started down a road where you’ve erased any balance between the legitimate needs of law enforcement and individual rights,” said Jeremy Gruber, president of the Council for Responsible Genetics.

Speaking from years of experience in the Sacramento County Sheriff’s Department, Cooper said such fears are unfounded. He argued that most Californians will never enter the database.

“If you’re not out committing homicides or sex crimes, your DNA’s never going to pop up,” Cooper said. “So I think if you get arrested and you’re involved in this, there’s a certain right that you lose.”

Gatto’s bill would create new rules if the California Supreme Court strikes down Proposition 69’s mandatory collection provisions. It would allow DNA collection only after a probable cause hearing and would automatically erase from the statewide database the profiles of people who are not convicted.

“People who are innocent of crimes, they should have the right to have their genetic information be as private as they want it,” Gatto said.

The case follows the U.S. Supreme Court’s 2013 ruling that swabbing the DNA of someone arrested on probable cause was legal, akin to taking fingerprints. Justice Antonin Scalia warned in a vehement dissent of the long-term consequences.

“As an entirely predictable consequence of today’s decision,” Scalia wrote, “your DNA can be taken and entered into a national DNA database if you are ever arrested, rightly or wrongly, and for whatever reason.”

While imposing limits on collection in some areas, Gatto’s bill enables DNA collection after misdemeanor convictions that would disqualify someone from owning a firearm.

Similarly, Cooper’s bill would have law enforcement collect samples not just from felony offenders but from people who are convicted of misdemeanors, such as fraud or drug possession, that were collection-triggering felonies until voters reduced sentences by passing Proposition 47 in 2014.

The change has dammed the flow of DNA into the state database, prosecutors say, in the process reducing their ability to make connections to previous crimes and find case-cracking leads. They note that repeat offenders tend to have long records: If a person’s DNA is already logged because of a less serious offense, investigators can identify that person when he breaks the law again.

“Whether it’s theft, possession of drugs, we’ve been able to tie them back to some of the most heinous crimes,” San Bernardino County District Attorney Mike Ramos said at an event announcing Cooper’s bill.

None of this is abstract for Gatto. His father was shot dead by a home intruder last year. As Gatto awaits a break in the case, he is working to regulate the very technology that could bring his father justice.

“For families like mine, who are waiting for breakthroughs that can be caused by DNA and similar evidence, it can be a very long, painful wait if these technologies are prohibited by the courts,” Gatto said. “The right balance is what’s critical here.”

Jeremy B. White, Sacramento Bee

Fire at a Brooklyn Warehouse Puts Private Lives on Display

No lives were lost in the huge fire that gutted a storage building on the Brooklyn waterfront over the weekend. But the flames put plenty of lives on display as the crumpling warehouse belched up its contents: decades’ worth of charred medical records, court transcripts, lawyers’ letters, sonograms, bank checks and more.

“They’re like treasure maps, but with people’s personal information all over them,” Spencer Bergen, 24, said of the half-charred scraps that he said he had seen strewn around the Williamsburg neighborhood as far inland as Berry Street, several blocks from the warehouse.

New York City sent disaster recovery contractors, equipped with nets, shovels and protective boots, to try to collect the debris. But still, beachcombers sifted freely through the trove of documents, picking their way through remnants of the days when many records were on paper and the city government was one of the few takers for north Brooklyn’s waterfront land.

Compared with the large — and increasingly commonplace — online breaches of personal information at corporations like Home Depot, Target and Sony, the potential damage from stray scraps of paper may seem slight. Still, a glance at a rocky jetty just south of the warehouse revealed a scattering of records stamped “confidential,” a health insurance form with a person’s Social Security number, a urinalysis report complete with a patient’s name and copies of checks featuring bank account numbers.

“If you wanted to steal an identity, I’m sure if you looked at that piece of paper, you’d find a medical record,” said Sherry Hanson, 50, one of the many curious onlookers who clambered down the rocks at the edge of Bushwick Inlet Park to get a closer look at the heaps of paper on Sunday.

Among the government agencies that said they had housed records in the CitiStorage warehouse at 5 North 11th Street were the state court system, and the city’s Administration for Children’s Services and the Health and Hospitals Corporation. Several local hospitals had stored medical records there as well. CitiStorage said the building, with six million cubic feet of storage, also held documents from law and financial services firms.

Reached on Sunday, the hospitals and city agencies sought to play down the possibility that reams of sensitive information had been thrown to the wind. At the same time, however, they said it was too early to know what types of documents had been lost.

The warehouse disgorged so many papers that they clogged the water-intake system of one of the fireboats aiming high-powered jets of water into the smoldering, ice-covered building, trying to smother flames that were still flaring up on Sunday. The current carried more papers to shore, luring people who paged through some documents, photographed others and kept more than a few as souvenirs.

“What if this was all diaries, instead of personal information? Love letters?” mused Loretta Rae, 38, who lives nearby. “If it was diaries,” she joked, “I’d definitely be down there reading it.”

Munirih Quinlan, 29, who works at a hospital, examined slides of what appeared to be an X-ray that had landed on a rock.

“This is crazy,” she said, recalling her training in recognizing Medicare fraud stemming from identity theft. “If you post anything,” she advised others, “make sure it doesn’t have people’s personal information on it.”

The city learned firsthand the dangers of storing important documents in waterfront buildings when storm surges from Hurricane Sandy ravaged two Police Department storage facilities in Red Hook and Greenpoint in October 2012. The department is still struggling to determine the extent of the damage to the Greenpoint building, which contained thousands of pieces of evidence.

Despite plans to move evidence away from the vulnerable Brooklyn waterfront buildings, however, the department has yet to do so, and the blaze over the weekend raised questions about how slowly the city was digitizing or otherwise protecting its records.

What types of records were stored in the CitiStorage warehouse or how many were damaged or dispersed remained a matter of confusion on Sunday evening. The state court system and the Administration for Children’s Services said they had been in the process of removing files from the building, making it unclear what still remained there, while the Health and Hospitals Corporation said it kept vital patient records in electronic form and that its operations would be unaffected.

Some members of the Greater New York Hospital Association — which includes Mount Sinai Health System, NewYork-Presbyterian Hospital, North Shore-Long Island Jewish Health System and NYU Langone Medical Center — kept records at CitiStorage, said Brian Conway, a spokesman for the association, but it was not clear which, if any, were involved.

About the possibility that confidential patient information might have been disclosed on a large scale as the wind scattered unburned records, Mr. Conway said, “There’s no reason to believe that’s a possibility.”

Yet in one indication of the city’s concern, the disaster recovery contractors, in their neon yellow jackets, sealed off the entrance to the rocky jetty with yellow caution tape early Sunday and began to scoop documents out of the water with nets and shovels.

“We’re just here to clean up the debris,” said one of the workers, adding that he did not have permission to explain further.

At a news conference on Sunday afternoon, the fire commissioner, Daniel A. Nigro, said the blaze was expected to continue smoldering for days as the paper inside continued to feed the flames.

The fire, which reached seven alarms, began around 6:20 a.m. on Saturday. But firefighters had also been called there two hours earlier for a smaller fire in the same location, which they found had been contained by the building’s sprinkler system. The firefighters then shut down the sprinklers to prevent further water damage to the paper records, and because sprinkler heads must be replaced after discharging water.

By the time the second emergency call came in, the sprinklers were offline, and the blaze was already large enough to draw scores of firefighters.

“It’s a building full of fuel,” Mr. Nigro said. “Once it got started, it was difficult to extinguish, especially under the extraordinarily rough conditions for the firefighters, with the extreme cold and strong winds.”

He said the department had interviewed three warehouse employees, but investigators had not been able to enter the building and were not close to determining the fire’s cause. Marshals were investigating whether the first fire had rekindled or a second fire started independently, and whether the fire had been deliberately set or sparked accidentally.

In Williamsburg, where luxury high-rises have rapidly replaced the old factories and warehouses and residents fear the 11-acre site where the CitiStorage building sits is next, it was not hard to find people who believed the fire’s cause was obvious.

Less than two blocks downwind from the smoldering waterfront, the cafe MatchaBar, on Wythe Avenue, reopened on Sunday; the acrid, ashy smoke had kept it closed the day before. Among the artists, musicians and writers gathered there was Lisa Markuson, 28, a blue-haired poet, who perched by the window with a Smith Corona typewriter, offering free haikus to customers.

Her ode to the fire:

we’re all pretty sure

that this was no accident

smoke clouds our vision

Vivian Yee, NY Times

Privacy is Dead, Invasive Technology is Here to Stay

Imagine a world where mosquito-sized robots fly around stealing samples of your DNA. Or where a department store knows from your buying habits that you’re pregnant even before your family does.

That is the terrifying dystopian world portrayed by a group of Harvard professors at the World Economic Forum in Davos on Thursday, where the assembled elite heard that the notion of individual privacy is effectively dead.

“Welcome to today. We’re already in that world,” said Margo Seltzer, a professor in computer science at Harvard University.

“Privacy as we knew it in the past is no longer feasible… How we conventionally think of privacy is dead,” she added.

Another Harvard researcher, who studies genetics, said it was “inevitable” that one’s personal genetic information would enter more and more into the public sphere.

Sophia Roosth said intelligence agents were already asked to collect genetic information on foreign leaders to determine things like susceptibility to disease and life expectancy.

“We are at the dawn of the age of genetic McCarthyism,” she said, referring to witch-hunts against Communists in 1950s America.

What’s more, Seltzer imagined a world in which tiny robot drones the size of mosquitoes flew around, extracting a sample of your DNA for analysis by, say, the government or an insurance firm.

Invasions of privacy are “going to become more pervasive,” she predicted.

“It’s not whether this is going to happen, it’s already happening… We live in a surveillance state today.”

Political scientist Joseph Nye tackled the controversial subject of encrypted communications and the idea of regulating to ensure governments can always see even encrypted messages in the interests of national security.

“Governments are talking about putting in back doors for communication so that terrorists can’t communicate without being spied on. The problem is that if governments can do that, so can the bad guys,” Nye told the forum.

“Are you more worried about big brother or your nasty little cousin?”

However, despite the pessimistic Orwellian vision, the academics were at pains to stress that the positive aspects of technology still far outweigh the restrictions on privacy they entail.

In the same way we can send tiny drones to spy on people, we can send the same machine into an Ebola ward to “zap the germs,” Seltzer said.

“The technology is there, it is up to us how to use it,” she added.

“By and large, tech has done more good than harm,” she said, pointing to “tremendous” advances in healthcare in some rural areas of the developing world that have been made possible by technology.

And at a separate session on artificial intelligence, panellists appeared to accept the limit on privacy as part of modern life.

Rodney Brooks, chairman of Rethink Robotics, an American tech firm, took the example of Google Maps guessing — usually correctly — where you want to go.

“At first, I found that spooky and kind of scary. Then I realised, actually, it’s kind of useful,” he told the forum.

Anthony Goldbloom, a young tech entrepreneur, told the same panel that what he termed the “Google generation” placed far less weight on their privacy than previous generations.

“I trade my privacy for the convenience. Privacy is not something that worries me,” he said.

“Anyway, people often behave better when they have the sense that their actions are being watched.”

The World Economic Forum in the swanky Swiss ski resort of Davos brings together some 2,500 of the global business and political elite for a meeting that ends Saturday.

Agence France-Presse

Every Patient a Subject

Personalized medicine, the hoped-for use of the information in our genes to inform our medical care, may end up helping people live longer, healthier lives. Or it may not—the jury is still out. But one thing is certain: As our unique genomic data enter our medical records, researchers will be tempted to use that invaluable resource. The results may be good for science but bad for patients’ privacy.

In 2013, reporter Carole Cadwalladr, writing for the Guardian, described her encounter with the paradox of personalized medicine: Unlocking one’s genetic code may feel empowering, but the implications can be frightening. Cadwalladr agreed to let Illumina, a company that makes and uses gene-reading machines, sequence her DNA and use her genome in research in connection with an upcoming conference.

At a conference she attended, Illumina gave all the participants party favors: iPads with copies of their own genomes. Cadwalladr was unnerved to realize that her unique genetic code was now stored by Illumina in the Amazon Cloud and could, like all digital data, be potentially hacked and leaked. But, she reminded herself, she had been told the risks and benefits and had made an informed choice to volunteer.

This choice is denied to many subjects of genomic research—a group that may one day soon include almost all of us. Cadwalladr told us she was “surprised to learn” that current norms for medical research permit a scientist who gets a sample of blood, tissue, or saliva to sequence and use that genome without the donor’s specific consent, or even without her knowledge. The scientist then may share those genomic data with others, including a database maintained by the U.S. National Institutes of Health that’s used by researchers and companies worldwide. This can all happen without any notice to the people whose DNA was sequenced. (In fact, if the study is federally funded, in some cases the scientist must share the information.) These practices are currently acceptable, as long as the genome is viewed as “de-identified”—meaning it isn’t linked to obvious identifiers, such as names, addresses, or phone numbers, and it is not, in itself, considered identifiable.

That sounds reasonable, but “de-identification” is becoming only a reassuring myth. Subjects of genomic research should not confidently expect to remain anonymous. The possibility of “re-identifying” people from either their genomes or the health or demographic data connected with those genomes is real. The probability of re-identification is unclear but certainly growing, as the focus of genomic research shifts from the individual to the population, from small collections of DNA to vast electronic databases of genomic and health information.

Advances in data science and information technology are eroding old assumptions—and undermining researchers’ promises—about the anonymity of DNA specimens and genetic data. Databases of identified DNA sequences are proliferating in law enforcement, government, and commercial direct-to-consumer genetic testing enterprises, especially in genetic genealogy. That growth is increasing the likelihood that anyone with access to such nonanonymous “reference” databases could use them to re-identify the person who provided a “de-identified” gene sequence. People with access could include amateur genetic genealogists but also hackers.

Similarly, information about a person’s health conditions or demographic characteristics can be used for re-identification. How many 6-foot-2-inch-tall 62-year-old white men are there in a given state with white hair, an artificial left hip, type A positive blood, and a prescription for warfarin?
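The arithmetic behind that question can be made concrete. The following sketch (entirely synthetic data, with invented attribute distributions and prevalence figures, purely illustrative) shows how quickly a pool of one million “de-identified” records collapses as ordinary attributes are stacked up:

```python
# Illustrative only: a synthetic population of "de-identified" records,
# with made-up attribute distributions, showing how a few ordinary
# quasi-identifiers can narrow a million records down to a handful.
import random

random.seed(0)  # deterministic, so the run is reproducible

population = [
    {
        "sex": random.choice("MF"),
        "age": random.randint(18, 90),
        "height_in": random.randint(58, 78),
        "blood_type": random.choice(
            ["O+", "O-", "A+", "A-", "B+", "B-", "AB+", "AB-"]
        ),
        "hip_implant": random.random() < 0.02,  # assumed ~2% prevalence
        "on_warfarin": random.random() < 0.01,  # assumed ~1% prevalence
    }
    for _ in range(1_000_000)
]

# The profile from the question above: a 6'2" (74 in), 62-year-old man
# with type A positive blood, an artificial hip, and a warfarin prescription.
filters = [
    ("male", lambda r: r["sex"] == "M"),
    ("age 62", lambda r: r["age"] == 62),
    ("height 74 in", lambda r: r["height_in"] == 74),
    ("blood type A+", lambda r: r["blood_type"] == "A+"),
    ("artificial hip", lambda r: r["hip_implant"]),
    ("on warfarin", lambda r: r["on_warfarin"]),
]

pool = population
for name, keep in filters:
    pool = [r for r in pool if keep(r)]
    print(f"after '{name}': {len(pool):>7} candidates remain")
```

With these made-up distributions the pool typically shrinks from a million to zero or one by the last filter; real demographic data are less uniform than this, which tends to make rare combinations even more identifying.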

Newer re-identification risks will emerge as scientists learn to profile individuals using information encoded in the genome itself, such as ethnicity and eye color. Authors of a recent study published in PLOS Genetics described a method to use the genome and computerized rendering software to “computationally predict” 3-D models of individual “faces” of particular genomes; in a subsequent paper, the authors described how these techniques could be useful in criminal investigations.

Today medical ethicists, lawyers, and data scientists dispute whether de-identification remains a reliable means of privacy protection. One camp maintains that the risks of re-identification are overstated, creating a climate that impedes research unnecessarily; another group of experts, the “re-identification scientists,” counter by demonstrating repeatedly how they can re-identify supposedly anonymous subjects in genomic research databases.

Yet to date, this conversation has been largely academic. Gene-sequencing technology is only now maturing into clinical use, and the number of people whose genomes have been sequenced for research in the United States is relatively small compared with the total patient population. Though many of these research subjects contributed DNA before the advent of sequencing technology and are likely unaware that their genomes have been sequenced and shared, most did consent to participate in some form of medical research and provided DNA samples for this purpose. In theory, therefore, these subjects, like Carole Cadwalladr, all knew they were assuming new privacy risks by joining a study.

This is about to change, as gene sequencing moves from the research laboratory to the clinic—and we need to consider the consequences carefully. When the day arrives that each patient’s genome is sequenced routinely in the course of our medical care, all our genomic data will become part of, or linked to, our permanent, electronic medical records.

EMRs with gene-sequence information will be a treasure trove for genomic research on a population-wide scale, allowing researchers to forgo recruiting DNA donors in favor of obtaining genomic data directly from the EMR. The temptation to do good by doing research on this vast scale will be irresistible; the mushrooming literature on such genome-wide association studies shows that these very large studies may offer researchers enough statistical power to tease apart the complex interplay of genetic contributions to almost any health condition imaginable, from schizophrenia to diabetes.

Commonly accepted practices for records-based research, which don’t require patient consent, could eventually cause many of us to become the subjects of genomic research without our knowledge. As has already happened for many of the nearly 1 million subjects in the NIH genomic database, our genomes might then be distributed to researchers worldwide, and we’d never know. That people who volunteered for specific studies have their genomes distributed across the world without their knowledge is bad enough. That this might happen to people who have sought medical care but have not volunteered for research would be worse.

Patients today generally don’t know when their medical records have been disclosed for research, or to whom—making it difficult to object. In the not-so-distant future, when medical records include our unique genomes, this status quo will be ethically unacceptable. To date, regulators have interpreted federal health privacy law to permit providers to treat whole genome sequence data as “de-identified” information subject to no ethical oversight or security precautions, even when genomes are combined with health histories and demographic data. Either this interpretation or the law should be changed.

The same EMRs that will make this research possible could also be used to record patients’ choices whether to participate in research, but that is not generally happening. If the research community truly believes that science must conscript patient genomes for public benefit, it should make that case openly, explaining how notice and consent will impose undue burdens on crucial research. Otherwise, do the right thing: Ask patients first.

Jennifer Kulynych and Hank Greely,  Slate

CRG Presses US Senate for Privacy and Consent Protections in Newborn Screening

H.R. 1281 is the Newborn Screening Saves Lives Reauthorization Act of 2014. It seeks to reauthorize and amend a number of state grant programs and federal requirements for newborn screening. The bill passed the US House by a voice vote on June 24, 2014.

Every US state conducts a newborn screening test on every baby born within its borders. State law generally requires that a nurse take a few drops of blood from the heel of each newborn and submit it to a state laboratory. There, researchers test the blood for 50 or so medical conditions that could be dangerous, generally metabolic, genetic or endocrine conditions. Newborn screening has proven a very effective and successful health program.

Unfortunately, once the initial testing is completed, the samples are not destroyed. Rather than discarding them, many state departments of health store the bloodspots for a period of time, sometimes indefinitely, using them for quality control and sometimes for research. Usually parents have no idea this is occurring.

In 2007, President Bush signed the Newborn Screening Saves Lives Act, which provides grants to state entities administering newborn screening programs. The intent of the law is to unify and nationalize the data collection resulting from state screening programs by standardizing data collection and reporting.

Notably, this law does not specify the protocol for sharing newborn screening data between research projects, and does not address issues that might arise if the newborn’s information is linked or linkable to the newborn screening sample. It furthermore does nothing to address parental consent with regard to data storage and sharing.

Newborn screening is one of the few types of genetic testing to which every American is exposed, yet few Americans are aware of the serious privacy and consent issues raised by the storage and use of samples. CRG is working hard with Members of the US Senate to amend the legislation so that the important benefits of newborn screening are supported while principles of privacy and parental consent are upheld.

A copy of the letter to the Senate is below.


Privacy Issues Stall Newborn Screening Bill

A bill that would support newborn screening nationwide has stalled in Congress because some Republican senators have privacy concerns about genetic research funded by the legislation.

The senators won’t comment individually, but the Senate Steering Committee has indicated it wants a provision added to the bill to require parental consent before genetic research and genomic sequencing could be done on a child’s newborn screening sample.

Nearly every baby in the country is tested for genetic disorders shortly after birth. Blood is collected on a card that is sent to state public health labs for testing, in order to identify conditions that are often easily treatable. The cards are often later used anonymously for research. The senators holding up the bill believe that a child could be identified from such research.

Newborn-screening advocates have said they are willing to discuss ways to more clearly define and limit research but are worried that the focus on the research — just one part of the bill — will derail it from becoming law before Congress ends its lame-duck session this month.

“We certainly hope we don’t have to start this process over from scratch in 2015 — this issue is far too important for infants and families,” said Cynthia Pellegrini, senior vice president at the March of Dimes, one of several advocacy groups supporting the Newborn Screening Saves Lives Reauthorization Act.

The bill, which involves $19.9 million in spending, would reauthorize a 2008 measure that funds many programs supporting the country’s state-run newborn screening systems. Newborn screening will still continue in all states even if the bill does not pass — as each state operates its own program — but the improvements the bill contains would not take effect.

This year both the House and Senate added timeliness measures to the bill after a Milwaukee Journal Sentinel investigation in November 2013 found that thousands of hospitals were sending babies’ blood samples late to state labs.

Those changes will require that experts systematically track and improve the timeliness of newborn screening programs nationwide. The Journal Sentinel found that newborn screening varies widely in quality depending on the state or hospital where a child is born. Other amendments made to the bill include:

∎ The U.S. Centers for Disease Control and Prevention would be directed to evaluate laboratory quality and surveillance activities, with a focus on timeliness, so state labs can collect and share standardized data.

∎ The Government Accountability Office, the investigative arm of Congress, would be required to prepare a report within two years that examines the timeliness of newborn screening throughout the country, while also summarizing guidelines, recommendations and best practices to support a timely newborn screening system.

∎ A committee of experts that advises the U.S. Health and Human Services secretary would be directed to provide recommendations on improving timeliness in newborn screening programs.

If the bill does not pass in December, funding for that committee would run out in the spring. A new sponsor in the Senate would also be needed, as U.S. Sen. Kay Hagan, D-N.C., was defeated in the November election.

Ellen Gabler, Milwaukee Journal Sentinel