Protecting Personal Data

Patenting technology and research is common practice in most scientific fields, but what happens when biotechnology companies start patenting products of nature? Dan Munro addresses the upcoming Supreme Court hearing in his Forbes article “Data War Reaches Supreme Court.” Human genes are being patented at an increasing rate, restricting research into curing diseases and developing personal health technologies.

When a company owns patents on certain human genes, any other research group wanting to use those genes to develop medical treatments must pay royalties in order to gain access to them. This creates a bias in research findings, preventing certain types of research from taking place, largely to protect profits. “Last year the drugs worth about $35 billion in annual sales lost their patent status. 2015 looks to be similar for drugs totaling about $33 billion in annual sales,” reported Munro.

The article identifies four ways this debate over data ownership relates to the wider scope of healthcare reform:

1) Healthcare Costs (where the U.S. surpasses all other industrialized countries by a wide margin)

2) Trust and Patient Engagement (how to get patients more engaged with their health)

3) Quantified Self (tracking all of our data in order to manage our health more effectively)

4) Personalized Medicine (therapies customized to our individual genetic composition)

When we think about uses for data, we often think about statistics. The idea that a company could patent and restrict access to information about our bodies, and to data produced by our bodies, is a frightening concept. The decision as to who holds the rights to our genetic material and personal data is being considered in Association for Molecular Pathology, et al. v. Myriad Genetics, et al.

Big Computing to Cure Disease

People will soon be able to donate their computers’ idle time to the advancement of science at no cost. In June, the nonprofit organization Quantum Cures will begin using the unused processing power of our idle devices to search for disease cures. Most people carry smartphones and tablets that represent great strides in the accessibility of machines capable of serious computation. But what is all of that computational capability actually accomplishing? The Ars Technica article “Crowdsourcing the Cloud to Find Cures for Rare and ‘Orphaned’ Diseases” describes one outlet for all of this potential. Where Big Data takes advantage of abundant storage space to amass vast amounts of data, Quantum Cures is exploring the computing side: a cloud computing initiative built on donated processing power.

Quantum Cures will use the same method pioneered at the University of California, Berkeley, whose SETI@home project uses “volunteer” computers to process data in the search for extraterrestrial life. Quantum Cures will use Inverse Design software developed by Duke University and Microsoft to process vast amounts of information and identify possible treatments for diseases that have fallen by the wayside.

To engineer a drug, researchers look at proteins related to a disease and search for molecules that can potentially interact with them, using a quantum mechanics / molecular mechanics modeling system. Lawrence Husick, co-founder of Quantum Cures, explained part of the process to Ars Technica. “Each instance of the software takes the quantum mechanical molecular model of the target protein and a candidate molecule and calculates the potential bonding energy between the two,” Sean Gallagher reported. This process is repeated for millions of molecules, of which only a few pass the tests.
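The screening loop described above amounts to a simple filter: score each candidate molecule against the target protein and keep only the few whose predicted bonding energy clears a threshold. The sketch below is illustrative only; the scoring function is a deterministic stand-in for the actual quantum-mechanical calculation, and every name and threshold here is hypothetical, not Quantum Cures’ real code.

```python
def score_candidate(candidate: str) -> float:
    """Stand-in for the real QM/MM bonding-energy model: it just
    maps the molecule's name to a deterministic fake energy value."""
    return (sum(ord(c) for c in candidate) % 1000) / 10.0  # pretend kcal/mol

def screen(candidates, threshold=5.0):
    """Keep only the few candidates whose predicted bonding energy
    falls below the (hypothetical) acceptance threshold."""
    hits = []
    for molecule in candidates:
        energy = score_candidate(molecule)
        if energy < threshold:
            hits.append((molecule, energy))
    return hits

# Screen a large library; only a small fraction survives.
library = [f"molecule-{i}" for i in range(100_000)]
hits = screen(library)
print(f"{len(hits)} of {len(library)} candidates passed the screen")
```

In the real project, each volunteer machine would run this inner loop on its own slice of the candidate library, which is what makes the problem so well suited to donated idle time.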

Quantum Cures has focused on diseases most pharmaceutical companies consider bad investments, including AIDS and malaria. The computing power and time involved in the process are immense, but when nonprofit organizations ask volunteers to donate their CPU time, it can all be accomplished for much less. “The software installs with user-level permissions and will allow individuals to set how much of their compute time is made available,” Husick told Ars Technica.
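Letting users cap how much compute time they donate is commonly implemented with duty-cycle throttling: stay busy for a fraction of each interval, then sleep for the rest. The sketch below shows that general idea, not Quantum Cures’ actual client; the function names and parameters are hypothetical.

```python
import time

def throttled_work(do_chunk, cpu_fraction=0.25, interval=1.0, intervals=None):
    """Generic duty-cycle throttling (not Quantum Cures' real client):
    stay busy for cpu_fraction of each interval, then sleep the
    remainder so the machine stays responsive. Runs forever unless
    a number of intervals is given."""
    done = 0
    while intervals is None or done < intervals:
        start = time.monotonic()
        budget = interval * cpu_fraction
        while time.monotonic() - start < budget:
            do_chunk()  # one small unit of donated work
        # Yield whatever is left of the interval back to the user.
        time.sleep(max(0.0, interval - (time.monotonic() - start)))
        done += 1

# Example: donate half of each 0.1-second interval to a trivial task.
count = 0
def work():
    global count
    count += 1

throttled_work(work, cpu_fraction=0.5, interval=0.1, intervals=5)
print(f"completed {count} work units")
```

A real volunteer-computing client would also pause when the user is active or on battery, but the core trade-off is the same: the `cpu_fraction` knob is what “set how much of their compute time is made available” boils down to.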

The Petabyte Age Deconstructs the Scientific Method

The scientific method was recently called out by Peter Norvig, Google’s research director, at the O’Reilly Emerging Technology Conference in March 2008, when he offered an update to George Box’s maxim: “All models are wrong, and increasingly you can succeed without them.” Chris Anderson of Wired reported on the potential shift in the scientific method in an article, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.”

Anderson identifies Google’s success during “The Petabyte Age” as an indicator of this shift. The availability of massive amounts of data that can be synthesized into meaningful statistics could very well change the future of research. “It forces us to view data mathematically first and establish context later,” he wrote.

The idea that you need a model of how things happen before you can connect data to a correlation of events might be on its way out. With access to enough data, the statistics themselves become significant. “Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity,” wrote Anderson.
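Anderson’s “data first, context later” stance amounts to computing relationships straight from the raw numbers, with no model of why they move together. A minimal sketch, using made-up data and a plain Pearson correlation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly from the
    data -- no model of the underlying mechanism required."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: weekly search queries vs. store visits.
queries = [120, 150, 170, 200, 230, 260]
visits  = [ 30,  36,  41,  50,  55,  64]
r = pearson(queries, visits)
print(f"r = {r:.3f}")  # strong correlation, cause unknown
```

The number says only that the two series move together; in Anderson’s framing, that alone is actionable, and explaining why can come later, if at all.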

This use of data without context has huge implications for research. If the statistics alone can show that something is happening before the research is fully conducted, then getting people on board to find out how and why, and finding support to prove the underlying mechanics, could be considerably easier.

A program called Cluster Exploratory has been developed to provide funding for research designed to run on a large-scale computing platform. It could be the first of many programs funding research into findings derived from this data, leading to substantial scientific discoveries. Anderson wrote, “Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”