Protecting Personal Data

Patenting technology and research is common practice in most scientific fields, but what happens when biotechnology companies start patenting products of nature? Dan Munro addresses the upcoming Supreme Court hearing in his Forbes article “Data War Reaches Supreme Court.” Human genes are being patented at an increasing rate, restricting research into disease cures and personal health technologies.

When a company owns patents on certain human genes, any other research group wanting to use those genes to develop medical treatment technologies must pay royalties to gain access. This biases research findings and prevents certain types of research from taking place, largely to protect profits. “Last year drugs worth about $35 billion in annual sales lost their patent status. 2015 looks to be similar for drugs totaling about $33 billion in annual sales,” reported Munro.

The article identifies four ways this debate over data ownership relates to the wider scope of healthcare reform:

1) Healthcare Costs (where the U.S. surpasses all other industrialized countries by a wide margin)

2) Trust and Patient Engagement (how to get patients more engaged with their health)

3) Quantified Self (tracking all of our data in order to manage our health more effectively)

4) Personalized Medicine (therapies customized to our individual genetic composition)

When we think about uses of data, we often think about statistics. The idea that a company could patent and restrict access to information about our bodies, and to data produced by our bodies, is a frightening concept. The question of who has the rights to our genetic material and personal data is being considered in Association for Molecular Pathology, et al. v. Myriad Genetics, et al.

The Petabyte Age Deconstructs the Scientific Method

The scientific method itself was called into question by Peter Norvig, Google’s research director, at the O’Reilly Emerging Technology Conference in March 2008, when he offered an update to George Box’s maxim: “All models are wrong, and increasingly you can succeed without them.” Chris Anderson of Wired reported on this potential shift in the scientific method in his article “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.”

Anderson identifies Google’s success during “the Petabyte Age” as an indicator of this shift. The availability of massive amounts of data that can be synthesized into meaningful statistics could well change the future of research. “It forces us to view data mathematically first and establish context later,” he wrote.

The idea that you need a model of how things happen before you can connect data points into meaningful correlations may be on its way out. With access to enough data, the statistics themselves become significant. “Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity,” wrote Anderson.

This use of data without context has huge implications for research. If the statistics alone can show that something is happening before a study is fully designed, finding support to investigate how and why it happens could be considerably easier.
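As an illustration of this correlation-first approach, here is a minimal sketch that scans a dataset for strong pairwise correlations without any prior model of mechanism. The data, column names, and values are hypothetical, invented purely for demonstration.

```python
# A minimal sketch of "correlation-first" analysis: surface what is
# happening in the data before asking why. All data here is hypothetical.
import numpy as np
import pandas as pd

# Hypothetical behavioral data: one row per user, columns are tracked metrics.
df = pd.DataFrame({
    "pages_viewed":  [12, 4, 30, 7, 22, 15, 3, 28],
    "time_on_site":  [34, 10, 95, 18, 70, 44, 8, 88],  # minutes
    "purchases":     [2, 0, 5, 1, 4, 2, 0, 5],
    "support_calls": [0, 3, 1, 2, 0, 1, 4, 0],
})

# Compute every pairwise correlation; no hypothesis about mechanism is required.
corr = df.corr()

# Keep each pair once (upper triangle, diagonal excluded), rank by strength.
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
ranked = corr.where(mask).stack().sort_values(key=abs, ascending=False)

# The strongest correlations surface first ("this is what is happening"),
# leaving the question of how and why for later investigation.
print(ranked.head())
```

At petabyte scale the machinery differs (distributed storage, sampling, streaming), but the posture Anderson describes is the same: let the correlations surface first and investigate mechanisms afterward.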

A program called Cluster Exploratory has been developed to provide funding for research designed to run on a large-scale computing platform. It could be the first of many programs funding research based on findings derived from this kind of data, and it could lead to substantial scientific discoveries. Anderson wrote, “Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”

Data-Driven Media

The uses of Big Data are expanding beyond the technological and business worlds into the realm of entertainment. The New York Times addressed this expansion in an article by David Carr, “Giving Viewers What They Want,” which examined the growing uses of Big Data within the media industry. There is debate about whether data from Netflix users can reliably predict the success or failure of a new program, but “House of Cards” appears to be Netflix’s success story.

Netflix recently used Big Data to analyze information gleaned from its 33 million subscribers worldwide to develop the concept for its new original program “House of Cards.” Based on the data, it combined well-reviewed actors, themes, and directors to create a show its viewers would love. “Film and television producers have always used data…, but as a technology company that distributes and now produces content, Netflix has mind-boggling access to consumer sentiment in real time,” Carr writes.
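To make the idea concrete, here is a toy sketch of how aggregated viewing signals might be combined to score a proposed concept. The signal names, values, and scoring function are hypothetical; Carr’s article does not describe Netflix’s actual model.

```python
# A toy sketch of scoring a show concept from aggregated viewing signals.
# All names, values, and the scoring function itself are hypothetical.

# Hypothetical preference signals mined from viewing histories, e.g. the
# share of viewers who finished titles featuring each element.
completion_rate = {
    "director:David Fincher": 0.81,
    "actor:Kevin Spacey": 0.74,
    "genre:political drama": 0.68,
    "genre:sitcom": 0.52,
}

def score_concept(elements, signals, default=0.5):
    """Average the audience signal for each element of a proposed show."""
    return sum(signals.get(element, default) for element in elements) / len(elements)

concept = ["director:David Fincher", "actor:Kevin Spacey", "genre:political drama"]
print(f"Concept score: {score_concept(concept, completion_rate):.2f}")  # 0.74
```

A real system would presumably weigh far more signals, but the design point stands: because Netflix distributes the content itself, it can measure these signals directly rather than guess at them.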

Despite the apparent success, some, including John Landgraf, president of FX, are skeptical that data can predict the response to innovative programs. Landgraf is quoted: “Data can only tell you what people have liked before.” Alternatively, Rick Smolan, author of “The Human Face of Big Data,” was quoted as saying that knowing what viewers are watching gives Netflix a competitive edge when developing programming.

If Big Data becomes the standard for developing television concepts, we may see a rise in consumer-driven decisions across other media, from design and content to writer and medium. This is already happening on a smaller scale, but as the data grows, so does the potential for audience input and innovation.