Is Artificial Intelligence Dangerous?

If you’ve followed us for some time, you’re well aware that we like to discuss the future of technology, Artificial Intelligence, data, and analytics. And no question comes up more often in conversations about AI than this one: is artificial intelligence dangerous?

Many thought leaders, like Stephen Hawking, think AI will be the death of us all. But we don’t quite agree with that doomsday attitude. We think that, much like anything else in this world, AI’s inherent danger or safety lies in how we use it. So, really, it’s up to us.

We enjoyed this article on Futurism, titled, “Artificial Intelligence is Only Dangerous if Humans use it Foolishly”. We’ve got a couple of favorite parts of the article that we’ll share here, but you should definitely go read it for yourself and join us in discussing this important question.

There are a lot of concerns over the safety of AI (in most science fiction movies the AI outsmarts us too quickly), and there are very real concerns that AI will replace a large number of jobs (47 percent, according to this study).

There may be a big push to use AI to replace everything and everyone, but as Dom Galeon says, “Moreover, there’s the danger of looking at AI as the magical solution to everything, neglecting the fact that AI and machine learning algorithms are only as good as the data put into them.”

Artificial Intelligence is only as dangerous as we make it.

It’s not the AI we should fear. It’s the way humans will use that AI. And while we can sit and imagine numerous doomsday scenarios, the plain fact is that AI will likely replace many of the utilities and jobs we rely on now. Hopefully, that shift will improve the human condition. But AI can only do what we allow it to do. What algorithms are we using? What data are we feeding it? We should keep those questions in mind so that the rush to adopt AI doesn’t get ahead of us.

As Galeon says, “Ultimately, the greatest threat to humanity isn’t AI. It’s how we handle AI.”

What are your thoughts on the future of AI? Should we fear it? Share your ideas in the comments.

Getting Out of the Rut

Scholastic Assessment Test, or SAT, scores were a big thing in high school. Everyone wanted to know everyone else’s score so they could numerically compare another person’s intelligence to their own. And sure, the SAT accomplishes what you would expect a standardized test to accomplish, but Vanderbilt professors Harrison J. Kell, David Lubinski, Camilla P. Benbow, and James H. Steiger may have more to say on the subject. The Business Insider article “Kids Who Do Well On This Spatial Intelligence Test May Have Potential That’s Going Completely Unrealized” addresses their findings. As Max Nisen explains, “when you add a test for spatial reasoning ability to the mix, you get an even better predictor of someone’s future accomplishments, creativity, and innovative potential.” Spatial reasoning describes an individual’s ability to visualize and manipulate an object in their head. The thought processes that go into solving a multiple-choice problem on a standardized test can only reveal so much about the person answering it. Spatial reasoning involves mathematical and creative concepts, as well as a bit of imagination, and it goes largely unmeasured by most standardized tests.

And if you think about it, this should not come as a surprise to anybody. You can drill mathematical concepts all day, but they won’t come into play unless you face a situation where you need math to solve the problem. Math requires its own brand of creativity, but spatial reasoning can take someone to a place a math equation could never have brought them. The ability to visualize and manipulate objects can give someone an edge in understanding problems and pull them out of the rut of routine standardized test questions. Using tools that measure spatial reasoning, such as the Differential Aptitude Test, could help educators recognize students’ abilities and create a more meaningful education for them.
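
To make the “better predictor” claim concrete, here is a minimal sketch on purely synthetic data (every number below is invented for illustration, not taken from the Vanderbilt study): fit one regression on a standardized-test score alone and another that also includes a spatial-reasoning score, then compare how much variance in a made-up “accomplishment” measure each explains.

```python
# Hypothetical illustration: does adding a spatial-reasoning score improve prediction?
# All data here is synthetic; the relationship is built in on purpose.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
sat = rng.normal(0, 1, n)          # standardized test score (z-scored)
spatial = rng.normal(0, 1, n)      # spatial-reasoning score (z-scored)

# Pretend "creative accomplishment" depends on both abilities, plus noise.
outcome = 0.5 * sat + 0.4 * spatial + rng.normal(0, 1, n)

model_sat = LinearRegression().fit(sat.reshape(-1, 1), outcome)
model_both = LinearRegression().fit(np.column_stack([sat, spatial]), outcome)

print("R^2, SAT only:     ", round(model_sat.score(sat.reshape(-1, 1), outcome), 3))
print("R^2, SAT + spatial:", round(model_both.score(np.column_stack([sat, spatial]), outcome), 3))
# The second model explains more variance because the outcome genuinely
# depends on spatial ability that the first model never sees.
```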

The Unifying Theory of Intelligence (?)

If someone were to tell you that there was a single equation that could accurately describe the incredible breadth and diversity of intelligent behavior, you’d probably look at them and scoff. How could any single equation possibly capture everything from choosing what to wear in the morning to deciding what move to make in a game of chess? This equation may have a while to go before it can definitively address everything, but the mathematical relationship developed by Alexander Wissner-Gross of Harvard University and Cameron Freer of the University of Hawaii may begin to address many intelligent behaviors.

In Inside Science’s “Physicist Proposes New Way To Think about Intelligence,” author Chris Gorski explains that the main principle of this theory draws on the postulation that “intelligent behavior stems from the impulse to seize control of future events in the environment” (insidescience.org). The math behind the theory comes from an unusual yet familiar source: entropy. A core concept of physics, entropy describes chaos and disorder in a given system. It wouldn’t be wrong to say that this theory effectively uses thermodynamics as a model for intelligence. The researchers have implemented the math in a software engine cleverly named Entropica, which they use to model simple environments and test the theory. Inside Science’s article describes a test in which Entropica is given a simple environment with tools and possibilities. “It actually self-determines what its own objective is,” said Wissner-Gross. “This [artificial intelligence] does not require the explicit specification of a goal, unlike essentially any other [artificial intelligence].”
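
The article doesn’t publish Entropica’s code, so the following is only our own toy sketch of the underlying idea – roughly, “prefer the action that keeps the most futures open.” The grid world, horizon, and sampling scheme below are assumptions made for illustration, not the researchers’ actual model.

```python
# Toy sketch of entropy-seeking behavior: prefer the move that leaves the most
# diverse set of reachable future states. A rough simplification of the idea
# behind Entropica, not the researchers' actual implementation.
import random
from collections import Counter
from math import log

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]
GRID = 5  # 5x5 grid with walls at the edges

def step(state, move):
    x, y = state
    nx, ny = x + move[0], y + move[1]
    if 0 <= nx < GRID and 0 <= ny < GRID:
        return (nx, ny)
    return state  # bump into a wall, stay put

def future_entropy(state, horizon=4, samples=200):
    """Entropy of the end-state distribution after `horizon` random moves."""
    ends = Counter()
    for _ in range(samples):
        s = state
        for _ in range(horizon):
            s = step(s, random.choice(MOVES))
        ends[s] += 1
    total = sum(ends.values())
    return -sum((c / total) * log(c / total) for c in ends.values())

def pick_move(state):
    # Choose the move whose successor keeps the most future options open.
    return max(MOVES, key=lambda m: future_entropy(step(state, m)))

# Starting in a corner, the entropy-seeking agent drifts toward the open
# center of the grid, where more futures remain reachable.
state = (0, 0)
for _ in range(6):
    state = step(state, pick_move(state))
    print(state)
```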

So what does this mean for society? As you might imagine, a unifying theory for just about anything is going to have groundbreaking implications. Economics, robotics, and the social sciences, to name a few, are all fields that could be affected by this research. A model that could accurately predict how consumers will respond to a price change, for example, would be enormously beneficial for businesses. And the possibilities this stirs within artificial intelligence circles will no doubt have people working the theory into the next generation of AI. Perhaps one day we will be able to model how we came to this point, and where our intelligence will take us in the future.

Protecting Personal Data

Patenting technology and research is common practice in most science fields, but what happens when biotechnology companies start patenting products of nature? Dan Munro addresses the upcoming Supreme Court hearing in his article “Data War Reaches Supreme Court” for Forbes. Human genes are becoming subject to patents at an increasing rate, restricting the research done to cure diseases and develop personal health technologies.

When a company owns patents on certain human genes, any other research group wanting to use those genes to develop medical treatment technologies must pay royalties in order to gain access to them. This creates a bias in research findings and prevents certain types of research from taking place, largely to protect profits. “Last year the drugs worth about $35 billion in annual sales lost their patent status. 2015 looks to be similar for drugs totaling about $33 billion in annual sales,” reported Munro.

The article identifies four ways this debate over data ownership relates to the wider scope of healthcare reform:

1) Healthcare costs (where the U.S. easily surpasses all other industrialized countries – by a wide margin)

2) Trust and Patient Engagement (how to get patients more engaged with their health)

3) Quantified Self (tracking all of our data in order to manage our health more effectively)

4) Personalized Medicine (therapies customized to our individual genetic composition)

When we think about uses of data, we often think about statistics. The idea that a company could patent and restrict access to information about our bodies, and to data produced by our bodies, is a frightening concept. The question of who has the rights to our genetic material and personal data is being considered in Association for Molecular Pathology, et al. v. Myriad Genetics, et al.

Big Computing to Cure Disease

People can soon donate their computers’ idle time to the advancement of science at no cost. In June, the nonprofit organization Quantum Cures will begin utilizing the unused processing power of our idle devices to help cure diseases. Most people carry around smartphones and tablets that represent great strides in the accessibility of machines capable of serious computation. But what is all of that computational capability really accomplishing? The Ars Technica article “Crowdsourcing the Cloud to Find Cures for Rare and ‘Orphaned’ Diseases” addresses one outlet for all of this potential. Where Big Data takes advantage of the fact that we have enough storage space to hold vast amounts of data, Quantum Cures is exploring a cloud computing initiative built on spare processing power.

Quantum Cures will use the same method pioneered by the University of California, Berkeley’s SETI@home project, which utilizes “volunteer” computers to process information in the search for extraterrestrial life. Quantum Cures will run Inverse Design software developed by Duke University and Microsoft to help process vast amounts of information and identify possible treatments for diseases that have fallen by the wayside.

To engineer a drug, the researchers look at proteins related to a disease and search for molecules that can potentially interact with them, using a quantum mechanics / molecular mechanics modeling system. Lawrence Husick, co-founder of Quantum Cures, explained part of the process to Ars Technica. “Each instance of the software takes the quantum mechanical molecular model of the target protein and a candidate molecule and calculates the potential bonding energy between the two,” Sean Gallagher reported. The process is repeated for millions of molecules, of which only a few pass the tests.
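
Ars Technica doesn’t include the client code, but the loop it describes – score one candidate molecule against the target protein, keep only the strong binders, repeat for millions of candidates – can be sketched roughly as follows. The function names and energy cutoff are hypothetical, and the energy calculation is a stand-in for the real QM/MM computation.

```python
# Rough sketch of a screening loop in the spirit of the process described in
# the article. `estimate_binding_energy` stands in for the real quantum
# mechanics / molecular mechanics calculation, which is the expensive part
# that volunteer machines would actually run.
import random

ENERGY_CUTOFF = -8.0  # hypothetical threshold; only strong binders pass

def estimate_binding_energy(target_protein: str, molecule: str) -> float:
    # Placeholder: a real client would run a QM/MM model here.
    random.seed(hash((target_protein, molecule)))
    return random.uniform(-12.0, 0.0)

def screen(target_protein: str, candidates: list) -> list:
    """Return the few candidates whose estimated binding energy passes the cutoff."""
    hits = []
    for molecule in candidates:
        energy = estimate_binding_energy(target_protein, molecule)
        if energy <= ENERGY_CUTOFF:   # lower (more negative) = tighter binding
            hits.append((molecule, energy))
    return hits

# In the real system, millions of candidate molecules would be split into
# work units and farmed out to volunteers' idle CPUs.
candidates = [f"molecule_{i}" for i in range(10_000)]
print(len(screen("target_protein_X", candidates)), "candidates passed the cutoff")
```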

Quantum Cures has focused on diseases most pharmaceutical companies consider to be bad investments, including AIDS and malaria. The computing power and time involved in the process are immense, but when a nonprofit asks volunteers to donate their CPU time, it can all be accomplished for much less. “The software installs with user-level permissions and will allow individuals to set how much of their compute time is made available,” Husick told Ars Technica.

The Petabyte Age Deconstructs the Scientific Method

The scientific method was called out by Peter Norvig, Google’s research director, at the O’Reilly Emerging Technology Conference in March 2008, when he offered an update to George Box’s maxim: “All models are wrong, and increasingly you can succeed without them.” Chris Anderson of Wired reported on the potential shift in the scientific method in an article, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.”

Anderson identifies Google’s success during “The Petabyte Age” as an indicator of this shift. The availability of massive amounts of data that can be synthesized into meaningful statistics could very well change the future of research. “It forces us to view data mathematically first and establish context later,” he wrote.

The idea that you need a model of how things happen before you can connect data points into a meaningful correlation might be on the way out. With access to enough data, the statistics themselves are significant. “Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity,” wrote Anderson.
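
As a minimal sketch of the “data mathematically first, context later” approach, assume a table of measurements with no theory connecting them: simply scan for strong pairwise correlations and report them. The column names and data below are invented, and at petabyte scale the same idea would run on distributed infrastructure rather than a single DataFrame.

```python
# Minimal sketch of "view the data mathematically first": look for strong
# pairwise correlations across many columns with no model of why they exist.
# The data and column names are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "searches_flu": rng.normal(size=n),
    "pharmacy_visits": rng.normal(size=n),
    "temperature": rng.normal(size=n),
    "ad_clicks": rng.normal(size=n),
})
# Bake one relationship in so something turns up.
df["pharmacy_visits"] += 0.8 * df["searches_flu"]

corr = df.corr()
threshold = 0.5
pairs = [
    (a, b, round(corr.loc[a, b], 2))
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) >= threshold
]
# We can report *that* these variables move together long before anyone
# explains *why* they do.
print(pairs)
```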

This use of data without context has huge implications for research. If you can use the resulting statistics to say “this is what is happening” before a study is fully conducted, getting people on board to find out how and why might be easier; demonstrating the correlation up front could make it considerably easier to find support for research into the underlying mechanics.

A program called Cluster Exploratory has been developed to provide funding for research designed to run on a large-scale computing platform. It could be the first of many funding programs for research into findings derived from this kind of data, and it could lead to substantial scientific results. Anderson wrote, “Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”

Gray Box Model

Cloud computing has the advantage of being much more flexible than comparable hardware-based services. However, cloud services tend to fall behind when it comes to database-intensive applications due to limitations in hard drive speeds. Updating data on a hard drive is the limiting factor for most computers nowadays, as the process is bounded by the speed of the mechanical arm that writes the information to the disk.

MIT’s news article “Making Cloud Computing More Efficient,” featuring researcher Barzan Mozafari, explains that “updating data stored on a hard drive is time-consuming, so most database servers will try to postpone that operation as long as they can, instead storing data modifications in the much faster – but volatile – main memory.”
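
Here is a deliberately simplified illustration of the buffering behavior described above – modifications accumulate in fast, volatile memory and reach the slow disk in batches – assuming an in-memory stand-in for the disk; it is not MIT’s code or any real database engine.

```python
# Simplified illustration of deferring disk writes: keep recent modifications
# in memory (fast but volatile) and flush them to disk in batches (slow but
# durable). Real database engines add logging and crash recovery on top.
class WriteBufferingStore:
    def __init__(self, flush_threshold=100):
        self.memory_buffer = {}   # volatile main memory
        self.disk = {}            # stands in for the hard drive
        self.flush_threshold = flush_threshold

    def put(self, key, value):
        # Writes land in memory immediately; the disk is touched only on flush.
        self.memory_buffer[key] = value
        if len(self.memory_buffer) >= self.flush_threshold:
            self.flush()

    def get(self, key):
        # Check the freshest copy first.
        return self.memory_buffer.get(key, self.disk.get(key))

    def flush(self):
        # One batched write instead of many small random ones.
        self.disk.update(self.memory_buffer)
        self.memory_buffer.clear()

store = WriteBufferingStore(flush_threshold=3)
for i in range(5):
    store.put(f"key{i}", f"value{i}")
print(store.get("key4"), len(store.disk))  # key4 still only in memory; 3 keys flushed
```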

At the SIGMOD conference, MIT researchers will present the algorithms behind a new system called DBSeer, which uses a “gray box” model to help solve this problem. DBSeer applies machine-learning techniques to predict the resource usage and needs of individual database-driven application servers. Cloud computing servers are often divided into multiple “virtual machines”: partitions of a server that are each allocated a set amount of processing power, memory, and so on. DBSeer aims to learn each database’s unique needs and idiosyncrasies so it can predict whether it is viable to allocate additional resources from other partitions to a given task. If a virtual machine is sitting idle, DBSeer can assess whether it is prudent for that machine to keep sitting there or to spend its allocated resources completing a task on another partition.
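
The article doesn’t spell out DBSeer’s algorithms, so the sketch below only illustrates the gray-box idea in miniature: learn from measurements how a workload’s resource usage scales with its transaction mix, then predict the demand of a proposed workload before shifting resources. The features, coefficients, and data are all hypothetical.

```python
# Hypothetical sketch of a gray-box approach: learn, from measurements, how a
# database workload's resource usage scales with its transaction mix, then
# use the model to predict demand for a proposed workload level.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 500
reads_per_sec = rng.uniform(100, 2000, n)
writes_per_sec = rng.uniform(10, 400, n)

# Pretend we measured CPU% and disk I/O on a test VM (synthetic relationship).
cpu_percent = 0.02 * reads_per_sec + 0.10 * writes_per_sec + rng.normal(0, 2, n)
disk_iops = 0.05 * reads_per_sec + 0.90 * writes_per_sec + rng.normal(0, 5, n)

X = np.column_stack([reads_per_sec, writes_per_sec])
cpu_model = LinearRegression().fit(X, cpu_percent)
io_model = LinearRegression().fit(X, disk_iops)

# Predict what a heavier workload would need before actually moving it.
proposed = np.array([[1500.0, 300.0]])
print("predicted CPU %:", round(cpu_model.predict(proposed)[0], 1))
print("predicted disk IOPS:", round(io_model.predict(proposed)[0], 1))
# A scheduler could compare these predictions with each VM's allocation to
# decide whether spare capacity can safely be shifted to another partition.
```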

Ultimately, this will allow servers to be much more efficient without further investment in hardware. This trend, which follows Big Data more broadly, is getting computer scientists to question whether there are more efficient ways to handle our problems with the hardware we already have. It is all about maximizing productivity by questioning our own methods, rather than simply investing in more hardware.

Data Driven Media

The uses of Big Data are expanding beyond the technological and business worlds into the realm of entertainment. On this expansion, The New York Times ran an article by David Carr, “Giving Viewers What They Want,” which addressed the growing uses of Big Data within the media industry. There is debate about whether data from Netflix users is a reliable indicator of a new program’s success or failure, but “House of Cards” certainly looks like a Netflix success story.

Netflix recently used Big Data to analyze information gleaned from their 33 million subscribers worldwide to develop a concept for their new original program “House of Cards.” Based on the data, they combined well-reviewed actors, themes and directors to create a show their viewers would love. “Film and television producers have always used data…, but as a technology company that distributes and now produces content, Netflix has mind-boggling access to consumer sentiment in real time,” Carr writes.
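
Carr’s article doesn’t describe Netflix’s actual models, so the following is only a hypothetical illustration of the general approach: aggregate how past titles sharing a director, lead actor, or theme performed with subscribers, then score a proposed combination. Every attribute name and number here is made up.

```python
# Hypothetical illustration of scoring a show concept from viewing history:
# average how past titles sharing an attribute performed, then combine those
# averages for a proposed concept. Not Netflix's actual method; data is invented.
from collections import defaultdict
from statistics import mean

# (attributes of past titles, average completion rate among subscribers)
history = [
    ({"director:fincher", "genre:political-thriller", "actor:spacey"}, 0.82),
    ({"director:fincher", "genre:crime"}, 0.78),
    ({"genre:political-thriller"}, 0.71),
    ({"actor:spacey", "genre:drama"}, 0.69),
    ({"genre:sitcom"}, 0.44),
]

attribute_scores = defaultdict(list)
for attributes, completion in history:
    for attribute in attributes:
        attribute_scores[attribute].append(completion)

def score_concept(attributes):
    """Average historical performance of each attribute we have data for."""
    known = [mean(attribute_scores[a]) for a in attributes if a in attribute_scores]
    return mean(known) if known else None

concept = {"director:fincher", "genre:political-thriller", "actor:spacey"}
print(round(score_concept(concept), 3))
```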

Despite the apparent success, some – including John Landgraf, president of FX – are skeptical that data can predict the response to innovative programs. Landgraf is quoted: “Data can only tell you what people have liked before.” Alternatively, Rick Smolan, author of “The Human Face of Big Data,” was quoted as saying that knowing what viewers are watching gives Netflix a competitive edge when developing programming.

If Big Data becomes the standard approach to developing television concepts, we may see a rise in consumer-driven decisions across other media, shaping everything from design to content, writer, or format. This is already happening on a smaller scale, but as the data grows, the possibilities for input and innovation are limitless.

Big Data and Weather Forecasting

In his article “How Big Data Can Boost Weather Forecasting” for Wired, author Steve Hamm discusses how Big Data analysis techniques are being used to predict weather patterns more accurately. As his first example, Hamm describes how the Korea Meteorological Administration (KMA) is working to upgrade its predictive systems in order to better prepare the Korean peninsula for storms that “[carry] dense clouds of yellow dust from China’s Gobi Desert that are sometimes loaded with heavy metals and carcinogens.” As part of the upgrade, the KMA is dramatically increasing the agency’s storage capacity in hopes of forecasting weather patterns more accurately through an increased ability to quickly analyze large amounts of information.

Such efforts to increase predictive capabilities are also being made in other parts of the world. Following the destruction caused by Hurricane Sandy, “leaders of the city of Hoboken…are considering building a wall around the city to keep the tidal Hudson River at bay”, but such efforts will be in vain if scientists are unable to predict how the changing climate will affect the river’s future depth and behavior. Due to the scale of the issue, IBM is assisting in researching more accurate predictive methods through a project called Deep Thunder, a “long-term weather analysis project”.

Deep Thunder has already been used to accurately predict “the snowfall totals in New York City during the mammoth snowstorm” in February, including “when the snowfall would start and stop.” IBM is now working to implement Deep Thunder in Rio de Janeiro for the 2016 Olympics and to provide attendees access to the predictive information through “iPad and cloud applications.” The accuracy and speed of Deep Thunder have great implications for the future of climate prediction; if the planet’s weather can be consistently predicted, the damage and injuries caused by catastrophic weather could be greatly mitigated during future events.

Crowdsourcing Killer Outbreaks

Nanowerk’s article “Crowdsourcing killer outbreaks” presents an idea that is both a forward-thinking, efficient technological step for science and a world-shaking challenge to the traditional concept of priority in the sciences. The article follows two dangerous pathogens, one that threatens human lives and another that is scarring ecosystems. In both cases speed is of the essence, as the danger only increases the longer action is delayed. The solution comes not from individual labs competing to reach a cure before the others, but from collaboration on a global scale.

The “crowdsourcing” method, which is simply technologically mediated mass collaboration, allowed different genomics labs to share their findings with each other instantly and reach a solution much faster than they could have under the competitive model. By combining resources and exchanging information, the project effectively became one international lab working together.

However, this does present a problem on the legal and financial levels. Universities and research labs depend on patents and priority (the concept that whoever discovers something first gets the credit) in order to fund themselves. With the research being shared through the creative commons, the determination of who gets priority is left very much up in the air.

I feel that there is much to be gained from the crowdsourcing of science. Even though some legal and economic details remain to be ironed out, the potential benefits are too great to let traditional methods hold us back from making serious progress in situations of dire need.