Getting Out of the Rut

Scholastic Assessment Test, or SAT, scores were a big thing in high school.  Everyone wanted to know everyone else’s score so they could numerically compare someone else’s intelligence to their own.  And sure, the SAT accomplishes what you would expect a standardized test to accomplish, but Vanderbilt professors Harrison J. Kell, David Lubinski, Camilla P. Benbow, and James H. Steiger may have more to say on the subject.  The Business Insider article, “Kids Who Do Well On This Spatial Intelligence Test May Have Potential That’s Going Completely Unrealized,” addresses their findings. Max Nisen explains, “when you add a test for spatial reasoning ability to the mix, you get an even better predictor of someone’s future accomplishments, creativity, and innovative potential.” Spatial reasoning describes an individual’s ability to visualize and manipulate an object in their head.  The thought processes that go into solving a multiple-choice problem on a standardized test can only reveal so much about the person answering the question.  Spatial reasoning involves the use of mathematical and creative concepts, as well as a bit of imagination, and is largely unmeasured by most standardized tests.

And if you think about it, this should not come as a surprise to anybody.  You can drill mathematical concepts all day, but they won’t come into play unless you have a situation where you need math to solve the problem.  Math requires its own brand of creativity, but spatial reasoning can take someone to a place where a math equation alone could never have brought them.  Having the ability to visualize and manipulate objects can give someone the edge in understanding problems and take them out of the rut of the routine realm of standardized test questions.  Using tools that measure spatial reasoning, such as the Differential Aptitude Test, could help educators recognize students’ abilities and create a more meaningful education for them.

The Unifying Theory of Intelligence (?)

If someone were to tell you that there was a single equation that could accurately describe the incredible breadth and diversity of intelligent behavior, you’d probably look at them and scoff. How could any single equation possibly capture everything from the choice of what to wear in the morning to what move to make in a game of chess? Granted, this equation may have a while yet before it can definitively address everything, but the mathematical relationship developed by Alexander Wissner-Gross of Harvard University and Cameron Freer of the University of Hawaii may begin to address many intelligent behaviors.

In Inside Science’s “Physicist Proposes New Way To Think about Intelligence”, author Chris Gorski explains that the main principle of this theory draws on the postulation that “intelligent behavior stems from the impulse to seize control of future events in the environment” (insidescience.org). The math behind the theory stems from an unusual, yet familiar source: entropy. A core concept of physics, entropy is used to describe chaos and disorder in a given system. It wouldn’t be wrong to say that this theory effectively utilizes thermodynamics as a model for intelligence. The math is implemented in a software engine the researchers have cleverly named Entropica, which is used to model simple environments to test the theory. Inside Science’s article describes a test where Entropica is given a simple environment with tools and possibilities. “It actually self-determines what its own objective is,” said Wissner-Gross. “This [artificial intelligence] does not require the explicit specification of a goal, unlike essentially any other [artificial intelligence].”
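Entropica itself isn’t public, but the core idea can be sketched: an agent that, at every step, picks whichever action keeps the greatest number of future states reachable. The toy Python below is my own illustration rather than the researchers’ code, with a simple count of reachable positions standing in for the causal-entropy calculation; it shows an agent on a bounded number line drifting toward the middle, where its options stay open, without ever being handed that goal.

```python
def reachable_states(start, horizon, lo=0, hi=10):
    """Count distinct positions reachable from `start` within `horizon`
    moves on a bounded number line; a crude stand-in for the paper's
    notion of the entropy of an agent's possible futures."""
    frontier = {start}
    seen = {start}
    for _ in range(horizon):
        frontier = {min(hi, max(lo, s + d)) for s in frontier for d in (-1, 0, 1)}
        seen |= frontier
    return len(seen)

def entropic_move(position, horizon=4, lo=0, hi=10):
    """Pick the move that keeps the largest number of futures open."""
    return max((-1, 0, 1), key=lambda d: reachable_states(
        min(hi, max(lo, position + d)), horizon, lo, hi))

# Starting next to a wall, the agent wanders toward the middle of the
# line, where the most future positions remain reachable, an "objective"
# it was never explicitly given.
position = 1
for _ in range(6):
    position = min(10, max(0, position + entropic_move(position)))
print(position)  # settles near the center of [0, 10]
```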

So what does this mean for society? As you might imagine, a unifying theory for just about anything is going to have groundbreaking implications. Economics, robotics, and social sciences, to name a few, are all fields that could be impacted by this research. For example, a model that could accurately predict how consumers will respond to a change in price would be enormously beneficial for businesses. And the incredible possibilities that this stirs within artificial intelligence circles will no doubt have people implementing this theory in the next generation of AI. Perhaps one day we will be able to model how we came to this point, and where our intelligence will take us in the future.

Big Computing to Cure Disease

People will soon be able to donate their computers’ idle time to the advancement of science at no cost. In June, the nonprofit organization Quantum Cures will begin utilizing the unused processing power of our idle devices to help cure diseases. Most people carry around smartphones and tablets that represent great strides in the accessibility of serious computing power.  But what is all of that computational capability really accomplishing?  The Ars Technica article “Crowdsourcing the Cloud to Find Cures for Rare and ‘Orphaned’ Diseases” addresses one outlet for all of this potential.  Where Big Data takes advantage of the fact that we have so much storage space for vast amounts of data, Quantum Cures is exploring a cloud computing initiative.

Quantum Cures will use the same method pioneered by the University of California, Berkeley, whose SETI@home project utilizes “volunteer” computers to process information in the search for extraterrestrial life.  Quantum Cures will use Inverse Design software developed by Duke University and Microsoft to help process vast amounts of information and identify possible treatments for diseases that have fallen by the wayside.

To engineer a drug, the researchers look at proteins related to a disease and search for molecules that can potentially interact with them, using a quantum mechanics/molecular mechanics modeling system.  Lawrence Husick, co-founder of Quantum Cures, explained part of the process to Ars Technica. As Sean Gallagher reported, “Each instance of the software takes the quantum mechanical molecular model of the target protein and a candidate molecule and calculates the potential bonding energy between the two.” This process is repeated for millions of molecules, of which only a few pass the tests.
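The Inverse Design software itself isn’t published, but the screening step described above amounts to a brute-force loop: score each candidate’s bonding energy against the target and keep only the rare molecules that bind strongly enough. The Python sketch below is purely illustrative; the binding_energy function is a made-up stand-in for the expensive quantum-mechanics/molecular-mechanics calculation that each volunteer machine would actually perform.

```python
import random

def binding_energy(target_protein, candidate):
    """Stand-in for the real quantum-mechanics/molecular-mechanics scoring
    step. Here it just returns a plausible-looking energy in kcal/mol so
    the example runs; the real calculation is what volunteer CPUs donate
    their cycles to."""
    random.seed(hash((target_protein, candidate)))
    return random.gauss(-6.0, 1.5)

def screen(target_protein, candidates, cutoff=-12.0):
    """Keep only candidates whose predicted binding energy is strongly
    negative; everything else is discarded."""
    hits = []
    for molecule in candidates:
        energy = binding_energy(target_protein, molecule)
        if energy <= cutoff:
            hits.append((molecule, energy))
    return hits

# A million candidates go in; only a few dozen come back out as "hits"
# worth handing off for further study.
candidates = (f"molecule-{i}" for i in range(1_000_000))
hits = screen("target-protein-X", candidates)
print(f"{len(hits)} hits out of 1,000,000 screened")
```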

Quantum Cures has focused on diseases most pharmaceutical companies consider to be bad investments, including AIDS and malaria. The computing power and time involved in the process are immense, but when a nonprofit organization asks volunteers to donate their CPU time, it can all be accomplished for much less. “The software installs with user-level permissions and will allow individuals to set how much of their compute time is made available,” Husick told Ars Technica.
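That user-facing knob, letting volunteers cap how much compute time they give up, is easy to picture as a simple duty-cycle throttle. The sketch below is my own illustration of the idea, not Quantum Cures’ actual client: it runs donated work for a user-chosen fraction of each second and sleeps for the rest.

```python
import time

def run_donated_work(work_unit, cpu_share=0.25, slice_seconds=1.0):
    """Run `work_unit` repeatedly while consuming roughly `cpu_share` of
    one core: compute for a fraction of each time slice, then sleep for
    the remainder. A toy version of the 'how much compute time to donate'
    setting."""
    while True:
        slice_start = time.monotonic()
        budget = cpu_share * slice_seconds
        while time.monotonic() - slice_start < budget:
            work_unit()  # e.g., score one candidate molecule
        elapsed = time.monotonic() - slice_start
        time.sleep(max(0.0, slice_seconds - elapsed))
```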

Gray Box Model

Cloud computing has the advantage of being much more flexible than similar hardware-based services.  However, cloud services tend to fall behind when it comes to database-intensive applications due to limitations in hard drive speeds.  Updating data on a hard drive is the limiting factor for most computers nowadays, as the process is bound by the speed at which the drive’s read/write head can commit information to the disk.

MIT’s news article, “Making Cloud Computing More Efficient,” featuring researcher Barzan Mozafari, explains that “updating data stored on a hard drive is time-consuming, so most database servers will try to postpone that operation as long as they can, instead storing data modifications in the much faster – but volatile – main memory.”

At the SIGMOD conference, MIT researchers will reveal the algorithms behind a new system called DBSeer, which uses a “gray box” model that should help solve this problem.  DBSeer uses machine-learning techniques to predict the resource usage and needs of individual database-driven application servers.  Cloud computing servers are often divided up into multiple “virtual machines”: partitions of a server that are each allocated a set amount of processing power, memory, and so on.  DBSeer will hopefully be able to learn a database’s unique needs and idiosyncrasies so it can predict whether or not it is viable to allocate additional resources from other partitions to a task.  If a virtual machine is sitting idle, DBSeer will assess whether it is prudent for that machine to stay idle or to spend its allocated resources completing a task on another partition.
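The actual DBSeer models are described in the researchers’ SIGMOD work; the sketch below is only a loose illustration of the “gray box” idea. It assumes a simple structural form (disk I/O grows roughly linearly with a virtual machine’s transaction mix), fits the coefficients from hypothetical monitoring data, and then uses the fitted model to decide whether another workload can be packed onto the same disk.

```python
import numpy as np

# Hypothetical monitoring log: each row is one observation of a virtual
# machine's workload, [read transactions/sec, write transactions/sec],
# paired with the disk I/O (MB/s) that workload actually generated.
workload = np.array([[120.0,  10.0],
                     [300.0,  40.0],
                     [ 80.0, 200.0],
                     [500.0,  60.0],
                     [ 90.0, 350.0]])
disk_io = np.array([14.0, 38.0, 52.0, 61.0, 88.0])

# "Gray box": assume a simple structural form (I/O roughly linear in the
# transaction mix) and let the data determine the coefficients.
coeffs, *_ = np.linalg.lstsq(workload, disk_io, rcond=None)

def predicted_disk_io(reads_per_sec, writes_per_sec):
    """Predict MB/s of disk traffic for a given transaction mix."""
    return float(np.array([reads_per_sec, writes_per_sec]) @ coeffs)

# Before packing another tenant onto this machine, check whether the
# predicted combined I/O stays within the disk's budget.
DISK_BUDGET_MB_S = 100.0
current_tenant = predicted_disk_io(400, 100)
new_tenant = predicted_disk_io(150, 50)
print(current_tenant + new_tenant <= DISK_BUDGET_MB_S)
```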

Ultimately, this will allow servers to be much more efficient without further investment in hardware.  This trend, which follows from Big Data, is getting computer scientists to question whether there are more efficient ways to handle our problems with the hardware we already have.  It is all about maximizing productivity by questioning our own methods, rather than simply investing in more hardware.

Opportunities Ripe for Data

With today’s technology, we are able to record a large amount of data on our daily lives.  We leave a history of the websites we visit, our phones can track our location, and companies can see what we buy with our credit cards.  Big Data is about taking in this huge amount of data and turning it into useful information. The Wall Street Journal article “Leveraging Data to Drive Innovation” included comments from Chris Anderson, CEO of 3D Robotics: “We are drowning in data.  But we don’t have enough ability to analyze it.”

If we could come up with efficient ways to crawl through data, we would be better informed and able to make better decisions that benefit our health.  We could keep track of our vitals constantly, giving us a more complete picture of what we need to do in order to be healthy.  Alternatively, that information could be commercialized: insurance companies, the food industry, and other commercial interests would use it to refine their advertising.

By not taking advantage of these pieces of data, we are missing out on potential innovations that can move science and the economy forward.  If homeowners could easily identify what in their home was consuming the most energy, they could take steps towards reducing energy costs.  If automobile manufacturers could reduce emissions from cars by just 1%, that would create a significant decrease in pollution for some cities. It is just a matter of taking the physical world and putting it into measurable data.

Map-Based Visualizations

The trend in computing today is questioning every aspect of life and compiling data on it.  We can take massive volumes of data and identify deficiencies in the way we run our lives. Directions Magazine’s article, “New map-based visualization provides insight into Seattle commuting data,” is an excellent example of this.  IDV Solutions is a data visualization company that took information from the U.S. Census Bureau and other public sources to create an infographic that displays detailed information on the geography of Seattle’s commuting trends.  Infographics translate the data we take from the physical world into a structure that takes the shape of the physical world, allowing the reader to quickly spot trends and make decisions based on patterns we would not otherwise be able to see.
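The article doesn’t describe IDV Solutions’ tooling, but the basic recipe behind a map like this, joining public commute figures to geography and coloring the map by the numbers, is easy to sketch. The example below is a hypothetical illustration using geopandas; the file names and column names are assumptions, not the actual IDV Solutions or Census data sets.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical inputs: a tract-level shapefile for the Seattle area and a
# CSV of Census-style commute figures keyed by tract ID (assumed names).
tracts = gpd.read_file("seattle_tracts.shp")
commutes = pd.read_csv("seattle_commutes.csv")  # tract_id, workers, transit_commuters

# Join the numbers to the geography and derive the measure to display.
merged = tracts.merge(commutes, on="tract_id")
merged["transit_share"] = merged["transit_commuters"] / merged["workers"]

# Choropleth: darker tracts have a higher share of transit commuters.
ax = merged.plot(column="transit_share", cmap="viridis", legend=True)
ax.set_title("Share of commuters using public transit, by census tract")
ax.figure.savefig("seattle_transit_share.png", dpi=150)
```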

The implementation of these infographics can bring attention to the efficiency of the public transportation system in Seattle. It can also help determine where improvements can be made.  Even more exciting is the prospect of applying the same techniques to every major population center in the world.  A one percent improvement in efficiency can produce massive reductions in fuel expenditures and gas emissions when applied on a global scale.  The technology exists to repeat this process elsewhere; it just requires that someone act on it.  The application of this technology is an investment that can save time and resources for commuters, and it can open up other avenues of savings as well.  The aspects of life that can be improved are endless. We just need to figure out how to put the world in a database.

The Implications of Google Maps

The Atlantic’s article, “How Google Builds Its Maps – and What It Means for the Future of Everything,” isn’t an article about Google Maps as much as it is about Google’s approach to handling data.  As one would expect, there is a lot more to the inner structure of Google Maps than a map from a satellite image.  The directions that you get when you ask how to get from point A to point B stem from a long line of logic problems, logic problems that Google would only know the answers to if it had committed manpower to investigating the roads themselves.  The physical space that Google Maps lets you navigate is filled with data that must first be combed for consistency.  Sure, Google starts from other maps and existing data inputs, but its commitment to correct data has sent employees driving all across the world in order to build a massive dataset comparable to the physical world that we call reality.
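None of Google’s routing internals are shown in the article, but the “line of logic problems” behind a set of directions is, at bottom, a search over a road graph annotated with the facts Google’s drivers verified: which streets connect, which are one-way, and how long each segment takes. The toy sketch below uses a hypothetical four-intersection network and plain Dijkstra’s algorithm, a textbook approach rather than Google’s actual one.

```python
import heapq

# Hypothetical road network: intersection -> [(neighbor, minutes)].
# One-way streets simply appear in only one direction.
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}

def directions(start, goal):
    """Dijkstra's algorithm: find the cheapest chain of road segments."""
    queue = [(0, start, [start])]
    settled = {}
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in settled and settled[node] <= minutes:
            continue
        settled[node] = minutes
        for neighbor, cost in roads[node]:
            heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

print(directions("A", "D"))  # (8, ['A', 'C', 'B', 'D'])
```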

This commitment to accuracy doesn’t just affect Google’s map interfaces, however; this type of thorough and dedicated investment in technologies and applications gives Google a competitive edge in other markets. Google’s strength is the utilization of information, and investments in such areas are helping Google gain an edge over Apple in the growing battle over mobile phones. The Atlantic’s Alexis Madrigal posits that “geo data” in particular “will play an important role in the battle for mobile phones”. While geo data is becoming undeniably important in the mobile phone market, its uses and implementations have larger implications. Google’s large-scale data management techniques as a whole indicate great potential for future developments in the overall field of data analysis.

Connection Revolution

If you had asked the computer scientists of the late 1960s what connecting the world’s computers via the Internet would lead to, it’s unlikely that their theories would bear any resemblance to the interconnected world we live in today.  The Internet ushered in a digital age, and today we see the far-reaching effects of connecting computers together in every aspect of our daily lives.  Now, think tanks at GE are asking, “Why stop at computers?”

With the Big Data movement changing the way we look at raw information, GE is proposing connecting the vast multitudes of machinery to each other and to sensors.  In particular, GE plans to outfit a jet engine with a variety of different sensors to collect information that could potentially lead to improvements in design.  In fact, GE proposes that we could be doing this with all of our machines.  Though the technology needed to attach such sensors to all of our machines does currently exist, the initial buy-in is huge, and the investment is one that not many companies are likely to make in the current economic situation.  Though the economic climate does dictate choosing one’s investments wisely, Automation World’s article “Can the Industrial Internet Unleash the Next Industrial Revolution” supports the investment, saying that even if such sensors were able to discover a mere one percent improvement, that discovery could make a huge difference to a company’s bottom line; for example, a one percent increase in fuel efficiency for airline companies constitutes a $30 billion savings over the course of 15 years.  Optimizing these frequently overlooked aspects of business can create incremental improvements that propel our economy forward.  It’s simply a matter of applying big data concepts to things beyond traditional computing systems, and interpreting the data effectively.
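It is worth unpacking that figure with some quick arithmetic. The calculation below simply works backwards from the article’s numbers; the implied annual industry fuel bill is an inference from those figures, not a number quoted in the article.

```python
# Figures quoted in the article: a one percent fuel-efficiency gain is
# worth about $30 billion to airlines over 15 years.
total_savings = 30e9       # dollars over the period
years = 15
efficiency_gain = 0.01

savings_per_year = total_savings / years                    # $2 billion/year
implied_annual_fuel_spend = savings_per_year / efficiency_gain

print(f"${savings_per_year / 1e9:.0f}B saved per year")
print(f"~${implied_annual_fuel_spend / 1e9:.0f}B implied annual fuel spend")
```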

eBay and Big Data

eBay’s revenue is largely dependent on Big Data; by using, sorting, and filtering massive amounts of data, eBay makes sure that its customers see information catered to their individual interests.  I suppose when people deal with the concept of Big Data so much on a daily basis, they start to think of other ways to usefully implement Big Data concepts. Even questioning the productivity of its own servers displays some out-of-the-box thinking on eBay’s part.

This raises the question: if a savvy tech company like eBay is able to save millions of dollars by applying Big Data concepts to its own servers, how much could other companies that deal with large volumes of data save?  This is a relatively unexplored use of Big Data; Lisa Arthur mentions in her Forbes article “The Surprising Way eBay Used Big Data Analytics to Save Millions” that eBay’s success demonstrates the “critical importance of tearing down corporate silos”.  eBay’s initiative should be the start of a widespread scramble for companies to save money by using and improving their own infrastructure. What other points in the process can people gather data from?  How many millions of dollars could that save?  These questions are probably running through the heads of creative data analysts around the clock as they work to find new and innovative ways to put Big Data to work.

The Internet Association

The Internet Association will be the “first and only trade association representing Internet companies and the interests of their users,” President and CEO Michael Beckerman told Mashable.  The goal of the Internet Association is to work towards “political solutions that will push for protecting a free and open Internet” and, according to Beckerman, to defend the Internet from what its members view as excessive regulation. Mashable’s “Internet’s Biggest Companies Joining Forces to Lobby Washington” article reports that there are many more companies included in the Internet Association, but the entirety of the group’s members won’t be disclosed until September.

The question then becomes a matter of policy. Made up of companies whose monetary value is staggering, the Internet Association has the potential to carry considerable weight in politics; however, how will the organization decide what stance to hold when there is such a large group of people to protect? The organization’s creation is likely a response to proposed legislation like SOPA and PIPA, and having an association to lobby against such bills will help protect users of the Internet, but it also has the potential for negative impact.  Currently, no global association has the final say on the Internet. That means each country will individually decide what to do with the Internet, and in America, for better or for worse, it means that businesses with money will influence democracy.

So how will this affect the average Joe? It will depend on the policies that the Internet Association supports, but I am optimistic that this organization will fight for the users.  Technology companies are especially mindful of those who use their products, as the relationship between consumer and business is very close in the tech world, and I’m hopeful that the same will be true of the Internet Association.