Private Apps in the Public Sphere

In the CNET article “New York City Puts the Brakes on New Uber Cab-Hailing App”, author Steven Musil discusses the expansion of Uber, a company that creates “private car-summoning” apps. In particular, Uber is working on smartphone apps that would let people not only see where nearby taxis are but actually hail those cabs from the app itself. The idea behind the app is creative in its simplicity: it is just a means of hailing a taxi efficiently using modern technology. The difficulty of implementation, though, seems to come in at the contractual level.

Specifically, it seems as though Uber has come up against quite a lot of bureaucratic red tape while trying to expand into viable markets. In particular, the company has had trouble expanding into cities that may already have their own cab-hailing apps in the works; in New York City, for example, Uber may have been prevented from operating because the Taxi & Limousine Commission prefers to meet and “work collaboratively with the livery, black car, and taxi industries to address their concerns about the impact of apps on existing business models”. Uber faced similar difficulties in other cities such as Boston and Washington D.C., though those were ultimately resolved. I can understand the need for limitations as new companies extend into new territories, but given that a “lack of national guidelines” resulted in a cease-and-desist letter, I have to wonder whether there aren’t opposing factors or special interests involved. However, progress is being made toward balancing municipal and private interests.

eBay and Big Data

eBay’s revenue is largely dependent on big data; by collecting, sorting, and filtering massive amounts of data, eBay makes sure that its customers see information catered to their individual interests. I suppose that when people deal with the concept of Big Data so much on a daily basis, they start to think of other ways to usefully apply it. Even questioning the productivity of their own servers shows some outside-the-box thinking on eBay’s part.

This raises the question: if a savvy tech company like eBay can save millions of dollars by applying Big Data concepts to its own servers, how much could other companies that deal with large volumes of data save? This is a relatively unexplored use of Big Data; Lisa Arthur notes in her Forbes article “The Surprising Way eBay Used Big Data Analytics to Save Millions” that eBay’s success demonstrates the “critical importance of tearing down corporate silos”. eBay’s initiative should be the start of a widespread scramble for companies to save money by using and improving their own infrastructure. What other points in the process can people gather data from? How many millions of dollars could doing so save? These questions are probably running through the heads of creative data analysts around the clock as they work to find new and innovative ways to put Big Data to work.

Big Data and the Damage of Droughts

Despite the awful droughts of the summer, many farmers have been able to avoid economic disaster with the help of big data; as discussed in Wired’s “Big Data Shows Hyperlocal Harshness of 2012 Drought”, crop insurance companies have been processing large amounts of weather data in many states and using it to accurately determine appropriate amounts of financial compensation for drought-damaged crops. In addition, by compiling the data, these companies are able to display a close-up, detailed view of the drought’s impact on America’s crop yields as a whole.

One insurance company taking a great interest in this modern face of insurance is The Climate Corporation, a six-year-old crop insurance company based in San Francisco. By taking daily measurements of ambient temperature and soil moisture, the company can determine exactly how many days presented crop-damaging levels of stress and provide financial aid accordingly. In addition, the large amounts of data it processes can be used to predict future temperatures, precipitation levels, and subsequent crop yields with unrivaled accuracy.

For example, The Climate Corporation tracked the number of days in which temperatures and soil moisture reached the “heat stress” and “wilting” points in one particular Oklahoma farm. For every day that the crops suffered these undesirable conditions, the farm’s owners were compensated by a certain amount of money per acre. In this case, the farm spent most of the summer in the heat stress zone and the entirety of July and August in the wilting zone; fortunately, these recordings ensured that the farm received the economic assistance it needed in order to avoid major losses.
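The payout logic described above, a fixed amount per acre for each day a field spends past a stress threshold, can be sketched roughly as follows. The thresholds, dollar rates, and tiering here are hypothetical placeholders of my own, since the article does not disclose The Climate Corporation’s actual formula:

```python
# Hypothetical sketch of a per-acre drought compensation calculation.
# All thresholds and rates below are illustrative assumptions, not
# The Climate Corporation's real figures.

HEAT_STRESS_TEMP_F = 95.0   # assumed daily-high temperature for "heat stress"
WILTING_MOISTURE = 0.10     # assumed soil-moisture fraction for "wilting"

HEAT_STRESS_RATE = 2.0      # assumed payout: $ per acre per heat-stress day
WILTING_RATE = 5.0          # assumed payout: $ per acre per wilting day

def compensation(daily_readings, acres):
    """Sum a per-acre payout for each day that crossed a stress threshold.

    daily_readings: list of (high_temp_f, soil_moisture) tuples, one per day.
    """
    total = 0.0
    for high_temp, moisture in daily_readings:
        if moisture <= WILTING_MOISTURE:
            # Wilting is the more severe condition, so it supersedes heat stress.
            total += WILTING_RATE * acres
        elif high_temp >= HEAT_STRESS_TEMP_F:
            total += HEAT_STRESS_RATE * acres
    return total

# Three example days: one wilting day, one heat-stress day, one normal day,
# for a hypothetical 160-acre farm.
readings = [(101.0, 0.08), (97.0, 0.20), (88.0, 0.30)]
print(compensation(readings, acres=160))  # → 1120.0
```

Under a scheme like this, a farm that spent all of July and August in the wilting zone would accrue the higher rate for roughly sixty straight days, which is how the season-long stress the article describes translates into meaningful financial aid.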

While the crop insurance data helps secure the business against major losses, The Climate Corporation acknowledges that climate conditions could become so poor that crop insurance would no longer make economic sense. In that situation, farmers would have to either accept lower profit margins or raise their prices; over time, such difficulties could make farming unsustainable in some areas. Fortunately, that point has not yet arrived; despite the severity of this summer’s drought, The Climate Corporation was not caught off guard. CEO David Friedberg remarked, “It’s not like a 99.999-percent thing that we never accounted for… We can have a good sense of the range of uncertainty.”

As a resident of a state whose income is heavily crop-based, I have to admit to a good deal of relief; without protections like these, my own community would probably have been devastated far more by the hardships of this past summer than it was. I also really like the idea that, by analyzing all of this data, these crop insurance companies will become better at predicting problems like these in the future; that way, our farmers will have more of a chance to prepare, and we might avoid the worst of what nature has to offer. Chalk one up for big data!

Crowdsourcing the Dictionary

In hopes of discovering and recording new and creative words, the staff at the UK’s Collins English Dictionary has begun crowdsourcing the entries in their dictionary. Collins, which added “crowdsourcing” to its own dictionary in 2009, is asking internet users to contribute their own words to this project. So far, there’s been a good response; “[in] the first two weeks of the initiative, there were 2,637 suggestions from more than 2,000 different users.”

While the Boston Globe’s article “Crowdsourcing the Dictionary” suggests that Collins’ project may initially seem similar in nature to online dictionaries such as Urban Dictionary, it soon becomes clear that the concept of dictionaries incorporating user-submitted content is not in fact a new one; the Oxford English Dictionary, for example, put out its first call for user submissions in 1879. Rather, it is the ease of submission to and user influence on the dictionary that makes the Collins English Dictionary unique. While Merriam-Webster has a similar word-submission project called Open Dictionary, which has received “nearly 20,000 suggestions from users since…2005”, the Merriam-Webster editors use it simply as research inspiration. In contrast, the Collins English Dictionary aims to integrate user submissions into its current online dictionary.

This project has interesting implications for future projects both within and outside the lexical world. While soliciting user input is not a new business model for dictionary publishers, the amount of moderated, user-generated content in Collins’ final product is unique. Though Collins’ model of user submissions improves on past models, there is still progress to be made in terms of user interactivity and influence. Such limits, however, are necessary if a specific level of quality is desired for the final product.