Crowdsourcing and Online Job Creation

Crowdsourcing is quickly becoming a powerful productivity tool in the modern market. In many fields, crowdsourcing solutions are ultimately cheaper and more efficient than traditional methods. This trend of increased efficiency and decreased cost is particularly noticeable in the realm of the voice-over; in an article from Marketplace, “Could Crowdsourcing Talent Online Create Jobs?”, author David Brancaccio explores the effects that crowdsourcing has had on the voice-over industry and what those effects imply about the future of the market.

In particular, Brancaccio focuses on a company called VoiceBunny. VoiceBunny is an online service that allows clients and voice actors to connect with one another more easily; a “client offers a script online and people who know how to read aloud offer their services”. Additionally, the software itself helps clients find the most suitable talent for their specific script.

Ultimately, this service is significantly cheaper for companies because of the lack of overhead costs. Rather than hiring voice-over actors, perhaps renting studio space, and distributing the final recording themselves, companies are able to get a good-quality voice-over for “$11…plus a $2.20 service fee”. Because they work as independent contractors, the voice-over artists themselves are responsible for providing and maintaining their own equipment. Additionally, VoiceBunny offers podcasting services, through which clients can purchase spoken versions of their articles; these recordings can then be distributed by VoiceBunny or by the authors themselves.
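
Incidentally, the quoted numbers imply a flat service fee of 20 percent, since $2.20 is 20% of $11. Here is a minimal Python sketch of that pricing; the rate is inferred from the article’s figures rather than documented by VoiceBunny:

```python
# Pricing sketch: base price plus a flat service fee. The 20% rate is
# inferred from the article's numbers ($2.20 on $11), not a documented
# VoiceBunny rate.

def total_cost(base_price: float, fee_rate: float = 0.20) -> float:
    """Return what the client pays: the base price plus the service fee."""
    return base_price + base_price * fee_rate

print(f"${total_cost(11.00):.2f}")  # $13.20, matching $11 + $2.20
```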

Because of its low cost and ease of use, however, VoiceBunny has interesting implications for the future of the voice-over market. Though many voice-over artists still receive the majority of their income “from bigger gigs outside of [the VoiceBunny] service”, many of the jobs available on VoiceBunny are ones that, in the past, would have gone to local voice-over artists. What was once a local market has become a globalized one and, consequently, more people are able to offer and be paid for their services. Indeed, services like VoiceBunny have the potential to change how the market as a whole operates; it is not hard to imagine that the hosting duties of local radio shows may one day be crowdsourced as well.

Private Apps in the Public Sphere

In an article from CNET, “New York City Puts the Brakes on New Uber Cab-Hailing App”, author Steven Musil discusses the expansion of Uber, a company that creates “private car-summoning” apps. In particular, Uber is working on smartphone apps that would allow people not just to see where nearby taxis are, but to hail those cabs from the app itself. I have to say that the idea behind the app is rather clever; it’s a service that seems obvious in hindsight, just a means of efficiently hailing a taxi using modern technology. The difficulty of implementation seems to come in at the contractual level, though.
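
The article doesn’t describe the app’s internals, but the core feature it names, showing which cabs are nearest a rider, reduces to a proximity search over GPS coordinates. Below is a minimal Python sketch of that idea; the haversine formula and all of the data and names are illustrative assumptions, not Uber’s actual implementation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_cab(rider, cabs):
    """Return (cab_id, distance_km) for the closest available cab."""
    cab_id, coords = min(cabs.items(),
                         key=lambda kv: haversine_km(*rider, *kv[1]))
    return cab_id, haversine_km(*rider, *coords)

# Hypothetical data: a rider near Times Square and two cabs in Manhattan.
rider = (40.7580, -73.9855)
cabs = {"cab_17": (40.7614, -73.9776), "cab_42": (40.7484, -73.9857)}
print(nearest_cab(rider, cabs))  # cab_17, roughly 0.8 km away
```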

Specifically, it seems as though Uber has run up against quite a lot of bureaucratic red tape while trying to expand into viable markets. In particular, the company is having trouble expanding into cities that may already have their own cab-hailing apps in the works; in New York City, for example, Uber’s launch has been stalled because the Taxi & Limousine Commission prefers to meet and “work collaboratively with the livery, black car, and taxi industries to address their concerns about the impact of apps on existing business models”. Uber has had similar difficulties in other cities, such as Boston and Washington, D.C., though those were ultimately resolved. I can understand the need for limitations as new companies extend into new territories, but given that a “lack of national guidelines” resulted in a cease-and-desist letter, I have to wonder whether there aren’t opposing factors or special interests involved. However, progress is being made toward balancing municipal and private interests.

eBay and Big Data

eBay’s revenue is largely dependent on big data; by using, sorting, and filtering massive amounts of data, eBay makes sure that its customers see information catered to their individual interests.  I suppose that when people deal with the concept of Big Data so much on a daily basis, they start to think of other ways to usefully implement Big Data concepts. Even turning that analysis inward to question the productivity of its own servers displays some out-of-the-box thinking on eBay’s part.

This raises the question: if a savvy tech company like eBay can save millions of dollars by applying Big Data concepts to its own servers, how much could other companies that deal with large volumes of data save?  This is a relatively unexplored use of Big Data; Lisa Arthur mentions in her Forbes article “The Surprising Way eBay Used Big Data Analytics to Save Millions” that eBay’s success demonstrates the “critical importance of tearing down corporate silos”.  eBay’s initiative should be the start of a widespread scramble for companies to save money by measuring and improving their own infrastructure. What other points in the process can people gather data from?  How many millions of dollars could it save?  These questions are probably running through the heads of creative data analysts around the clock as they work to find new and innovative ways to put Big Data to work.
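
The article doesn’t detail how eBay audited its servers, but the general shape of such an analysis is straightforward: aggregate utilization measurements per machine and flag the ones that sit idle. A minimal sketch, with entirely hypothetical thresholds, costs, and hostnames:

```python
# Hypothetical audit: flag servers whose average utilization is low
# enough that consolidating them could save money. All thresholds,
# costs, and hostnames are invented for illustration.

ANNUAL_COST_PER_SERVER = 2500.0  # assumed yearly run cost (power, space)
IDLE_THRESHOLD = 0.10            # mean utilization below 10% = candidate

utilization = {                  # hypothetical CPU-utilization samples
    "web-01": [0.62, 0.71, 0.58],
    "batch-09": [0.03, 0.05, 0.02],
    "cache-04": [0.04, 0.06, 0.08],
}

idle = [host for host, samples in utilization.items()
        if sum(samples) / len(samples) < IDLE_THRESHOLD]

print(f"Consolidation candidates: {idle}")
print(f"Estimated annual savings: ${len(idle) * ANNUAL_COST_PER_SERVER:,.0f}")
```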

Big Data and the Damage of Droughts

Despite the awful droughts of the summer, many farmers have been able to avoid economic disaster with the help of big data; as discussed in Wired’s “Big Data Shows Hyperlocal Harshness of 2012 Drought”, crop insurance companies have been processing large amounts of weather data in many states and using it to accurately determine appropriate amounts of financial compensation for drought-damaged crops. In addition, by compiling the data, these companies are able to display a close-up, detailed view of the drought’s impact on America’s crop yields as a whole.

One company that seems to be taking a great interest in this modern face of insurance is The Climate Corporation, a six-year-old crop insurer based in San Francisco. By taking daily measurements of ambient temperature and soil moisture, the company can determine exactly how many days presented crop-damaging levels of stress and provide financial aid accordingly. In addition, the large amounts of data processed can be used to predict future temperatures, precipitation levels, and subsequent crop yields with unrivaled accuracy.

For example, The Climate Corporation tracked the number of days on which temperatures and soil moisture reached the “heat stress” and “wilting” points on one particular Oklahoma farm. For every day that the crops suffered these undesirable conditions, the farm’s owners were compensated a set amount of money per acre. In this case, the farm spent most of the summer in the heat stress zone and the entirety of July and August in the wilting zone; fortunately, these recordings ensured that the farm received the economic assistance it needed to avoid major losses.
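
The compensation scheme described here is essentially a threshold-counting calculation. Below is a minimal Python sketch of that logic; the specific thresholds and per-acre rates are hypothetical stand-ins, since the article does not give The Climate Corporation’s actual figures.

```python
# Threshold-counting sketch of the payout logic described above. The
# thresholds and per-acre rates are hypothetical stand-ins, not The
# Climate Corporation's actual figures.

HEAT_STRESS_F = 95.0     # assumed daytime-temperature threshold (deg F)
WILTING_MOISTURE = 0.10  # assumed soil-moisture fraction at wilting
RATE_HEAT = 2.00         # assumed $/acre per heat-stress day
RATE_WILT = 5.00         # assumed $/acre per wilting day

def payout(daily_readings, acres):
    """daily_readings: list of (temp_f, soil_moisture) pairs, one per day."""
    heat_days = sum(1 for temp, _ in daily_readings if temp >= HEAT_STRESS_F)
    wilt_days = sum(1 for _, moist in daily_readings if moist <= WILTING_MOISTURE)
    return acres * (heat_days * RATE_HEAT + wilt_days * RATE_WILT)

sample_days = [(98.0, 0.06), (101.0, 0.05), (94.0, 0.12)]
print(f"${payout(sample_days, acres=640):,.2f}")  # $8,960.00 for 640 acres
```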

While crop insurance data helps to secure farmers against major losses, The Climate Corporation claims that climate conditions could become so poor that crop insurance would no longer make economic sense. In that situation, farmers would have to either accept lower profit margins or raise their prices; over time, such difficulties could make farming unsustainable in some areas. Fortunately, that point has not yet arrived; in spite of the severity of this summer’s drought, The Climate Corporation states that it was not unprepared. CEO David Friedberg remarked, “It’s not like a 99.999-percent thing that we never accounted for… We can have a good sense of the range of uncertainty.”

As a resident of a state whose income is heavily crop-based, I have to admit to a good deal of relief; without protections like these, it’s probable that my own community would have been far more devastated by the hardships of this past summer than it was. Also, I really like the idea that, by analyzing all of this data, these crop insurance companies are going to be more capable of predicting problems like these in the future; that way, our farmers will have more of a chance to prepare for them, and we might be able to avoid the worst of what nature has to offer. Chalk one up for big data!

Crowdsourcing the Dictionary

In hopes of discovering and recording new and creative words, the staff at the UK’s Collins English Dictionary has begun crowdsourcing the entries in their dictionary. Collins, which added “crowdsourcing” to its own dictionary in 2009, is asking internet users to contribute their own words to this project. So far, there’s been a good response; “[in] the first two weeks of the initiative, there were 2,637 suggestions from more than 2,000 different users.”

While the Boston Globe’s article “Crowdsourcing the Dictionary” suggests that Collins’ project may initially seem similar in nature to online dictionaries such as Urban Dictionary, it soon becomes clear that the concept of dictionaries incorporating user-submitted content is not in fact a new one; the Oxford English Dictionary, for example, put out its first call for user submissions in 1879. Rather, it is the ease of submission and the degree of user influence that make the Collins English Dictionary unique. While Merriam-Webster has a similar word-submission project called Open Dictionary, which has received “nearly 20,000 suggestions from users since…2005”, the Merriam-Webster editors use it simply as research inspiration. In contrast, the Collins English Dictionary aims to integrate user submissions into its current online dictionary.
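
The practical difference between the two models can be made concrete: in a research-inspiration workflow, submissions only inform editors, while in an integration model, approved submissions actually enter the published dictionary. A minimal Python sketch of such a moderated pipeline, with an invented workflow and invented entries:

```python
# Illustration of an integration workflow: user submissions sit in a
# moderation queue, and only editor-approved entries join the published
# dictionary. The workflow and the entries are invented.

published = {"crowdsourcing": "obtaining work from a large online group"}

submissions = [
    {"word": "amazeballs", "definition": "(slang) extremely impressive"},
    {"word": "asdfgh", "definition": "keyboard mash"},
]

def moderate(entry, approve):
    """An editor's decision: approved entries enter the live dictionary."""
    if approve:
        published[entry["word"]] = entry["definition"]

moderate(submissions[0], approve=True)   # plausible coinage: accepted
moderate(submissions[1], approve=False)  # noise: rejected
print(sorted(published))  # ['amazeballs', 'crowdsourcing']
```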

This project has interesting implications for future projects both within and outside of the lexical world. While soliciting user input is not a new business model for dictionary publishers, the amount of moderated, user-generated content in Collins’ final product is unique. Though Collins’ model of user-based submissions improves on past models, there is still much progress that could be made in terms of user interactivity and influence. However, such limits on user control are arguably necessary if a certain level of quality is desired for the final product.

The Internet Association

The Internet Association will be the “first and only trade association representing Internet companies and the interests of their users,” President and CEO Michael Beckerman told Mashable.  The goal of the Internet Association is to work towards “political solutions that will push for protecting a free and open Internet” and, according to Beckerman, to defend the Internet from what its members view as excessive regulation. Mashable’s “Internet’s Biggest Companies Joining Forces to Lobby Washington” article reports that there are many more companies included in the Internet Association, but the entirety of the group’s members won’t be disclosed until September.

The question then becomes a matter of policy. Made up of companies whose monetary value is staggering, the Internet Association has the potential to carry considerable weight in politics; however, how will the organization decide what stance to take when there is such a large group of people to protect? The organization’s creation is likely a response to policies like SOPA and PIPA, and having an association to lobby against such policies will help protect users of the Internet, but it also has the potential for negative impact.  Currently, there is no global body with the final say on the Internet; each country decides individually how to govern it, and in America, for better or for worse, that means businesses with money will influence democracy.

So how will this affect the average Joe? It will depend on the policies that the Internet Association supports, but I am optimistic that this organization will fight for its users.  Technology companies are especially mindful of those who use their products, as the relationship between consumer and business is very close among tech companies, and I’m hopeful that the same will be true of the Internet Association.

Private/Public Partnerships in Broadband

Forbes’ “Bring on the Broadband with Private/Public Partnerships” discusses the nation’s changing interactions with broadband services and how the market is readjusting to accommodate customers’ new needs. I confess a certain bewilderment that a joint venture between municipal governments and private internet service providers has taken this long to come to fruition. For some time I have been reading about some small towns that, after deciding the services rendered by ISPs were not worth the relative cost, established their own public internet providers; in these success stories, the individual towns were able to use their citizens’ taxes to create infrastructures with much greater speeds, making internet access easy for whole population centers. In the wake of this, I would have thought that ISPs would be trying to prevent such action through “joint venture” and “compromise”.

I can see the value in working with cities to use their pre-existing fiber-optic networks to augment the capacity of internet services, especially where doing so would cut the cost to private companies of expanding their networks. However, this move still seems rather uninspired and even a bit shortsighted. In an age where Finland gets an average broadband speed of 22 Mbps at roughly $3.00 per Mbps, the 5 Mbps for a $300 installation fee and a $70/month charge that Google has put forward for Kansas City can hardly be considered advancement.
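
For comparison’s sake, the cited figures can be reduced to a common cost per megabit. The arithmetic below amortizes the $300 installation fee over an assumed 24-month contract; that contract length is my assumption, not a figure from the article.

```python
# Rough cost-per-megabit comparison of the cited figures. Amortizing
# the $300 installation fee over 24 months is an assumption; the
# article gives no contract length.

finland_per_mbps = 3.00                      # $/Mbps, as cited

install, monthly, mbps = 300.0, 70.0, 5.0    # cited Kansas City proposal
assumed_months = 24
kc_per_mbps = (monthly + install / assumed_months) / mbps

print(f"Finland:     ${finland_per_mbps:.2f} per Mbps")
print(f"Kansas City: ${kc_per_mbps:.2f} per Mbps")  # $16.50 per Mbps
```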

Personally, I am much more interested in efficiency than in competition. I can’t speak for the average consumer, but I can say that, as a nation, we have a great deal of catching up to do in this information age; we could benefit not so much from collaboration between companies and governments as from outright mergers of the two. (Unprecedented, perhaps, but so are most new and interesting things.) Empires these days seem to be powered by data; it behooves us as a country to consider building our networks for our own citizens, creating a stable national infrastructure, rather than simply expecting ISPs to do the same.

Free Versus Open in the World of Software

As discussed in Forbes’ “Free Versus Open: Does Open Source Software Matter in the Cloud Era?”, open source software has been a major player in the growth of internet-based companies and tech-savvy businesses. Despite this popularity, however, open sourcing may be losing its grip on the software market; the importance of the open source option is taking a back seat to the desire for quick and efficient access to the software and its capabilities—access commonly provided by free software. Although the terms “free” and “open source” are often used interchangeably, free software is marked by its ability to provide users expedient and easily attainable access to software utilities, while open source software is driven more by a desire to improve software from a practical standpoint.

As consumers’ desire for the accessibility of free software grows, the market for open source software becomes less and less relevant; although the idea behind open source software has a certain grass-roots, collaborative appeal, consumers are more likely to purchase products that showcase convenience over versatility. Open source isn’t necessarily even profitable among the techie market; even open source consumers tend to pay a great deal of money to proprietary companies like Apple. Proponents of open source software claim that it functions well as a loss leader in the age of internet-based companies, enticing new user traffic for the relevant company, but opponents argue that the open source model works more as a marketing ploy than as a functional business model.

As a not-particularly-tech-savvy person, I have to say that I’m a little torn on this subject. Like many consumers, I have no idea how to alter any of the open source programs I use; I just want them to work properly when I need them. On the other hand, I like the idea of the software I use being functional on multiple devices. I have an Android tablet, and I spend a lot of time waiting for Apple-based apps to be converted; open source software is much more open to adaptation than proprietary software, so I imagine that open sourcing improves my chances of getting some long-desired apps for Android in the near future. Overall, though, I’m your average consumer. I’m not going to check whether software is open source before I buy it; I’m going to check whether it works efficiently, and the software that passes that test is the software that gets my money.

The Geography of Twitter

As social networking continues to erode the barriers of geographical separation, researchers at the Oxford Internet Institute have created a data visualization that illustrates the “geography” of Twitter. These results, displayed in a treemap, were attained by collecting all georeferenced tweets posted between March 5th and March 13th of this year. That data was condensed into a randomly chosen 20% sample set, which was then spatially organized by country. In the treemap, countries are sorted by continent; the size of each country’s block indicates the number of tweets made from that country, and the variation in color reveals the proportion of georeferencing Twitter users in comparison to the total number of internet users in that country.
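
The two quantities the treemap encodes, tweets per country for block size and the georeferencing-user share for block color, come from a simple aggregation. A minimal Python sketch with made-up data:

```python
from collections import Counter, defaultdict

# Sketch of the aggregation behind the treemap: tweets per country set
# the block size; the share of georeferencing users among a country's
# internet users sets the color. All figures below are made up; the
# study itself used a random 20% sample of georeferenced tweets.

tweets = [("US", "u1"), ("US", "u2"), ("US", "u1"), ("BR", "u3"),
          ("GB", "u4"), ("ID", "u5"), ("BR", "u6")]  # (country, user id)

internet_users = {"US": 245e6, "BR": 88e6, "GB": 52e6, "ID": 55e6}

tweet_counts = Counter(country for country, _ in tweets)
users_by_country = defaultdict(set)
for country, user in tweets:
    users_by_country[country].add(user)

for country, n in tweet_counts.most_common():
    share = len(users_by_country[country]) / internet_users[country]
    print(f"{country}: {n} tweets, georeferencing-user share {share:.1e}")
```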

According to the data, the geography of tweets is nowhere near equally distributed; one or two countries are responsible for far more tweets than many of the others combined. It is interesting to note, however, that of the top six tweeting countries, only two are commonly considered centers of codified knowledge output: America and the United Kingdom took first and third, respectively, while Brazil, Indonesia, Mexico, and Malaysia filled out the rest of the list.

This globalization of information output could have a very real impact on the scope of knowledge distribution. Social media sites like Twitter provide a free outlet of expression for anyone capable of accessing the internet, so those who previously would have been unable to communicate their ideas globally can now make their voices heard around the world. Personally, I welcome this connection; thanks to social sites like Twitter and Facebook, I have been able to make friends all over the world. These people have provided me with cultural and social insights that I would never have gotten otherwise, and I don’t think I would be the person I am today without their input.

As the geography of information continues to change and grow with time, more research on this subject will definitely be necessary; still, I think that the results of this study are promising. More and more countries are getting in on the information trade, and their contributions are expanding our awareness of the internet’s transcultural significance; now that’s something to tweet about.

The Dawn of Data Companies

I remember that when I took my first programming class in high school, I felt as if an entire world of possibilities was at my fingertips; if there was ever a need or a problem to be solved, I was now more prepared to develop software to address it.  But that was quite a while ago, and the climate has shifted dramatically away from the familiar world of software and into the brave new world of data, which, unfortunately, buries my own personal graveyard of software ideas even deeper!

Nevertheless, this progression away from software toward data-based companies does make economic and business sense.  Forbes’ article “RIP ‘Software’ Companies; Hello ‘Data’ Companies” cites the shift toward cloud computing and the aggregation of data, which becomes more valuable as it grows in size.  Mike Hoskins is quoted in the article as saying, “[Is] Google a software company?  Is Facebook a software company?  They’re not, they’re data companies.  The value they and many other companies provide to the market is their ability to manage data and provide analysis.”

That statement makes it clear that our lives revolve around the distribution and processing of data.  Facebook profits from the data that users put into its systems, Google collects data from its searches, and Forbes processes business-related data and has writers discuss it.  Once big data sets became a big deal, we reached the point where collections of data themselves can provide profitable information to many types of companies.

Today we see a demand for the manipulation of data, with companies being paid to sell, analyze, process, and reevaluate it.  Multiple datasets are brought together so that companies can piece together the bigger picture, and patterns are being discerned that were previously unknown. Because of all of these new situations and opportunities, it is all the more necessary for companies to be able to process their new and massive data sets in efficient ways. Through the effective use of big data analysis, businesses can make better and faster decisions, because massive amounts of data are being collected and analyzed in ways never before possible.
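
The claim about combining datasets is easy to make concrete: joining two datasets on a shared key can surface a relationship that neither shows alone. A minimal Python sketch with purely hypothetical data:

```python
# Joining two hypothetical datasets on a shared key (the date) to
# surface a pattern neither shows alone. The data and the implied
# "temperature drives sales" relationship are invented.

daily_sales = {"2012-07-01": 120, "2012-07-02": 95, "2012-07-03": 210}
daily_temps = {"2012-07-01": 88, "2012-07-02": 91, "2012-07-03": 75}

joined = [(day, daily_sales[day], daily_temps[day])
          for day in sorted(daily_sales.keys() & daily_temps.keys())]

for day, sales, temp in joined:
    print(day, sales, temp)
# With more rows, the correlation between the two columns could be
# quantified (e.g., with statistics.correlation in Python 3.10+).
```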