Week 13

Data aggregation!

This is such a cool topic, if also insanely complicated for me specifically to try to figure out. Supposedly, five exabytes of data encompasses everything humanity produced up until 2003.

That number is going to climb swiftly. In fact, as covered in the intro to the white paper, it already has; the total seems to double again and again, on ever-shorter timescales. With the ability we now have to digitize, in high quality, almost everything that happens from a billion different angles, how do we mine all of that data for the important information?

We don’t. Currently. The processing power required would be insane. But I really like the idea that this could happen some day: taking our crowdsourced data collection and pulling all of that information together in a way that genuinely benefits us.

In the meantime… search engines! Selective processing! We can take a guesstimate of the data out there and work to provide a little thumbnail view of it. The problem is that we can now generate hundreds of times more data than any human being could ever process, so we really need to start generating ideas for how to handle it all.

1. What big issues do you think could be solved just by collecting and combining all the data that’s available out there, if we could do it in the right way?

2. Random guesstimate: How much data do you produce daily? Weekly? Yearly?
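(For what it’s worth, here’s how I’d attack question 2 myself: a rough back-of-envelope sketch in Python. Every number in it, the per-item sizes and how many of each I produce in a day, is an assumption I made up for illustration, not a measurement.)

```python
# Rough back-of-envelope estimate of personal daily data output.
# All counts and sizes below are illustrative assumptions, not measurements.

# Approximate size of one item, in bytes (assumed)
ITEM_SIZE = {
    "tweet": 300,               # a few hundred bytes of text plus metadata
    "email": 75 * 1024,         # ~75 KB with headers and quoted replies
    "photo": 3 * 1024 * 1024,   # ~3 MB smartphone photo
    "facebook_comment": 1024,   # ~1 KB
}

# How many of each I might produce per day (assumed)
DAILY_COUNT = {
    "tweet": 5,
    "email": 20,
    "photo": 4,
    "facebook_comment": 10,
}

def daily_bytes():
    """Sum the assumed per-item sizes times the assumed daily counts."""
    return sum(ITEM_SIZE[k] * DAILY_COUNT[k] for k in DAILY_COUNT)

if __name__ == "__main__":
    per_day = daily_bytes()
    print(f"Daily:  {per_day / 1024**2:.1f} MB")
    print(f"Weekly: {per_day * 7 / 1024**2:.1f} MB")
    print(f"Yearly: {per_day * 365 / 1024**3:.2f} GB")
```

Even with those fairly modest assumptions it works out to a few gigabytes a year, and that ignores all the data generated about me behind the scenes (server logs, backups, analytics).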


3 Responses to Week 13

  1. amandacbilly says:

    I think that, if researchers could combine and analyze all the studies that have been done, we could already have cures for all kinds of diseases, or at least really strong indications of what those cures will look like. In terms of the data _I_ produce… I have no clue. How big is a tweet? Or a Facebook comment? Your question did bring to mind an infographic I remember seeing on Mashable about the estimated data humans would produce in 2011: http://mashable.com/2011/06/28/data-infographic/. One of their researchers estimated that humans would create and replicate 1.8 zettabytes of data in the course of the year, or about 200 billion 2-hour movies, so many that, if a person were to sit down and watch those movies 24/7, it would take them 47 million years to finish. And really, that’s the problem. Those quantities are so unfathomably large that we can’t hope to process the information ourselves.

  2. Hoh. That Mashable infographic just made my day.

  3. That infographic is incredible. The numbers it referenced and that Amanda mentioned are beyond anything I can remotely comprehend. I, too, have no idea how much data I produce. But your question about what issues could be solved with current data is interesting. If there were ways to collect and analyze all that data, I think more than just diseases would be cured. More issues than we could come up with here. Consider Felix Baumgartner’s record-breaking jump last month. The data generated and collected for the scientific community alone was staggering, much less the data for social media, military, sports, and countless other avenues. I’m not sure all the data available will ever be thoroughly analyzed, but I do know all this data will outlive us.
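(A quick sanity check of the figures Amanda cites above, since the arithmetic is easy to run. The roughly 9 GB-per-movie size is my own assumption, picked only to see whether the infographic’s numbers hang together, and the result lands in the same ballpark as the 47 million years quoted.)

```python
# Sanity check of the cited figures:
# 1.8 zettabytes ~= 200 billion two-hour movies, ~47 million years to watch them all.
# The ~9 GB per movie is an assumed size for an HD two-hour film.

ZETTABYTE = 10**21                 # bytes
total_bytes = 1.8 * ZETTABYTE      # estimated data created/replicated in 2011
movie_bytes = 9 * 10**9            # assumed size of one two-hour HD movie (~9 GB)

movies = total_bytes / movie_bytes # how many such movies 1.8 ZB would hold
hours = movies * 2                 # two hours each, watched back to back
years = hours / 24 / 365

print(f"Movies: {movies / 1e9:.0f} billion")                    # ~200 billion
print(f"Years of nonstop viewing: {years / 1e6:.0f} million")   # ~46 million
```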
