A Technical Look at How Parallel Processing Brings Vast New Capabilities to Large-Scale BI and Data Analysis

Internet-scale data gathering, swarms of sensor outputs, and content signals from the mobile device fabric -- as well as enterprises piling up ever more kinds of metadata to analyze -- have stretched traditional data-management models to the breaking point. Yet advances in parallel processing using multi-core chipsets have prompted new software approaches such as MapReduce that can handle these data chores at surprisingly low total cost.

What are the technical underpinnings that support the new demands being placed on, and by, extreme data sets? What economies of scale can we anticipate?

To provide a technical look at how parallelism, modern data infrastructure, and MapReduce technologies come together, BriefingsDirect's Dana Gardner spoke with Joe Hellerstein, professor of computer science at UC Berkeley; Robin Bloor, analyst at Hurwitz & Associates; and Luke Lonergan, CTO and co-founder at Greenplum.

Read a full transcript of the discussion at: http://briefingsdirect.blogspot.com/2009/01/technical-look-at-how-parallel.html. Sponsor: Greenplum.
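For readers unfamiliar with the MapReduce model mentioned above, here is a minimal single-process sketch of its three phases (map, shuffle, reduce) using a word-count task; this is an illustrative toy, not Greenplum's or any production implementation, and all function names are hypothetical:

```python
from collections import defaultdict

# Map phase: each input record is independently transformed into
# intermediate (key, value) pairs -- the step that parallelizes easily.
def map_phase(records):
    for record in records:
        for word in record.split():
            yield (word, 1)

# Shuffle phase: intermediate pairs are grouped by key, so that all
# values for one key land at the same reducer.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: each key's values are folded into a final result.
def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

records = ["big data big analysis", "parallel data processing"]
counts = reduce_phase(shuffle(map_phase(records)))
print(counts)  # e.g. {'big': 2, 'data': 2, 'analysis': 1, ...}
```

In a real cluster, the map and reduce calls run on many cores or machines at once, which is why the model pairs naturally with the multi-core hardware trends the discussion covers.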