Big Data Exchanges: Of Shopping Malls and the Law of Gravity

Working for a storage systems company, we are constantly looking at both the technical and the social/marketplace challenges to our business strategy. This thinking led EMC to coin “Cloud Meets Big Data” last year: EMC has been looking at the trends that “should” tip the balance toward real “Cloud Information Management,” as opposed to the “data management” that really dominates today’s practice.

There are a few truisms [incomplete list]:

  1. Big Data is Hard to Move = get the optimal [geo] location right the first time
  2. Corollary = move the function across the federated data
  3. Data Analytics are Context Sensitive = meta-data helps to align/select contexts for relevancy
  4. Many Facts are Relative to Context = declare the contexts of derived insight (provenance & the Scientific Method)
  5. Data is Multi-Latency & Needs Deterministic Support for Temporality = a key declarative information-architecture requirement
  6. Completeness of Information for Purpose (e.g. making a decision) = depends on the stuff I have and the stuff I get from others, but covers everything I need to decide.

I believe that 1) and 6) above point to an emerging need for Big Data Communities to arise in support of the requirements of the others, whether we talk about these as communities of interest or as Big Data Clouds. There is a very interesting analogy that I see in the way we humans act: namely, the shopping mall. Common wisdom points to the mall as providing improved shopping efficiency, but also, in the case of inward-facing malls, a controlled environment (think walled garden). I think that both efficiency, in the form of “one stop” shopping, and control are critical enablers in the information landscape.

[Slide: Big Data Mall] This slide from one of my presentations draws out the similarities between building a shopping mall and developing a big data community: understanding the demographics of the community (information needs, key values), planning the roads to get people in and out, and, of course, creating critical mass = the anchor store.

The interesting thing about critical mass is that it tends to center on a key [Gravitational] Force. Remember:

Force = Mass * Acceleration (the rate of change of velocity).

This means that in order to create communities and maximize force you need Mass [the size/scope/scale of information] and improving Velocity [the timeliness of information]. In terms of mass, truism #1 above, plus the sheer cost and limited bandwidth availability, makes moving 100TB of data hard and petabytes impractical. Similarly, a change in velocity does matter: whether you are trading algorithmically on the street (you have to be in Ft. Lee, NJ or Canary Wharf, London) or you are a physician treating a patient, the timeliness of access to emergent information is critical. So, correct or not, gravitational forces do act to geo-locate information.
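
To make the “hard to move” point concrete, here is a minimal back-of-the-envelope sketch (purely illustrative, assuming a fully utilized link with no protocol overhead, compression, or retries) of how long bulk transfers take at a few link speeds:

```python
# Back-of-the-envelope transfer times; illustrative only.

def transfer_time_days(data_terabytes: float, link_gbps: float) -> float:
    """Days needed to push data_terabytes over a link_gbps link."""
    bits = data_terabytes * 1e12 * 8      # TB -> bits (decimal TB)
    seconds = bits / (link_gbps * 1e9)    # bits / (bits per second)
    return seconds / 86_400               # seconds -> days

for size_tb in (100, 1_000):              # 100TB and 1PB
    for gbps in (1, 10, 40):
        print(f"{size_tb:>5} TB over {gbps:>2} Gb/s ~ "
              f"{transfer_time_days(size_tb, gbps):6.1f} days")
```

At 1 Gb/s, 100TB is roughly nine days of sustained transfer and a petabyte is roughly three months, which is why getting the [geo] location right the first time matters.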

Not to take my physics analogy too far, but Energy is also interesting. It could be looked at as the “activity” in a community. Energy has both kinetic and potential models, and in the case of the Internet, the relative connectedness of the information required for a decision can be viewed in terms of “potential”. Remember:

Ep (potential energy) = Mass x Gravity x Height (mgh)

In our case, Height could be looked at as the bandwidth between the N participating information sites, Mass as the total amount of information that needs to be processed, and Gravity as the decentralization of that information = the outer joins required for optimal processing. If I need to do a ton of outer joins across the Internet in order to get an answer, then I need to spend a lot of energy.
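
Purely as an illustration of how those terms multiply (the mappings and units below are mine, following the paragraph above, and carry no meaning outside the analogy):

```python
# Toy "data potential energy" following Ep = m x g x h above; illustrative only.
# Mass    -> total information that must be processed (TB)
# Gravity -> decentralization, proxied here by the number of cross-site outer joins
# Height  -> a bandwidth-derived factor between the N participating sites

def data_potential_energy(mass_tb: float, outer_joins: int, height_factor: float) -> float:
    return mass_tb * outer_joins * height_factor

# A decision needing 50TB of data, 12 outer joins across sites, and a height factor of 3
print(data_potential_energy(50, 12, 3.0))   # 1800.0 "energy units"
```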

So if malls were designed for optimal [human] energy efficiency, then big data malls could do exactly the same for data.

Greenplum, the “Big Data” Cloud Database

It has been a long time since EMC completed the acquisition of Greenplum, and we have been mighty busy. I’ve met with the biggest and the smallest of customers, and have heard literally dozens of feature/product requests. We’re truly listening: Hadoop, ETL, systemic management, deep BI/BA integrations, and improvements in multi-tenancy for governed derivative cubes and marts. Leave me a comment and tell me what you’re thinking. If you want me to keep it private, just put <private>text here</private> in the contents, and I’ll get back to you personally. Stay tuned, we’re up to something interesting.

The Greenplum “Big Data” Cloud Warehouse

The Data Warehouse space has been red hot lately. Everyone knows the top-tier players, as well as the emerging ones. The substantial issues have become the complexity of the scale/growth of enterprise analytics (every department needs one) and the increasing management burden that business data warehouses are placing on IT. Like the Wild West, a business technology selection is made for “local” reasons, and the more “global” concerns are left to fend for themselves. The trend toward physical appliances has only created islands of data, the ETL processes are ever more complex, and capital/opex efficiencies are ignored. Index/schema tuning has become a full-time job, distributed throughout the business. Lastly, these systems are hot because they are involved in the delivery of revenue… anyone looking at SARBOX compliance?

Today EMC announced its intent to acquire Greenplum Software of San Mateo, CA. Greenplum is a leading data warehousing company with a long history of exploiting the open-source Postgres codebase; it has done a substantial amount of work to take that codebase to a horizontal scale-out architecture, and has also focused on novel “polymorphic data storage,” which supports new ways to manage data persistence and provides deep structural optimizations, including row, column, and row+column at sub-table granularity*. To begin to make sense of EMC’s announcement around Greenplum, one must look at the trajectory of both EMC and Greenplum.
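
Before turning to those trajectories, here is a purely conceptual sketch (not Greenplum’s implementation) of the row-versus-column distinction that polymorphic storage plays with:

```python
# Conceptual only: the same three-row table laid out row-wise and column-wise.
# Row orientation favors whole-record reads and updates; column orientation
# favors scans and aggregates over a few columns. Polymorphic storage, as
# described above, lets that choice vary at sub-table granularity.

row_store = [
    {"order_id": 1, "region": "EMEA", "amount": 120.0},
    {"order_id": 2, "region": "AMER", "amount": 75.0},
    {"order_id": 3, "region": "EMEA", "amount": 40.0},
]

column_store = {
    "order_id": [1, 2, 3],
    "region":   ["EMEA", "AMER", "EMEA"],
    "amount":   [120.0, 75.0, 40.0],
}

# An aggregate over one column touches only that column in the column layout...
print(sum(column_store["amount"]))            # 235.0
# ...while the row layout must walk every record.
print(sum(r["amount"] for r in row_store))    # 235.0
```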

EMC, with its VMware/Microsoft and Cisco alliances and its recent announcements around VMAX and VPLEX, has made virtual storage a dynamically provisionable, multi-tenant, SLA/policy-driven element of the cloud triple (Compute, Network, Storage). But it’s one thing to just move virtual machines around seamlessly and provide consolidation and improved opex/capex: those are IT improvements. In my mind, “virtual data” is all about end-user (and maybe developer) efficiency… giving every group within the enterprise the ability to have its own data either federated to, or loaded into, a data platform, where it can be appropriately* shared with other enterprise users as well as with enterprise master data. The ability to “give and take” is key to improving data’s “local” value, and the ease with which this can be provisioned, managed, and of course analyzed defines an efficient “Big Data” Cloud (or Enterprise Data Cloud, in GP’s terms).

The Cloud Data Warehouse has some discrete functional requirements, namely the ability to:

  • create both materialized and non-materialized views of shared data… in storage we say snapshots
  • subscribe to a change queue, keeping these views appropriately up to date while remaining appropriately consistent (see the sketch after this list)
  • support the linking of external data via load, link, or link & index to accelerate associative value
  • support mixed-mode operation… writes do happen and will happen more frequently
  • scale linearly with the addition of resources, in both the delivery of throughput and the reduction of analytic latency
  • exploit the analyst’s natural language… whether SQL, MapReduce, or other higher-level programming languages
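
As a rough illustration of the change-queue requirement above, here is a minimal Python sketch (with hypothetical table and field names) of a subscriber keeping a tiny materialized aggregate up to date as change events arrive; a real warehouse would of course do this inside the engine with proper consistency guarantees:

```python
from collections import defaultdict
from queue import Queue

# Hypothetical change events: ("insert" | "delete", region, sales_amount)
change_queue: Queue = Queue()

# A tiny "materialized view": total sales per region, refreshed incrementally.
sales_by_region = defaultdict(float)

def apply_changes() -> None:
    """Drain the change queue and keep the aggregate view up to date."""
    while not change_queue.empty():
        op, region, amount = change_queue.get()
        if op == "insert":
            sales_by_region[region] += amount
        elif op == "delete":
            sales_by_region[region] -= amount

# Example: two inserts and a correcting delete arrive from the source system.
for event in [("insert", "EMEA", 120.0), ("insert", "AMER", 75.0),
              ("delete", "EMEA", 20.0)]:
    change_queue.put(event)

apply_changes()
print(dict(sales_by_region))   # {'EMEA': 100.0, 'AMER': 75.0}
```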

These functions drive some interesting architectural considerations:

  • Exploit Massively Parallel Processing (MPP) techniques for shared-minimal designs
  • Federate external data through schema & data discovery models, building appropriate links, indices, and loads for optimization & governed consistency
  • Minimize tight coupling of schemas through meta-data and derived transformations
  • Allow users to self-provision, self-manage, and self-tune through appropriately visible controls and metrics
    • This needs to include the systemic virtual infrastructure assets.
  • Manage hybrid storage structures within a single database/table space so that both ad-hoc and update workloads perform
  • Support push-down optimizations between the database cache and the storage cache/persistence for throughput and latency optimization
    • From my perspective, FAST = Fully Automated Storage Tiering might get some really interesting hints from the Greenplum polymorphic storage manager
Overall, the Virtual “Big Data” Cloud should be just as obvious an IT optimization as VDI and virtual servers are. The constraints are typically a bit different, as these data systems are among the most throughput-intensive (Big Data, Big Compute), and everyone understands the natural requirement to “move compute to the data” in these workloads. We believe that, through appropriate placement of function and appropriate policy-based controls, there is no reason why a VBDC cannot perform better in a virtual private cloud, or why the boundaries of physical appliances cannot be shed.
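
A rough sketch of why that matters (illustrative row counts and sizes, not measurements): compare the bytes that cross the network when raw rows are shipped to the compute tier versus when the aggregation is pushed down and only the result comes back.

```python
# Illustrative only: network traffic for "move data to compute" versus
# "move compute to the data" on a simple aggregation query.

ROWS = 2_000_000_000        # hypothetical fact-table row count
ROW_BYTES = 200             # hypothetical average row width
RESULT_BYTES = 64 * 1024    # a small aggregate result set

ship_data_gb = ROWS * ROW_BYTES / 1e9      # pull every row across the wire
ship_function_gb = RESULT_BYTES / 1e9      # push the aggregate down, return only results

print(f"ship the data     : {ship_data_gb:,.0f} GB over the network")
print(f"ship the function : {ship_function_gb:.6f} GB over the network")
```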

Share your data, exploit shared data, and exploit existing pooled resources to deliver analytic business intelligence; improve both your top line and your bottom line.


ETL & Hadoop/Map-Reduce… a match made in Orlando!

I’ve been thinking hard of late about the challenges associated with exploiting massively parallel Hadoop/Map-Reduce clusters for analytics. As most know, the NoSQL movement has been growing at a strong pace. What very few seem to want to talk about is how NoSQL can actually present an analytic query language. Yes, the xQL…

We all know that MR is great for limited-schema, high-cardinality data, but DWHs typically have stronger schemas and substantial dimensional data, not to mention normal forms. Today Pentaho Corporation has released capabilities into its BI suite that extend its ETL tool (Pentaho Data Integration, PDI) to support processes that exploit (read and write) Hadoop structures. In talking with James Dixon, their CTO, I learned that the next step is to support a richer set of analytic query languages.

Press Release: Pentaho… Analytics & MR

MR is well suited to simple query tasks, but analytic workloads make extensive use of meta-data and dimension tables to optimize analytic performance and consistency. In a simple tuple-store model (name-value pairs), this is a bit of a challenge, as is the availability of the structural meta-data that helps provide basic typing and vocabulary mapping to an appropriate dictionary. Some warehouse implementations, like Hive, leverage a meta-store to define basic primitive types, which are recursively composed through maps, lists, and vectors, and which further support inspectors/evaluators that perform basic predicate operations across these type models. This meta-data, whether co-located with or adjacent to the fact data, provides a valuable layer for query and analytics as we move from strongly typed, fully structured systems to late/lazy/loosely typed stores. It’s well known that many emerging DWH vendors (Aster Data, Greenplum, ParAccel, and Vertica) are listening to the NoSQL crowd, and it’s great to see the BI crowd begin to look at new ways to manage analytic information across the data landscape.
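
To ground the point about simple query tasks over a tuple store, here is a minimal map/reduce-style sketch in plain Python (toy records and hypothetical field names), including the kind of small dimension lookup that real analytic workloads lean on:

```python
from collections import defaultdict

# Toy fact records in a simple name-value (tuple-store) shape.
facts = [
    {"store_id": 1, "amount": 40.0},
    {"store_id": 2, "amount": 15.0},
    {"store_id": 1, "amount": 25.0},
]

# A tiny "dimension table"; in an MR-only world this mapping must be shipped
# to every mapper or handled in a separate join pass.
store_dim = {1: "US-East", 2: "EU-West"}

def map_phase(record):
    """Emit (region, amount) key-value pairs."""
    yield store_dim[record["store_id"]], record["amount"]

def reduce_phase(pairs):
    """Sum amounts per region."""
    totals = defaultdict(float)
    for region, amount in pairs:
        totals[region] += amount
    return dict(totals)

pairs = [kv for rec in facts for kv in map_phase(rec)]
print(reduce_phase(pairs))   # {'US-East': 65.0, 'EU-West': 15.0}
```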

Great job Pentaho team, and I look forward to discussing your analytic strategy!
