Big Data Exchanges: Of Shopping Malls and the Law of Gravity

Working for a storage systems company, we constantly look at both the technical and the social/marketplace challenges to our business strategy. This thinking led EMC to coin "Cloud Meets Big Data" last year; EMC has been watching the trends that "should" tip the balance toward real "Cloud Information Management," as opposed to the "data management" that really dominates today's practice.

There are a few truisms (an incomplete list):

  1. Big Data is Hard to Move = get the optimal [geo] location right the first time
  2. Corollary = Move the Function, across Federated Data
  3. Data Analytics are Context Sensitive = meta-data helps to align/select contexts for relevancy
  4. Many Facts are Relative to Context = declare the contexts of derived insight (provenance & the Scientific Method)
  5. Data is Multi-Latency & needs Deterministic support for temporality = a key declarative information-architecture requirement
  6. Completeness of Information for a Purpose (e.g. making a decision) = dependent on the data I have plus the data I get from others: everything that I need to decide.

I believe that 1) and 6) above point to an emerging need for Big Data Communities to arise supporting the requirements of the others, whether we talk about these as communities of interest or as Big Data Clouds. There is an interesting analogy in the way we humans act: the shopping mall. Common wisdom points to the mall as providing improved shopping efficiency, but also, in the case of enclosed malls, a controlled environment (think walled garden). I think that both efficiency, in the form of "one stop," and control are critical enablers in the information landscape.

[Big Data Mall slide] This slide from one of my presentations maps the building of a shopping mall onto the development of a big data community: understanding the demographics of the community (information needs, key values), planning the roads to get in and out, and of course creating critical mass = the anchor store.

The interesting thing about critical mass is that it tends to center on a key [Gravitational] Force. Remember:

Force = Mass * Acceleration (change in velocity over time).

This means that in order to create communities and maximize force you need Mass [the size/scope/scale of information] and improving Velocity [the timeliness of information]. In terms of mass, truism #1 above, plus the sheer cost and limited availability of bandwidth, make moving 100TB of data hard and petabytes impracticable. Similarly, change in velocity matters: whether trading algorithmically on the street (you have to be in Ft. Lee, NJ or Canary Wharf, London) or treating a patient as a physician, timely access to emergent information is critical. So, correct or not, gravitational forces do act to geo-locate information.
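The "hard to move" claim is easy to check with back-of-envelope arithmetic. A minimal sketch, assuming an illustrative 1 Gbit/s link at 80% sustained utilization (both figures are assumptions for the example, not measurements):

```python
# Back-of-envelope: how long does it take to ship big data over the wire?
# The link speed and utilization below are illustrative assumptions.

def transfer_days(size_tb: float, link_gbps: float = 1.0, utilization: float = 0.8) -> float:
    """Days to move size_tb terabytes over a link_gbps gigabit/sec link."""
    size_bits = size_tb * 1e12 * 8                 # terabytes -> bits
    effective_bps = link_gbps * 1e9 * utilization  # usable bits/sec
    return size_bits / effective_bps / 86_400      # seconds -> days

print(f"100 TB: {transfer_days(100):.1f} days")   # ~11.6 days
print(f"1 PB:   {transfer_days(1000):.1f} days")  # ~115.7 days
```

Even with a dedicated fast link, petabyte-scale movement takes weeks to months, which is why placing the data (or moving the function to it) matters more than moving the data.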

Without taking my physics analogy too far, Energy is also interesting; it could be viewed as "activity" in a community. Energy comes in both kinetic and potential forms. In the case of the Internet, the relative connectedness of the information required for a decision can be viewed in terms of "potential". Remember:

Ep (potential energy) = Mass x force of Gravity x Height (mgh)

In our case, Height could be viewed as the bandwidth between N information participant sites, Mass as the total amount of information that must be processed, and Gravity as the decentralization of information = the outer joins required for optimal processing. If I need to do a ton of outer joins across the Internet in order to get an answer, then I need to spend a lot of energy.
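The analogy can be made concrete with a toy cost model. Everything here is an illustrative assumption mapped from the Ep = mgh reading above: Mass as data volume, Gravity as cross-site join count, and Height as inverse bandwidth.

```python
# Toy model of the post's "potential energy" analogy (Ep = m * g * h).
# m = data to process, g = degree of decentralization (cross-site outer
# joins), h = inverse bandwidth. All names and weights are assumptions.

def query_energy(data_gb: float, cross_site_joins: int, bandwidth_gbps: float) -> float:
    """Relative 'energy' (cost) of answering a query across federated sites."""
    height = 1.0 / bandwidth_gbps    # slower links = greater 'height' to climb
    gravity = 1 + cross_site_joins   # more cross-site joins = stronger pull apart
    return data_gb * gravity * height

# Co-located data, fast interconnect, no cross-site joins:
local = query_energy(data_gb=500, cross_site_joins=0, bandwidth_gbps=100)
# Same data scattered across the Internet, many outer joins:
scattered = query_energy(data_gb=500, cross_site_joins=9, bandwidth_gbps=1)
print(local, scattered)  # 5.0 vs 5000.0: three orders of magnitude apart
```

The point of the sketch is only the ratio: co-locating data in a "mall" collapses both the gravity and height terms.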

So if malls were designed for optimal [human] energy efficiency, then big data malls could do exactly the same for data.

Fallacies of Enterprise Information Management (part deux)…

With some hearty comments from Tom Maguire, I've been forced to adjust some of these fallacies:
1. Data quality is perfect – data is correct, complete and coherent across all enterprise contexts
   - People will remediate bad data – if inaccuracies are found (contrary to the axiom above), users will willingly and proactively make changes, and all users will agree with those changes
2. Relationships are known – the linkages between data entities are well known, hierarchical, navigable and everlasting
3. There is a singular master model that is explicitly and consistently factorable for all enterprise uses
   - One dictionary – there is a consistent dictionary with a well-agreed and complete set of meta-data supporting the modeled domain
   - Static model – the model is complete, and no changes will ever be necessary
4. Expectation of XA/2-phase transactionality – data exchanges will be ACIDly transactional, and based upon XA/2-phase transactional mechanisms
   - Transactions complete in a timely fashion and are not affected by Deutsch's fallacies
5. Idempotent data – there is one master copy of enterprise data, and application-specific "caches" are always synchronized and consistent
Working currently on more consistent "information exchanges," I've found these fallacies driving some specific architectural artifacts, including:
- the need for appropriately targeted abstractions providing consistent transformation to/from canonical forms
- the need to support similar meta-data/policy models and transformations in support of context bridges and securitization
- support for multi-master synchronization
- the need for distributed model governance, non-destructive model mutation and potentially late-modeled forms (though OWL/RDF brings some new challenges in transactional systems with respect to transitive closure)
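The first artifact, consistent transformation to and from canonical forms, can be sketched as a small exchange adapter. All type names and fields below are hypothetical, chosen only to illustrate the pattern:

```python
# Minimal sketch of an exchange adapter mapping an application-local
# record to/from a shared canonical form. Names/fields are hypothetical.

from dataclasses import dataclass

@dataclass
class CanonicalPatient:
    """The community's agreed canonical model for an exchange."""
    patient_id: str
    name: str
    unit: str  # measurement context declared explicitly, not assumed

def to_canonical(local: dict) -> CanonicalPatient:
    """Lift an application-specific record into the canonical form."""
    return CanonicalPatient(
        patient_id=str(local["id"]),
        name=local["full_name"],
        unit=local.get("units", "SI"),  # default context made explicit
    )

def from_canonical(c: CanonicalPatient) -> dict:
    """Project the canonical form back into the local schema."""
    return {"id": c.patient_id, "full_name": c.name, "units": c.unit}

rec = {"id": 42, "full_name": "Ada"}
assert from_canonical(to_canonical(rec))["full_name"] == "Ada"
```

The design point is that each participant writes adapters only to the canonical form, turning an N-squared mesh of pairwise translations into N adapters, which is what makes a federated "exchange" tractable.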